In my case it takes 3 to 4 days; most of that time goes to library creation and fine-tuning the outputs.
Thanks and Regards
Many thanks~. I mean the Valor NPI runtime only, not the duration of the overall work process. There are some workarounds to reduce the runtime, such as running contour cleaning, but they are not always applicable.
Oops, the question confuses me... In my case there is also an issue when running a complicated board; sometimes it shows a memory allocation problem. Do you mean to disable some unwanted checks in the design to speed up the process?
Thanks and Regards
No, I am just asking about the longest Valor NPI run time you have ever seen during your daily work. It's only about Valor NPI's performance.
Below is a snapshot of Valor NPI while I am translating an Allegro database for a 38-layer backplane board. I guess it will take 90 minutes for the translation to finish, and the fab power-and-ground check should take a very long time to complete. Based on my experience, you may speed up the process by running a contour-clean operation before running the fab power-and-ground check.
I will update the status after this work gets done later.
Okay, I have never done such a complicated board, but there is a chance one will come in the future. As you said, please share the information after fixing these issues.
Thanks and Regards
Have you made comparisons between 32-bit and 64-bit processing, using the latest version (v9.4)? I would recommend trying that also.
Valor NPI 9.4 does solve the memory allocation limitation of the 32-bit version, and we have seen some speedups on normal designs. However, it still needs a very long time to run some of the checks on large designs, and we found that enlarging/shape netlist generation consumed most of the total time. I suspect it is caused by tiny slivers.
After running several test vehicles, it seems confirmed that very tiny slivers are the cause of Valor NPI needing so much time to enlarge plane layers.
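To illustrate why tiny slivers can blow up the plane-enlarge step, here is a small conceptual Python sketch. This is not Valor NPI code, and the polygons and numbers are made up; the only assumption it demonstrates is that contour operations such as enlarging scale with the number of contour vertices, and every sliver notch adds vertices:

```python
# Conceptual illustration only -- not Valor NPI internals.
# A clean plane contour vs. the same contour riddled with needle-like slivers.

def clean_plane():
    """A simple rectangular plane outline: 4 vertices."""
    return [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]

def plane_with_slivers(n):
    """Same rectangle, but the bottom edge carries n tiny sliver notches.
    Each notch adds two extra vertices to the contour."""
    pts = [(0.0, 0.0)]
    step = 100.0 / n
    for i in range(n):
        x = i * step
        pts.append((x + step / 2, 0.001))  # needle tip, barely off the edge
        pts.append((x + step, 0.0))
    pts.append((100.0, 100.0))
    pts.append((0.0, 100.0))
    return pts

clean = clean_plane()
slivered = plane_with_slivers(5000)
print(len(clean), len(slivered))  # the slivered contour has thousands of vertices

# Enlarging (offsetting) a contour touches every vertex at least once, so even
# a linear-time pass over the slivered contour does thousands of times the work,
# and real offset/netlist algorithms are typically super-linear. This is why a
# contour-clean pass, which removes such slivers, can shorten the runtime.
```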
There are a number of factors that can contribute to the length of time any analysis takes to complete.
First and foremost is the method used in the construction of the product model. Contributing factors in this area include the actual number of board layers, multiple drill layers (including blind, buried, and backdrill), and layers full of drawn surface regions; these are examples that can impact the amount of resources the analysis will require.
Following that is the care taken in creating the ERF models used during analysis. There are cases where a user who does not understand the principles behind ERF modeling creates a condition resulting in excessive analysis time, without considering whether the results are as intended. As an example, there are many parameters that determine the search radius the analysis should use. Reviewing Signal Layer Analysis, one will find a spacing parameter. This parameter value determines, for each feature on a layer, the search radius used to establish a measurement. Please note that I did not say to find a spacing issue, but to establish a measurement. Then there are ERF model ranges that divide the measurements into the traditional red, yellow, green, and blue ranges.
There have been cases where a user set such a high spacing parameter that the majority of the established measurements fell in the green or, more likely, the blue range. Consider the impact on system performance in this situation. If the majority of the measurements being established are green, or even worse blue, then consider the vast amount of resources consumed, not only in execution time but in memory as well. Depending on the system configuration, I have seen hundreds of thousands of results in the blue category and a trivial number in the really valid ranges. What I hope to convey is that understanding and choosing valid ERF modeling content will certainly impact performance.
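The effect described above can be sketched in a few lines of Python. This is purely a toy model, not Valor NPI's algorithm: it assumes features are points and that every pair within the search radius produces one stored measurement. The point it makes is that the number of results grows roughly with the square of the search radius, so an oversized spacing parameter floods the run with green/blue results:

```python
import math
import random

random.seed(42)

# Hypothetical stand-in for features on a signal layer: 800 random points
# on a 100 x 100 board (illustration only).
features = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(800)]

def measurement_count(search_radius):
    """Every pair of features closer than the search radius yields one stored
    spacing measurement -- green/blue results included, not just violations."""
    count = 0
    for i, (x1, y1) in enumerate(features):
        for (x2, y2) in features[i + 1:]:
            if math.hypot(x2 - x1, y2 - y1) <= search_radius:
                count += 1
    return count

for r in (1, 5, 10):
    print(r, measurement_count(r))
# The counts grow roughly with the square of the radius: a spacing parameter
# set far beyond any real design concern multiplies both runtime and the
# memory needed to hold all those low-value results.
```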
In my experience, the last area that can impact performance is the system itself, or the bit size of the application. While I know it is important and will certainly impact performance in all aspects, I suggest that all this does is provide more resources to manage the content of the product model, and it then masks possible ERF model parameter inefficiencies rather than really improving the analysis process. The reason I believe this is that I have seen product models that individuals indicated would not complete analysis with Valor NPI, and I found that improving the ERF models in use enabled the analysis to complete. While that is certainly not true all the time, a fair amount of the time this is the case.
In your entry it seems that the analysis you are referencing is Netlist Analysis. In this particular case there is little you can really do to impact performance other than review the product model being presented to Valor NPI, and this is the step you have taken, as I see you are already reporting improvement based on that effort. If you believe you have a specific case that we should look into, please contact Mentor support and file a service request. With the service request, be sure to provide the product model. There are times when conditions found in the graphical representation of a layer result in the application performing additional operations that we could possibly improve upon. In order to do so, Mentor's engineering department would require the product model for review.
Definitely, it's not about the ERF and settings, and it doesn't happen on most design data, only on the largest design. I have created multiple artificial test vehicles to validate my guess about the reason for the long runtime, and I am nearly at a conclusion. Once I have collected some feedback from other users who are dealing with big data, I will report back to the Mentor engineering team.
I'm afraid your model is so huge that rarely can a design exceed it. For the board that will consume 8-12 hours, how many pins does it have? As I see it, the pin count also impacts the runtime. If I remember correctly, there is a time-consumption formula in the help file; you can try to learn something from it. I don't think I can help much on this, sorry for that. Hoping you share the solution once you find one.
Additionally, I agree with Max to some extent; you may try tuning some parameters in the ERF, like the search radius. Some parameters are very time-consuming despite having low importance, or even no use. Wishing you good luck.
Are data optimization techniques implemented on the input design, whether ODB++ or Gerbers?
Typically, if a design contains drawn-mode pads and raster-mode plane data on the plane layers, this kind of time consumption is likely to occur. If data optimization has not been applied before running the netlist or plane-layer checks, try optimizing the data to reduce the check times. But make sure not to do any data optimization before netlist verification; there is a chance of creating electrical-connectivity issues.
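A toy cost model makes the intuition concrete. This is an assumption for illustration, not Valor NPI's actual algorithm: suppose a plane-vs-pads check does work proportional to the number of plane primitives times the number of pads. A raster/drawn plane stored as tens of thousands of stroke primitives then costs vastly more than the same copper stored as one contourized surface:

```python
# Illustrative cost model only -- hypothetical numbers, not measured data.
# Assumption: a plane check visits each plane primitive against each pad,
# so its work grows with (primitives x pads).

def check_work(plane_primitives, pads):
    """Naive pairwise work estimate for a plane-vs-pads check."""
    return plane_primitives * pads

raw = check_work(plane_primitives=50_000, pads=3_000)   # drawn/raster plane
optimized = check_work(plane_primitives=1, pads=3_000)  # one merged surface

print(raw // optimized)  # speedup factor under this toy model: 50000
```

Under this (admittedly crude) model, collapsing the drawn strokes into a single surface removes the dominant factor entirely, which is consistent with the advice to optimize the data before the plane-layer checks.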