Increase the total number of bits to simulate to 10 million.
For why it's difficult to get a very low BER, below 1e-14, you may be interested in reading a paper from DesignCon 2011: "Predicting BER to Low Probabilities: Validation of a New Analysis Methodology."
Thanks for your quick reply and solution. I just wondered how a 12 million (12e6) bit simulation generated 1e-12?
Thanks for sharing the relevant document on BER.
One more doubt on IBIS-AMI simulation, as I am new to IBIS-AMI.
I understand that the .dll files are for signal processing and the .ami files are for parameter passing.
I would like to know about the package models on the receiver side, i.e. for a differential signal, why do we use an .s2p package (J2, snapshot attached)?
(Attachment: IBIS_AMI example.bmp, 2.0 MB)
The idea was to show the probabilities where the results could be trusted. If one has to inspect lower probabilities on the BER plot, it can be done by performing a longer simulation.
The floor level (shown in solid dark blue) is set by considering the total number of simulated bits; the samples are converted into a finite number of discrete logarithmic intervals. Normally, the floor is estimated as 1e-3 times the inverse of the number of simulated bits, and then rounded downwards to the nearest decade. For example, with 9e6 bits simulated, this gives 1e-3/9e6 ≈ 0.11e-9, which is
then "rounded" to 1e-10.
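As a sanity check, the floor rule described above can be sketched in a few lines of Python (the function name is mine, not a HyperLynx API):

```python
import math

def ber_floor(n_bits):
    """BER floor as described above: 1e-3 times the inverse of the
    number of simulated bits, rounded down to the nearest decade."""
    raw = 1e-3 / n_bits
    return 10.0 ** math.floor(math.log10(raw))

print(ber_floor(9e6))   # 1e-3/9e6 ~= 1.1e-10 -> floor is 1e-10
```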
Thanks, Weston, for the detailed explanation of how the BER is generated. I find this community site very useful and quick.
One more clarification on this issue: let us assume my simulation takes 2 minutes for 1e-10 BER, and I want to simulate down to 1e-15.
Does it take (1e-10/1e-15) × 2 minutes, or less? Basically, I need to meet the OBSAI RP3 standard for an interface, which calls for a 1e-15 BER.
To me this looks like a long time to simulate. If HyperLynx takes that long, what is the workaround?
That would be a good approximation of the time for the simulation to run. It's not exact, mostly because of the rounding of the number of bits to simulate.
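As a rough back-of-the-envelope check (my own arithmetic, not a HyperLynx feature), the scaling works out like this:

```python
# Runtime scales roughly with the number of simulated bits, which in turn
# scales inversely with the target BER floor (per the rule discussed above).
current_ber = 1e-10
target_ber = 1e-15
current_minutes = 2.0

scale = current_ber / target_ber          # 1e5x more bits needed
est_minutes = current_minutes * scale     # ~200,000 minutes
est_days = est_minutes / (60 * 24)        # ~139 days
print(est_minutes, est_days)
```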
This is an old post, but I am also running into this issue and need some clarification.
If I try to measure 10^-20 on the bench with a real signal, I have to wait a really long time (or test a lot of samples in parallel). At 10Gbps and a BER of 10^-20, I can expect roughly one error every 300 years.
The reason for those rare errors, at least within the framework of how we characterize these things, is an extreme outlier in our Gaussian random jitter, maybe 10 sigma (just guessing).
On the other hand, in a simulation environment, if there is no random jitter, these outliers will not exist. The eye should not look more closed after simulating 10 million bits vs. 100 million bits; we have just run the same 2^23 pattern a bunch more times through the same channel, and the results are deterministic and bounded. Of course, the longer the pattern and the longer the impulse response of the channel, the more bits we need to simulate to reach this deterministic boundary (OK, the impulse response is theoretically infinite, but in a sim environment it is bounded).
Once I specify random jitter, shouldn't a simulator be able to convolve it into the eye without the need for more bits?
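For what it's worth, the convolution idea can be sketched numerically; this is purely illustrative (my own toy numbers, not HyperLynx's algorithm):

```python
import numpy as np

samples = 1000                    # sampling phases across one UI
t = np.arange(samples) / samples

# Deterministic (bounded) edge histogram from a short simulation: in this
# toy example every crossing lands within +/-0.1 UI of the nominal edge.
det_pdf = np.where((t < 0.1) | (t > 0.9), 1.0, 0.0)
det_pdf /= det_pdf.sum()

# Gaussian RJ PDF, sigma = 0.03 UI, on a grid centered in the window
sigma = 0.03
tc = (np.arange(samples) - samples // 2) / samples
rj_pdf = np.exp(-0.5 * (tc / sigma) ** 2)
rj_pdf /= rj_pdf.sum()

# Total jitter PDF: deterministic histogram convolved with the RJ PDF.
# Its Gaussian tails reach far below the simulated-bit floor without
# simulating a single extra bit.
total_pdf = np.convolve(det_pdf, rj_pdf, mode="same")

print(total_pdf[samples // 2])    # probability density at the eye center
```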
HyperLynx seems to do exactly this if I add jitter to the RX AMI. If I add 0.03UI of RJ, then with only 1 million bits simulated I get an eye and bathtub curves characterized nicely down to 10^-20 at the center of the eye.
However, if I add the same RJ to the TX AMI file, it does not do this. I have to keep increasing the number of bits simulated to show lower BER: 1 million bits shows 10^-10 in the center of the eye, 10 million bits shows 10^-12. I need 10^-15, which would take unacceptable simulation time. I have tried to fake it out by putting my 0.03UI of jitter on the TX and a very small 0.0001UI on the RX. That gives me 10^-20 at the eye center, but the eye is still completely open; only the 0.0001UI got extrapolated to the 10^-20 BER. It's ignoring the jitter on the TX side.
My *problem* is that I need to simulate jitter originating on the TX side. I have RJ/DJ values for my transmitting transceiver, and I need to see the eye closure at the input of the receiver, excluding receiver aperture jitter.
So, to make a long story short: why does adding RJ to the RX vs. the TX AMI function behave differently? Can I get it to characterize down to 10^-15 no matter where the jitter is applied?
This is a very interesting question. It would be very nice to hear what HyperLynx Experts will say.
In the meantime, I suspect there is a jitter transfer/tolerance issue when you add jitter to the Tx, because part of the Tx jitter will be eliminated by the Rx CDR. So it is probably not a simple convolution in this case. HyperLynx may just use the time-domain GetWave() method to determine the effect of Tx jitter. When adding jitter to the Rx, it probably adds the jitter directly to the Rx CDR recovered clock, and then it is a simple convolution.
I'm pretty sure this is correct. The TX jitter is added into the stimulus that drives the time-domain simulation. The results in the Eye Density Viewer are the output of the RX AMI model, so some of the jitter in the TX is filtered out by the CDR in the RX. This is why it makes sense to show the BER down to 10^-10. If the jitter is added in the RX, there is more chance for error, so the BER range needs to go lower.
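The jitter-transfer argument can be illustrated with a first-order CDR phase tracker; this is a toy model of my own, not what HyperLynx actually implements. Low-frequency Tx jitter inside the loop bandwidth is tracked out by the recovered clock, while high-frequency jitter passes straight through to the sampler:

```python
import numpy as np

def cdr_residual(jitter, alpha=0.01):
    """First-order phase tracker: the recovered clock slews toward the
    incoming edge phase with loop gain alpha; the residual is the jitter
    the sampler actually sees."""
    rec = 0.0
    res = np.empty_like(jitter)
    for i, phase in enumerate(jitter):
        res[i] = phase - rec
        rec += alpha * (phase - rec)
    return res

n = 200_000
k = np.arange(n)
slow = 0.05 * np.sin(2 * np.pi * 1e-5 * k)   # wander well inside loop BW
fast = 0.05 * np.sin(2 * np.pi * 0.2 * k)    # jitter far above loop BW

print(np.abs(cdr_residual(slow)).max())      # tracked out: tiny residual
print(np.abs(cdr_residual(fast)).max())      # passes through: ~0.05 UI
```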
If you want to see the eye at the receiver pin without the effect of the receiver equalization and CDR, you can comment out the [Algorithmic Modeling Interface] section in the receiver IBIS model. The low end of the BER range is still determined by the number of bits simulated, so maybe you need to run the simulation over the weekend. This is still a lot faster than running the standard time-domain simulation for this large a number of bits.
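For reference, IBIS uses "|" as its comment character, so disabling the AMI section of the receiver model looks roughly like this (the keyword is spelled [Algorithmic Model] in IBIS 5.x syntax; the platform, .dll, and .ami names below are placeholders, not from your design):

```
| Comment out the whole AMI section so only the analog Rx model is used:
|[Algorithmic Model]
|Executable Windows_VisualStudio_32 rx_model.dll rx_model.ami
|[End Algorithmic Model]
```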