At first I suspected the customer was concerned about a small difference in noise that really didn't matter. But when the customer said we had twice the noise of our competition, I had to understand what was going on.
How do you measure noise on an oscilloscope?
The following is a brief primer on how to measure noise on a real-time oscilloscope, and the pitfalls of comparing two oscilloscopes side by side. Whether the noise is on a Tektronix, Agilent, Rohde & Schwarz, or LeCroy oscilloscope, these standard techniques always apply.
- Assess if the noise difference is quantitative or merely visual.
- When I first heard the claim that a competitor's scope had lower noise, the competitor's scope at the time had a smaller display with the same resolution as ours. In other words, the pixels were smaller, so the trace appeared thinner. That was an older model, and they have since introduced a new model with a bigger screen. But still, remember that the scope is attempting to take millions of digitized data points and compress them for display in 1000 pixels. Choosing how to visually decimate the data and how to intensity-grade the trace is a tricky process. Some methods may smooth out important data while others may over-accentuate noise spikes. The point is that the visual presentation depends on the decisions of whoever rasterizes the trace, and can have little bearing on the actual noise present. Noise must be measured, not judged by eye.
- Ensure both oscilloscopes are properly self-calibrated at temperature.
- Modern oscilloscopes make use of interleaved digitizers to achieve full sample rate. A Tektronix DPO72004C interleaves 32 digitizers to get to 100GS/s. An Agilent DSO93204-X interleaves 320 digitizers to get to 80GS/s. Both products have a self-calibration that must be run to align the digitizers. Excess interleave error can make noise higher. A quick check on a Tektronix oscilloscope is to glance in the lower right corner. If you see a yellow or red thermometer, you shouldn't trust the measurements. Let the scope warm up for 30 minutes and run the internal self-calibration.
- Make sure both oscilloscopes are set to the same bandwidth.
- This one should be obvious, but many people miss it. A 20GHz oscilloscope has more noise than a 13GHz oscilloscope because it integrates noise over a wider bandwidth. Noise comes from many sources in the environment, so the higher the bandwidth of the oscilloscope, the higher the noise. Most modern oscilloscopes have DSP to limit the bandwidth. If you want to compare a 20GHz scope with a 13GHz scope, use the DSP to limit the 20GHz scope to 13GHz. (A rough sketch of how noise scales with bandwidth appears at the end of this post.)
- Make sure both oscilloscopes are set to the same full scale voltage, NOT the same volts per division.
- As previously explained in my blog post on the volts per division knob, each vendor divides the screen up differently. Everybody uses an 8-bit digitizer across the screen, but Rohde and Tektronix use 10 divisions, whereas Agilent and LeCroy use 8 divisions. If an Agilent scope is set to 100mV/div, the Tektronix scope must be set to 80mV/div for an equal comparison. The alternative is to divide the measured noise by the full scale voltage to express noise as a percentage. Either way, you cannot compare an absolute noise (measured in microvolts) at a particular V/div setting if the two scopes divide the screen up differently. Read the blog post on volts per division for a better explanation.
- Suppose Scope X has 3mV of RMS noise at 100mV/div, but 8 divisions. Another scope, Scope Y, has 3.5mV of RMS noise at 100mV/div, but 10 divisions. Which has more noise, Scope X or Scope Y? At 100mV/div, Scope Y appears to have "more noise". But Scope Y has 10 divisions. So the proper way to say it is that at 100mV/div, Scope X has 8 * 100mV or 800mV full scale; 3mV/800mV = 0.375% noise. At 100mV/div, Scope Y has 10 * 100mV or 1V full scale; 3.5mV/1V = 0.35% noise. Therefore, Scope Y has less noise at 100mV/div even though its noise value initially appears higher. (This arithmetic is repeated in a short sketch at the end of this post.)
- As you can see, to compare noise between two oscilloscopes on equal volts/division setting, you must use percent of full scale, not absolute voltage. To use absolute voltage, full scale must be the same. In the above case, Scope Y would be set to 80mV/div for an accurate comparison (10 divisions * 80mV/div is the same as 8 divisions * 100mV/div).
- Do not measure peak-to-peak noise.
- It is tempting to put the scope into a continuous run mode and measure noise peak-to-peak. However, this value is unbounded: it grows the longer you run the measurement. If two scopes have equal noise, the one that processes data faster can appear to have higher peak-to-peak noise than the one that processes data slower, simply because it accumulates more samples in the same amount of time. (A small simulation at the end of this post shows this effect.)
- Do not use a DC-RMS measurement; use AC-RMS.
- An RMS measurement is statistically valid and does not suffer from the peak-to-peak issue above; RMS should not change with the observation period. However, most oscilloscopes offer a DC-RMS measurement. The problem with DC-RMS is that we are measuring a minuscule voltage, likely a fraction of a percent of the full scale voltage, while every scope also has a specified DC offset error, and DC-RMS folds that offset error into the result. DC offset error is important to measure, but it needs to be excluded from a noise measurement. Some scopes have a checkbox to AC-couple the RMS measurement (Agilent) and others have a dedicated AC-RMS measurement (Tektronix). Either way works, but you need to be aware of it. (The difference is illustrated in a short sketch at the end of this post.)
- If AC-RMS is not available, use the standard deviation of a vertical histogram.
- The best way to measure noise is to take a histogram of the trace itself along the vertical axis. The standard deviation of that histogram ignores the DC offset of the signal and is the RMS noise. On a Tektronix Windows-based scope, just draw a box on the screen and select "Vertical Histogram". Right-click on the histogram and measure the standard deviation. On a Tektronix DPO4104B, you can make the histogram measurement under the "Measure" menu. (The histogram math is shown in a short sketch at the end of this post.)
- Compare all settings as some will be better than others.
- Another danger is letting a salesperson set up two oscilloscopes at a setting that is a strong spot for one scope and a weak spot for the other. Any two products will naturally have strong spots and weaker spots. You must look at the products across the whole range of settings.
- Question what design choices impact the vertical noise displayed on a channel.
- Two oscilloscopes of identical banner specification may have truly different performance. I will address this in a later blog post, but just because two oscilloscopes say "16GHz", it does not mean they have identical response. By industry definition, a scope that is 3dB down at 16GHz is a 16GHz oscilloscope, even if it was 1-2dB down for half of the bandwidth. This would result in a slower risetime, but would also result in lower noise (since noise at the higher frequencies is attenuated). Even past the stated bandwidth, some scopes use DSP to create a brickwall filter, while others have a more natural Gaussian roll-off. So one 16GHz scope may be 10dB down at 17GHz, and another may be 5dB down at 17GHz. These are design choices that can impact noise. Faster roll-off can mean less noise but can also mean a worse pulse response. (A sketch comparing the noise bandwidth of a brickwall and a Gaussian roll-off appears at the end of this post.) When you've accounted for factors 1-8 above, then you must question scope risetime, overshoot, roll-off past bandwidth, and other factors such as ENOB (subject for a later post).
- Beware of a company's claims - noise specifications are always typical, and not always valid at every setting.
- When you see noise claims on a datasheet, bear in mind that there is generally a footnote indicating noise performance is "typical" and not "specified/guaranteed". These words mean different things to different vendors. For some, it means a number that cannot be traced or guaranteed without adding significant calibration cost. A true guaranteed specification must be met within a certain statistical tolerance in all operating conditions, and often must be verified during production testing. Typical can be an upper bound for some vendors (meaning all units should outperform it, but it is too expensive to guarantee), or it could be an average for other vendors (meaning some units are better and some are worse). If you see a typical specification, it is good to ask for real test data, and even to verify it yourself using the methods shown above.
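To make a few of the points above concrete, here are some quick Python sketches. First, the bandwidth point: assuming a flat (white) noise density, which a real scope front end only approximates, RMS noise scales roughly with the square root of bandwidth. This is a rough sketch of that scaling, not a prediction for any particular instrument.

```python
import math

# Assuming a flat (white) noise density, RMS noise integrated over a
# brickwall bandwidth scales with the square root of that bandwidth.
def rms_noise_ratio(bw_high_ghz, bw_low_ghz):
    """Rough ratio of RMS noise between two bandwidth settings."""
    return math.sqrt(bw_high_ghz / bw_low_ghz)

# Comparing a 20GHz setting against a 13GHz setting on the same front end:
ratio = rms_noise_ratio(20.0, 13.0)
print(f"Expected noise ratio, 20GHz vs 13GHz: {ratio:.2f}x")  # ~1.24x
```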
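Next, the volts-per-division point. This sketch simply repeats the Scope X / Scope Y arithmetic; the noise numbers and division counts are the hypothetical ones from the example, not measured data.

```python
def noise_percent_of_full_scale(rms_noise_mv, mv_per_div, divisions):
    """Convert an absolute RMS noise reading into percent of full scale."""
    full_scale_mv = mv_per_div * divisions
    return 100.0 * rms_noise_mv / full_scale_mv

# Hypothetical numbers from the example above:
scope_x = noise_percent_of_full_scale(rms_noise_mv=3.0, mv_per_div=100, divisions=8)
scope_y = noise_percent_of_full_scale(rms_noise_mv=3.5, mv_per_div=100, divisions=10)

print(f"Scope X: {scope_x:.3f}% of full scale")  # 0.375%
print(f"Scope Y: {scope_y:.3f}% of full scale")  # 0.350%
```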
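For the peak-to-peak point, a small simulation, assuming ideal Gaussian noise, shows why peak-to-peak is unbounded: it keeps growing with the number of samples acquired, while RMS settles near the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
rms_true = 1.0  # arbitrary units of Gaussian noise

# Simulate progressively longer acquisitions of the same noise process.
for n_samples in (1_000, 100_000, 10_000_000):
    trace = rng.normal(0.0, rms_true, n_samples)
    pk_pk = trace.max() - trace.min()   # grows the longer you acquire
    rms = trace.std()                   # stays near the true value
    print(f"{n_samples:>10} samples: pk-pk = {pk_pk:5.2f}, RMS = {rms:5.3f}")
```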
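For the DC-RMS versus AC-RMS point, this sketch uses synthetic data with an invented 2mV offset standing in for the scope's DC offset error. DC-RMS folds the offset into the result; AC-RMS (the standard deviation) reports only the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
offset_error = 0.002   # 2mV of DC offset error (illustrative, not a spec)
noise_rms = 0.0005     # 0.5mV of true RMS noise (illustrative)

trace = offset_error + rng.normal(0.0, noise_rms, 1_000_000)

dc_rms = np.sqrt(np.mean(trace**2))   # includes the DC offset error
ac_rms = np.std(trace)                # offset removed, noise only

print(f"DC-RMS: {dc_rms*1e3:.3f} mV")  # dominated by the 2mV offset
print(f"AC-RMS: {ac_rms*1e3:.3f} mV")  # ~0.5mV, the actual noise
```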
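For the vertical histogram point, the same kind of synthetic trace, histogrammed the way a scope histograms the displayed trace. The standard deviation computed from the histogram matches the RMS noise regardless of the DC offset.

```python
import numpy as np

rng = np.random.default_rng(2)
trace = 0.002 + rng.normal(0.0, 0.0005, 1_000_000)  # offset + 0.5mV noise

# Build a vertical histogram of the trace (what the scope does on screen).
counts, edges = np.histogram(trace, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])

# Standard deviation computed from the histogram is the RMS noise,
# independent of the DC offset of the signal.
mean = np.average(centers, weights=counts)
std = np.sqrt(np.average((centers - mean)**2, weights=counts))

print(f"Histogram std dev: {std*1e3:.3f} mV")  # ~0.5mV
```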
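Finally, the roll-off point. Assuming a flat input noise density, this sketch compares the equivalent noise bandwidth of an ideal brickwall filter to that of a Gaussian response with the same -3dB point. Real scope responses fall somewhere in between, so treat this as a bounding illustration only.

```python
import numpy as np

f3db = 16e9                      # 16GHz -3dB bandwidth
f = np.linspace(0, 5 * f3db, 200_001)
df = f[1] - f[0]

# Gaussian magnitude response normalized so |H(f3db)| = 1/sqrt(2) (-3dB).
h_gauss = 2.0 ** (-0.5 * (f / f3db) ** 2)

# Equivalent noise bandwidth: integral of |H(f)|^2 df.
enb_gauss = np.sum(h_gauss**2) * df
enb_brick = f3db                 # brickwall passes noise only up to f3db

print(f"Brickwall ENB: {enb_brick/1e9:.2f} GHz")
print(f"Gaussian  ENB: {enb_gauss/1e9:.2f} GHz")   # ~17.0GHz, about 6% more
# RMS noise scales with sqrt(ENB), so the Gaussian roll-off collects
# roughly 3% more RMS noise than a brickwall at the same -3dB point.
print(f"RMS noise ratio: {np.sqrt(enb_gauss/enb_brick):.3f}")
```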