Frequency Response Testing
Intermediate NTSC Video Testing
Frequency response testing determines how flat the system actually is. For optimum performance, the frequency response of the entire video system should be as flat as possible. If the system is not flat, signal amplitudes will become distorted as a function of frequency. For example, the system may attenuate higher frequency components more than lower frequency components or vice versa.
For test purposes, television signals are divided roughly at 1 MHz into low- and high-frequency ranges. Even though some of the most important signal information falls below 1 MHz -- namely synchronizing pulses, general brightness level, and some active video -- low-frequency testing is often neglected. This can be a troublesome and costly oversight.
You should routinely check system response in the low-frequency range. Low-frequency testing is further divided into line-time and field-time distortions. A line bar, such as the one found in the pulse and bar signal (Figure 5-6), is used to check for line-time distortions. Field-time distortion testing is done with either a field square wave or a windowed bar signal.
Figure 5-6. The line bar in this pulse and bar signal is used to check for low-frequency distortions at the line rate.
The pulse and bar signal provided by the TSG 170A is a windowed signal, consisting of 130 lines of the pulse and bar signal in the center of the field. On a picture monitor, this creates the window effect shown in Figure 5-7. If the video path can faithfully convey this signal, you can assume that the system's low-frequency response is satisfactory.
Figure 5-7. The windowed pulse and bar signal is used to check for low-frequency distortions at the field rate.
To check field-time distortion, the waveform monitor should be set for a two-field sweep in the flat-response mode. Also, the dc restorer should be in either slow-restore mode or turned off.
If the video path's low-frequency response is correct, you should see perfectly flat horizontal lines. Any tilt in these lines, such as shown in Figure 5-8, represents a field-rate impairment. Such an impairment would cause brightness variations between the top and bottom of the picture. Provided there is no insertion gain error, the field-time distortion can be measured as the percent variation from the normal flat level, excluding the first and last 0.2 milliseconds of the bar.
Figure 5-8. Waveform monitor display of a windowed pulse and bar signal showing about 6% field-time distortion.
Similar observations are made with the line bar signal for line-rate response problems. Any tilt in the line bar would produce brightness variations between the left and right sides of the picture. With active video, line-time distortion produces horizontal "streaking" -- usually seen as light and/or dark streaks extending to the right of horizontal transitions in the picture. In quantifying this measurement, exclude the first and last microsecond of the bar, since distortions near the transition occur at frequencies above the line rate.
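Both tilt measurements reduce to the same arithmetic: the peak-to-peak variation of the bar top, after excluding the transition regions, expressed as a percent of the nominal flat level. The sketch below is illustrative only (the helper name and the sample values are invented for this example, not taken from any Tektronix instrument):

```python
def bar_tilt_percent(samples, sample_period, exclude_time, nominal=100.0):
    """Percent tilt of a bar top: peak-to-peak variation of the bar-top
    samples, excluding `exclude_time` at each end, as a percent of the
    nominal (flat) level in IRE."""
    n_skip = int(round(exclude_time / sample_period))
    trimmed = samples[n_skip:len(samples) - n_skip]
    return (max(trimmed) - min(trimmed)) / nominal * 100.0

# Hypothetical field-rate bar: sampled every 0.1 ms, with a linear tilt
# from 100 IRE down to 94 IRE; exclude 0.2 ms at each end per the text.
field_bar = [100.0 - 6.0 * i / 159 for i in range(160)]
tilt = bar_tilt_percent(field_bar, 0.1e-3, 0.2e-3)  # about 5.8%
```

For a line-time measurement, the same function applies with microsecond-scale values for `sample_period` and `exclude_time`.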
If your equipment exhibits line-time or field-time distortions beyond the limits specified by the manufacturer, it must be serviced by a qualified technician. There are no external adjustments for either low-frequency or high-frequency response errors.
Response to higher frequencies, those above 1 MHz, is often checked with a multiburst test signal. Response problems above 1 MHz can impair the chrominance of pictures, their fine monochrome detail, or both.
The multiburst signal tests response by applying packets of discrete frequencies ranging from about 500 kHz to 4.2 MHz. The Tektronix TSG 170A NTSC Television Generator provides the multiburst signal shown in Figure 5-9.
Figure 5-9. To display this multiburst signal, the waveform monitor is set for one-line sweep and flat response. This signal shows a flat response for high frequencies (500 kHz to 4.2 MHz).
The multiburst signal is composed of six frequency packets. The second packet from the right has a frequency of 3.58 MHz and is used to check color subcarrier response characteristics. Notice also that the multiburst signal starts with a low-frequency signal (bar, sine wave, or, as is the case in Figure 5-9, a square wave). This low-frequency signal is used as an amplitude reference in measuring the relative amplitudes of the other packets.
Be aware that there are many different configurations of multiburst signals -- this test signal has a long history and meets many differing needs. When testing a VTR, you should use a reduced-amplitude (60 IRE vs. 100 IRE) multiburst signal. This avoids intermodulation between the multiburst frequencies and the FM recording system in the VTR. Such intermodulation can cause signal distortions even when there is actually nothing wrong with the VTR. Also, when evaluating multiburst amplitudes, you need to take into consideration the VTR's specified bandwidth. This can be significantly less than 4.2 MHz, which means that you should expect to see attenuation of the high-frequency packets.
For multiburst measurements, the waveform monitor should be set to a one- or two-line sweep and flat response. For a perfect system response, the multiburst display would show all packets as having the same peak-to-peak amplitude. Any significant amplitude variations in packets indicate a frequency response variation -- it is an error only if it is outside the equipment's specifications. Some video equipment, such as distribution amplifiers or switchers, may pass the multiburst packets with very little frequency response distortion.
The response at a specific frequency can be expressed as a percent of nominal value or in decibels. Either method of expression is based on peak-to-peak amplitude measurements.
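Either expression is a simple ratio of the packet's peak-to-peak amplitude to the low-frequency reference amplitude. As a sketch (the function names are invented for illustration; the 50-of-60 IRE values correspond to the rolloff example in Figure 5-10):

```python
import math

def response_percent(packet_pp, reference_pp):
    # Packet amplitude as a percent of the low-frequency reference.
    return packet_pp / reference_pp * 100.0

def response_db(packet_pp, reference_pp):
    # The same ratio in decibels: 20 log10 of the voltage ratio.
    return 20.0 * math.log10(packet_pp / reference_pp)

print(response_percent(50, 60))  # about 83.3 (percent of reference)
print(response_db(50, 60))       # about -1.6 (dB)
```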
Again, the absolute frequency response is often not the issue of greatest concern. Instead, the response relative to a particular specification or to earlier measurements is used to indicate equipment performance. A difference between past and present response measurements is a sign of equipment performance changing and may indicate a need for service.
High-frequency rolloff, such as shown in Figure 5-10, is probably the most common type of response distortion. When this occurs, luminance fine detail is degraded. Many VTRs will show a much greater rolloff and still be within specifications.
Figure 5-10. High-frequency rolloff is apparent in this multiburst signal display. The maximum error occurs where the signal is about 50 out of 60 IRE, which is -1.6 dB.
High-frequency peaking is another type of distortion. This is shown in Figure 5-11. It is usually caused by incorrect equalizer adjustment or misadjustment of some other compensating device. This problem causes noisy pictures -- you see sparkles and overly emphasized edges.
Figure 5-11. This multiburst signal suffers from high-frequency peaking. The maximum error occurs where the signal is 80 IRE instead of the nominal 60 IRE (a 2.5 dB increase).
Center-frequency dipping or peaking can also occur. When this affects a broad range of frequencies, it can be detected with the multiburst signal. On the other hand, peaking or dipping may only affect a narrow range of frequencies. When this is the case and the affected frequencies occur between multiburst packets, the distortion can go undetected by this test technique.
To catch such narrowband response problems, a sweep signal or other continuous-band response measurement technique is needed. Fortunately, narrowband peaking or dipping does not occur often. And when it does occur, it can be detected as noticeable ringing on sync pulses or other sharp signal transitions. Such ringing indicates the need for more thorough response evaluation using frequency sweep, multipulse, or (sin x)/x pulse techniques. To learn more about these techniques, refer to Tektronix publications Using the Multipulse Waveform to Measure Group Delay and Amplitude Errors (20W-7076), and Frequency-Response Testing Using a (Sin x)/x Signal and the VM700A Video Measurement Set (20W-7064).
When a signal passes through a video system, there should be no change in the relative amplitudes of chrominance and luminance. In other words, the ratio of the chrominance and luminance gains, which is sometimes referred to as relative chroma level, should remain the same. If there are relative chroma level errors, the picture's color saturation will be incorrect.
Relative chroma level is checked by measuring chrominance-to-luminance gain. Measured gain-ratio errors can be expressed in IRE, percent, or dB. When chrominance components are peaked relative to luminance, the error is a positive number. When chrominance is attenuated, the error is negative.
Measurements are made using the special chrominance pulse shown in Figure 5-12. This pulse is the modulated 12.5T sine-squared pulse which is included in many combination test signals. For example, it's included in both the pulse and bar and NTC 7 Composite signals provided by the TSG 170A NTSC Television Generator. The pulse and bar signal also includes a line bar and a 2T pulse.
Figure 5-12. A 12.5T chrominance pulse, shown in the center of this display, is used to evaluate chrominance-to-luminance gain and delay errors. For zero gain and delay errors, the negative peaks of this modulated sine-squared pulse should line up on the pulse baseline.
The chrominance pulse consists of a low-frequency, sine-squared luminance component that's been added to a chrominance packet having a sine-squared modulation envelope. These combined pulse components have characteristics that allow gain and phase errors to be seen as distortions of the pulse baseline. In the case of Figure 5-12, there are no errors and the baseline is flat.
Figure 5-13 shows what happens when there's a relative chroma level distortion. The upward bowing of the baseline (the negative waveform peaks) indicates that chrominance is reduced relative to luminance. If chrominance were increased relative to luminance, the baseline would bow downward.
Figure 5-13. A single, symmetric peak in the chrominance pulse baseline indicates a chrominance-to-luminance gain error. This example shows an error of about 20%.
Chrominance-to-luminance gain error can be measured directly on the waveform monitor graticule. This is done by comparing the peak-to-peak amplitude of the chrominance component of the 12.5T pulse to the normalized white level reference bar. In the case of Figure 5-13, the chrominance amplitude is 80 IRE, indicating a 20% error. This measurement approach is valid only if there is no low-frequency amplitude distortion and there is negligible chrominance-to-luminance delay.
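The percent-error calculation behind this measurement is straightforward. The sketch below (hypothetical helper name, for illustration only) applies the sign convention given earlier: positive when chrominance is peaked relative to luminance, negative when it is attenuated:

```python
def chroma_gain_error_percent(chroma_pp, reference_bar=100.0):
    """Chrominance-to-luminance gain error as a signed percent:
    positive when chrominance is peaked relative to luminance,
    negative when it is attenuated."""
    return (chroma_pp - reference_bar) / reference_bar * 100.0

# Figure 5-13 example: 80 IRE chrominance peak-to-peak amplitude
# measured against the normalized 100 IRE white level reference bar.
error = chroma_gain_error_percent(80.0)  # -20.0, i.e. chrominance low by 20%
```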
A chrominance-to-luminance gain error is easy to correct if the equipment under test has an external chroma gain control. If it does, simply adjust the chroma gain control for a flat chrominance pulse baseline. If it does not, the equipment must be serviced by a qualified technician.
Chrominance-to-luminance delay, on the other hand, is a more common error. Its presence is indicated when the chrominance pulse baseline has a sinusoidal distortion such as shown in Figure 5-14. When there is delay error only, the sinusoidal lobes are symmetric and the pulse amplitude should match the white level reference bar amplitude (100 IRE). This is the case shown in Figure 5-14. Asymmetrical lobes along with peaking or attenuation of the pulse amplitude indicate the presence of combined gain and delay errors.
Figure 5-14. A sinusoidal distortion of the chrominance pulse baseline indicates that chrominance is either advanced or delayed relative to luminance.
Since there are no user adjustments for chrominance-to-luminance delay on composite NTSC equipment, correcting this problem requires a trip to a local service center.
Measuring chrominance-to-luminance delay is beyond the scope of intermediate video system testing. However, you can learn about these more advanced measurements by referring to Tektronix publications Television Measurements For NTSC Systems (063-0566-00) or Using the Multipulse Waveform to Measure Group Delay and Amplitude Errors (20W-7076).
Thus far, the focus has been on distortions having equal effects for signals of differing amplitudes. These are linear distortions because the amount of signal distortion varies linearly with signal amplitude. In other words, a linear distortion causes the same percent error on a small signal as on a large signal.
Nonlinear distortions, by contrast, are amplitude dependent. They may be affected by changes in Average Picture Level (APL) as well as instantaneous signal level. In other words, nonlinear distortion causes different percent errors depending on signal amplitude. An overdriven amplifier, for example, causes nonlinear distortion when it compresses or clips signal amplitude peaks. Since APL changes should be taken into account, more definitive measurement results can often be obtained by using a generator, such as the Tektronix TSG 170A, that provides test signals at several different APLs. When using such a generator, run the test with at least two APLs, one low and one high, and report the worst result.
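The linear/nonlinear distinction can be shown numerically. In this sketch (made-up transfer functions, not a model of any particular device), a flat 10% attenuation produces the same percent error at every signal level, while a soft-clipping stage errs only on large signals:

```python
def linear_loss(v):
    # A flat 10% attenuation: a linear distortion (same percent
    # error regardless of signal amplitude).
    return 0.9 * v

def soft_clip(v, knee=80.0):
    # Compresses everything above the knee by half: a nonlinear
    # distortion (percent error depends on signal amplitude).
    return v if v <= knee else knee + 0.5 * (v - knee)

for level in (20.0, 100.0):  # small and large signals, in IRE
    lin_err = (linear_loss(level) - level) / level * 100.0  # -10% both times
    nl_err = (soft_clip(level) - level) / level * 100.0     # 0% then -10%
    print(level, lin_err, nl_err)
```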
The three remaining measurements to be discussed in this section fall into the category of determining the degree of signal linearity (or nonlinearity). These are the tests for luminance nonlinearity, differential gain, and differential phase. These can be conducted with test signals provided by the same signal generator used in the previous tests, the Tektronix TSG 170A NTSC Television Generator.