Nyquist-Shannon sampling theorem

The Nyquist-Shannon sampling theorem, in more recent literature also called the WKS sampling theorem (for Whittaker, Kotelnikov and Shannon), is a fundamental theorem of communications engineering, signal processing and information theory. Vladimir Kotelnikov formulated the sampling theorem in 1933. Its publication in a Soviet conference report was cited in the East from the 1950s onwards, but it remained largely unknown in the West until the 1980s. Independently of Kotelnikov, Claude Elwood Shannon formulated it in 1948 as the starting point of his theory of maximum channel capacity, i.e. the maximum bit rate in a band-limited, noisy transmission channel.

The sampling theorem states that a signal band-limited to a maximum frequency f_max can be reconstructed exactly from a sequence of equidistant samples if it was sampled at a frequency greater than 2 · f_max.

Historical development

Claude Shannon built on ideas of Harry Nyquist on the transmission of finite sequences of numbers by means of trigonometric polynomials, and on the theory of cardinal functions of Edmund Taylor Whittaker (1915) and his son John Macnaghten Whittaker (1928). Karl Küpfmüller arrived at results similar to Nyquist's in 1928.

Only research by the Eduard Rhein Foundation established Kotelnikov's priority beyond doubt. For this he received the Eduard Rhein Prize in 1999.

Herbert P. Raabe formulated the sampling theorem independently of Kotelnikov in 1939.

Basics

An example of raising the signal frequency above half the sampling frequency: the sampling frequency is the same in all sub-images, but the highest frequency contained in the signal increases from top to bottom. The dashed lines show possible signals that would produce the same measurement points under this sampling.

The sampling theorem formulated by Shannon states that a function that contains no frequencies higher than f_max is uniquely determined by any series of function values spaced T = 1/(2 · f_max) apart. A sufficient condition for this is that the function is square-integrable.

The course of the function can then be reconstructed by replacing each sample value x(kT) with a sinc function of the same amplitude centred on the sampling instant and summing over all k:

x(t) = Σ_{k=−∞}^{∞} x(kT) · sinc((t − kT)/T) .

In signal processing, this corresponds to sampling with a sampling rate f_sample = 2 · f_max. The signal representation obtained in this way is called pulse amplitude modulation. To reconstruct the signal, it is filtered with an ideal low pass with cutoff frequency f_max.
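
The following sketch in Python illustrates this reconstruction numerically; the signal, its frequency and the sampling rate are freely chosen example values and are not taken from the text.

```python
import numpy as np

# Freely chosen example values: a 5 Hz cosine sampled well above the Nyquist rate.
f_signal = 5.0                 # highest frequency contained in the signal (Hz)
f_sample = 20.0                # sampling frequency, greater than 2 * f_signal
T = 1.0 / f_sample             # sampling interval

k = np.arange(40)              # finite set of sample indices
samples = np.cos(2 * np.pi * f_signal * k * T)

# Reconstruction: x(t) = sum_k x(kT) * sinc((t - kT) / T)
t = np.linspace(0.5, 1.5, 500)                 # evaluate away from the record ends
x_rec = np.array([np.sum(samples * np.sinc((ti - k * T) / T)) for ti in t])

# The deviation from the original signal stems from truncating the (in principle
# infinite) summation to 40 samples; it shrinks as more samples are included.
print(np.max(np.abs(x_rec - np.cos(2 * np.pi * f_signal * t))))
```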

For non-baseband signals, i.e. those with a minimum frequency f_min greater than 0 Hz, the sampling theorem applies in a similar form, since with a suitable choice of the sampling frequency the bandpass signal appears in the baseband after sampling. The sampling frequency then only has to be greater than twice the bandwidth (see also subsampling below). For the reconstruction, an ideal band pass is used instead of an ideal low pass.

When subsampling a bandpass signal with bandwidth B = f_max − f_min, the following applies:

2 · f_max / n ≤ f_sample ≤ 2 · f_min / (n − 1), for an integer n with 1 ≤ n ≤ ⌊f_max / B⌋.

In practice, a signal is usually low-pass filtered before sampling so that its (baseband) bandwidth is small enough for the chosen sampling rate. The sampling theorem also applies analogously to images and videos, where the sampling frequency is then specified in lines (or pixels) per unit of length.

Intuition

As described in the article Sampling (Signal Processing), the sampling of a signal x(t) can be modelled as multiplication with a Dirac comb, which yields the sampled signal x_s(t). By the convolution theorem (multiplication in the time domain corresponds to convolution in the frequency domain), the Fourier transform of the sampled signal is:

X_s(f) = (1/T) · Σ_{k=−∞}^{∞} X(f − k/T) ,

where X_s is periodic with period f_s = 1/T and T is the interval between two sampling instants. If the sampling frequency falls below 2 · f_max (for baseband signals), lower and higher frequency components are superimposed in the frequency domain and can then no longer be separated.
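
This superposition can be made visible with a short Python sketch; the frequencies are taken from the numerical example further below (400 Hz and 1600 Hz at a sampling rate of 2000 Hz): the two cosines produce exactly the same sample values and are therefore indistinguishable after sampling.

```python
import numpy as np

f_s = 2000.0                    # sampling frequency (Hz)
n = np.arange(32)               # sample indices
t_n = n / f_s                   # sampling instants

low  = np.cos(2 * np.pi * 400.0  * t_n)   # component below f_s / 2
high = np.cos(2 * np.pi * 1600.0 * t_n)   # component above f_s / 2 (1600 = 2000 - 400)

# Identical sample sequences: after sampling, the 1600 Hz component looks
# exactly like a 400 Hz component, which is the aliasing described above.
print(np.allclose(low, high))   # True
```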

However, the duration of a real sampling pulse cannot be made arbitrarily short. Therefore, in practice, the frequency spectrum is that of a train of rectangular pulses rather than of a Dirac impulse train. (A Dirac impulse is an idealized function that is nonzero only at a single point, t = 0, and vanishes everywhere else.)

Explanation of the terms

Band-limited signal

A signal x with limited bandwidth and maximum frequency F := f_max is a function for which the Fourier transform

X(f) = ∫_{−∞}^{∞} x(t) e^{−2πi f t} dt

exists and is zero outside the interval [−F, F]. Conversely, the band-limited signal can then be represented by the inverse Fourier transform of the frequency density:

x(t) = ∫_{−F}^{F} X(f) e^{2πi f t} df .

“Good”, permissible functions for the frequency density X are, for example, piecewise continuous functions for which both one-sided limits exist at every point. More generally, functions from the function space L²([−F, F]) are permissible.

If x is real, then X(−f) = X̄(f), where the bar denotes complex conjugation. If X is represented in polar coordinates, X(f) = |X(f)| · e^{i φ(f)}, then x is obtained by means of an integral with a real integrand,

x(t) = 2 ∫_0^F |X(f)| · cos(2π f t + φ(f)) df .

In the Cartesian representation X(f) = A(f) + i · B(f), the analogous result is

x(t) = 2 ∫_0^F ( A(f) · cos(2π f t) − B(f) · sin(2π f t) ) df .

Sampling at twice the frequency

Sampling at twice the frequency here means that function values are taken at regular intervals, with a single interval being T = 1/(2F); that is, from x the sequence of numbers (x(kT))_{k∈ℤ} is constructed. According to the Fourier representation, these values result from the frequency density as

x(kT) = ∫_{−F}^{F} X(f) e^{2πi f kT} df = ∫_{−F}^{F} X(f) e^{πi f k / F} df .

But these are, up to the factor 1/(2F), precisely the coefficients in the Fourier series expansion of the frequency density on the interval [−F, F]:

X(f) = (1/(2F)) · Σ_{k=−∞}^{∞} x(kT) e^{−πi f k / F} .

Thus the frequency density, and hence the signal, is already completely determined by the values of the sampling sequence.

Reconstruction without loss of information

Reconstruction without loss of information means that Lagrange interpolation, extended to the case of infinitely many regularly spaced interpolation nodes, yields the original signal again:

x(t) = Σ_{k=−∞}^{∞} x(kT) · sinc((t − kT)/T) .

Note that although these formulas can be used perfectly well in mathematics, they cannot be implemented in real sampling systems: a summation over an infinite range would be necessary to determine each signal value, and an infinite number of sampling clocks would have to pass before the summation could be completed. Because this is not possible, errors inevitably arise in practice.

The function sinc(x) = sin(πx)/(πx), the sinus cardinalis (sinc), is the ideal interpolation kernel for integer interpolation nodes; it satisfies sinc(0) = 1 and sinc(n) = 0 for every other integer n. The interpolating series is also referred to as the cardinal series, following Whittaker's terminology, the prefix “cardinal” referring to its prominent role as the series of “least fluctuation” among all interpolating function series. Apart from a constant factor, the sinc function has the rectangular function as its Fourier transform; this has the value 1 on the interval [−1/2, 1/2] and the value zero otherwise. The sinc function is therefore band-limited with highest frequency 1/2.
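
The truncation error mentioned above can be illustrated with a short Python sketch; the test signal x(t) = sinc(t − 0.5) is an arbitrarily chosen band-limited function with highest frequency 1/2 and is not an example from the text.

```python
import numpy as np

# Band-limited test signal (highest frequency 1/2, unit sampling interval):
# its integer samples x(n) = sinc(n - 0.5) decay only like 1/n.
x = lambda t: np.sinc(t - 0.5)

t0 = 0.25                                    # point at which x is reconstructed
for N in (10, 100, 1000, 10000):
    n = np.arange(-N, N + 1)                 # truncated index range |n| <= N
    approx = np.sum(x(n) * np.sinc(t0 - n))  # truncated cardinal series
    print(N, abs(approx - x(t0)))            # the error shrinks as N grows
```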

The expansion as a cardinal series now results quite naturally by inserting the Fourier series of the frequency density into the inverse Fourier transform:

x(t) = ∫_{−F}^{F} X(f) e^{2πi f t} df
     = Σ_{k=−∞}^{∞} x(kT) · (1/(2F)) ∫_{−F}^{F} e^{2πi f (t − kT)} df
     = Σ_{k=−∞}^{∞} x(kT) · sinc((t − kT)/T) .

Signal in bandpass position

A real signal in bandpass position must, in order to allow sampling by function values, have a Fourier transform that is nonzero only for frequencies whose magnitude lies in the interval [F, 2F]; F is then the one-sided bandwidth. This can be generalized to frequency bands of arbitrary position, but in that case the sampling has to be defined not by function values but by scalar products. One example of this is frequency-division multiplexing, see also OFDM.

Note: No finite signal, i.e. no function with compact (finite) support, satisfies the requirements placed on a band-limited function. Periodic signals, such as pure sinusoidal oscillations, also do not fall within the scope of this theorem, nor do signals with discontinuities (kinks or jumps in their course). It is therefore to be regarded as an ideal statement in an ideal situation. Closest to the ideal are modulated oscillations, such as music or speech recordings, that are to be digitized for further processing. For other practical purposes, e.g. digital image processing, variants of the sampling theorem with less stringent requirements must be found, for which this theorem then serves as a guideline.

Mathematical background

For mathematical principles see: Lebesgue integral , Lebesgue space , Fourier transformation .

By rescaling the time dependence, every band-limited signal x(t) can be reduced to the frequency range [−½, ½], or [−π, π] as the angular frequency range. The frequency density g(f) must be a function of bounded variation, as piecewise continuous functions are, for example. Then x(t) is a continuous, arbitrarily often differentiable, absolutely and square-integrable function, and it has a Fourier transform with support contained in [−½, ½].

Under these conditions, the function value x(t) at any point t is determined by the function values x(n) at the integer points t = n; the following holds:

x(t) = Σ_{n∈ℤ} x(n) · sinc(t − n) = Σ_{n∈ℤ} x(n) · sin(π(t − n)) / (π(t − n)) .

This equation contains two nontrivial statements: 1) the infinite series converges, and 2) its limit is always identical to the function value x(t).

The identity of a band-limited function with its cardinal series given above (after Whittaker) follows from Poisson's summation formula: for f ∈ [−½, ½],

X(f) = Σ_{n∈ℤ} x(n) e^{−2πi n f} ,

from which, by the formula for the inverse Fourier transform,

x(t) = ∫_{−½}^{½} X(f) e^{2πi f t} df = Σ_{n∈ℤ} x(n) ∫_{−½}^{½} e^{2πi f (t − n)} df = Σ_{n∈ℤ} x(n) · sinc(t − n) .

By skillful use of the general sampling formula one can also obtain generalized cardinal series expansions, for example

x(t) = Σ_{n∈ℤ} ( x(2n) + x′(2n) · (t − 2n) ) · sinc((t − 2n)/2)² ,

i.e. the sampling rate is halved, and in return two values are taken at each sampling point: the function value and its first derivative. The signal is, so to speak, developed locally linearly, and the local developments are “glued together” by a partition of unity. Formulas with derivatives of higher order do not admit such a simple interpretation.
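
The expansion above, in the form given here, can be checked numerically with a short Python sketch; the test signal x(t) = sinc(t − 0.3) and the evaluation point are arbitrary choices for illustration.

```python
import numpy as np

# Band-limited test signal (highest frequency 1/2) and its derivative.
x  = lambda t: np.sinc(t - 0.3)
dx = lambda t: (np.cos(np.pi * (t - 0.3)) - np.sinc(t - 0.3)) / (t - 0.3)

t0 = 0.7                                    # evaluation point
n = np.arange(-300, 301)                    # truncated summation range

# Samples of the function and of its derivative at the points 2n,
# combined according to the generalized cardinal series above.
terms = (x(2 * n) + dx(2 * n) * (t0 - 2 * n)) * np.sinc((t0 - 2 * n) / 2) ** 2
print(abs(np.sum(terms) - x(t0)))           # small truncation error
```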

If f is band-limited to angular frequencies from the interval [−Nπ, Nπ] and a_1, …, a_N are pairwise distinct real numbers, then an analogous expansion holds in which each summand is a product of two factors: the first factor is the kernel function of a partition of unity, and the second is an interpolation polynomial resembling the one in Lagrange interpolation. If the a_k are allowed to tend to 0 simultaneously and the interpolation polynomial is replaced by the Taylor polynomial of degree N − 1 or greater, then arbitrarily complex differential cardinal series result.

Low pass to prevent aliasing

If the sampling frequency is too low, ambiguities arise in the digitized signal. These non-linear distortions are also known as the aliasing effect. Images may appear phase-shifted or show shadows or new structures that are not present in the original.

The lower limit 2 · f_max of the sampling frequency for an analog signal of bandwidth f_max is also called the Nyquist rate. The highest frequency to be transmitted must therefore be less than half the sampling frequency, otherwise aliasing errors occur. For this reason, higher frequencies are filtered out of the analog signal with a low pass. The aliasing errors are alias signals (interference signals, pseudo-signals) which appear as disturbing frequency components during the reconstruction. For example, if a sinusoidal signal with a frequency of 1600 Hz is digitized with a sampling frequency of 2000 Hz, an alias signal at 400 Hz (2000 Hz − 1600 Hz) is obtained. If, on the other hand, the sampling frequency is above 3200 Hz, no alias signal arises. A sampling frequency of, for example, 3300 Hz leads to a difference signal at 1700 Hz (3300 Hz − 1600 Hz); however, this lies above half the sampling rate and is therefore removed by a low pass during the reconstruction.
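
The folding described in this example can be expressed as a small Python helper; the function below is only an illustration and not a standard library routine.

```python
def alias_frequency(f_signal: float, f_sample: float) -> float:
    """Frequency at which a tone of f_signal appears after sampling at f_sample,
    folded back into the base band from 0 to f_sample / 2."""
    f = f_signal % f_sample           # remove whole multiples of the sampling rate
    return min(f, f_sample - f)       # mirror the upper half of the band

print(alias_frequency(1600.0, 2000.0))   # 400.0 Hz alias, as in the example above
print(alias_frequency(1600.0, 3300.0))   # 1600.0 Hz, i.e. no aliasing
```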

In practice, there is no such thing as an ideal low pass. It always has a certain transition region between practically no attenuation in the pass band and practically complete attenuation in the stop band. Therefore, a modified formula is used in practice to determine the sampling frequency:

f_sample ≈ 2.2 · f_max

Example:

A CD stores a signal that is generated by digitizing an analog audio signal with frequencies of up to 20 kHz. The frequency at which the analog audio signal is sampled is 44.1 kHz (approximately 2.2 · 20 kHz).

The factor used depends on the low-pass filter employed and on the required attenuation of the alias signals. Other common factors are 2.4 (DAT, DVD) and 2.56 (FFT analyzers).

Oversampling

Choosing a higher sampling frequency does not yield any additional information, but it does increase the effort for processing, storage and transmission. Nevertheless, oversampling is used in practice. This is because, if the useful bandwidth B lies very close to half the sampling frequency, high demands are placed on the slope of the low-pass filter. Sufficient attenuation in the stop band of a low pass is easier to achieve with a higher sampling frequency than with a higher-quality filter, and the band limitation can then be shifted to a high-order digital filter. In practice, an oversampling factor of M = 2 or M = 4 is often chosen, so that less steep analog filters are needed before sampling. After the first sampling, a digital filter is applied before the subsequent sample-rate reduction lowers the sampling frequency. This digital filter is also known as a decimation filter; it can be implemented, for example, in the form of a cascaded integrator-comb filter.
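
A minimal sketch of this procedure with SciPy (all rates and the test signal are freely chosen example values): the signal is acquired with a fourfold oversampling factor and then reduced to the target rate with a digital decimation filter.

```python
import numpy as np
from scipy import signal

M = 4                                   # oversampling factor
f_target = 8000.0                       # target sampling rate in Hz
f_initial = M * f_target                # oversampled acquisition rate: 32 kHz

# Test signal within the useful band (a 1 kHz tone), sampled at the high rate.
t = np.arange(0, 0.1, 1.0 / f_initial)
x_oversampled = np.sin(2 * np.pi * 1000.0 * t)

# Digital decimation: an anti-aliasing low pass is applied before every
# M-th sample is kept, lowering the rate from 32 kHz to 8 kHz.
x_decimated = signal.decimate(x_oversampled, M)

print(len(x_oversampled), len(x_decimated))   # the length shrinks by the factor M
```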

Expressed mathematically, an ideal low-pass filter has a rectangular function as its transfer function. This transfer function cuts off the spectrum perfectly in the frequency domain, and the filtered signal can then be reconstructed perfectly from the sampling points. However, an ideal low-pass filter cannot be implemented in practice because it is not causal and is infinitely extended in time.

This is why analog low-pass filters are used that have a continuous, trapezoid-like transfer function whose edges rise or fall with a continuous, finite slope. These filters can be implemented, for example, as Butterworth filters. After sampling, digital smoothing and downsampling to the useful bandwidth take place. The slope of the filter influences the quality of the reconstructed signal.
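
The influence of the finite slope can be illustrated with SciPy; the filter orders and frequencies below are freely chosen example values that roughly match the CD example above (20 kHz pass band, 44.1 kHz sampling rate).

```python
import numpy as np
from scipy import signal

f_cut = 20e3              # pass band edge of the anti-aliasing filter (20 kHz)
f_half = 22.05e3          # half of a 44.1 kHz sampling rate

# Analog Butterworth low passes of increasing order: the slope gets steeper,
# but the attenuation just above the pass band remains limited.
for order in (2, 4, 8):
    b, a = signal.butter(order, 2 * np.pi * f_cut, btype='low', analog=True)
    w, h = signal.freqs(b, a, worN=[2 * np.pi * f_half])
    print(order, round(20 * np.log10(abs(h[0])), 1), "dB at 22.05 kHz")
```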

Subsampling

The condition f_sample > 2 · f_max from the sampling theorem is a simplified but very common and useful formulation. Strictly speaking, in place of f_max there must be the bandwidth, defined as the range between the lowest and the highest frequency occurring in the signal. The bandwidth is identical to f_max only for baseband signals; baseband signals are signals with low-frequency components in the vicinity of 0 Hz.

This realization led to a concept called bandpass undersampling (or sub-Nyquist sampling), which is used, for example, in digital radio technology. Suppose all radio stations broadcasting between 88 and 108 MHz are to be received. If the sampling theorem is interpreted as described above, the sampling frequency would have to be above 216 MHz. In fact, the technique of undersampling only requires a sampling frequency of a little more than 40 MHz. The prerequisite is that all frequencies outside the range from 88 to 108 MHz are removed from the signal with a bandpass filter before it is sampled. Sampling then takes place at, for example, 44 MHz without the relevant band being converted down by an analog mixer; the result is, so to speak, an alias signal and corresponds to the signal that would result from sampling a band converted to 0-22 MHz by a mixer.
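
A small numerical sketch of this example in Python (the 44 MHz rate is taken from the text; the station frequencies are arbitrary): the frequencies of the 88-108 MHz band fold into the range from 0 to 20 MHz, i.e. within the 0-22 MHz baseband.

```python
f_s = 44e6                              # undersampling rate from the example (44 MHz)

def baseband_position(f: float, f_s: float) -> float:
    """Frequency at which f appears after sampling at f_s (folded into 0 .. f_s/2)."""
    f = f % f_s
    return min(f, f_s - f)

# Stations of the FM band land between 0 and 20 MHz without overlapping:
for f_station in (88e6, 98.5e6, 108e6):
    print(f_station / 1e6, "MHz ->", baseband_position(f_station, f_s) / 1e6, "MHz")
```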

In order to realize the required point-like sampling at least approximately in practice, the sample-and-hold circuit must be designed in such a way that its acquisition window is as narrow as would be necessary for a sampling frequency of 220 MHz or more. This can be compared with sampling at 220 MHz of which only every fifth value is used, while the four samples in between are discarded.


Literature

  • Harry Nyquist : Certain Topics in Telegraph Transmission Theory. In: Transactions of the American Institute of Electrical Engineers. Vol. 47, 1928, ISSN  0096-3860 , pp. 617-644 (reprinted in: Proceedings of the IEEE. Vol. 90, No. 2, 2002, ISSN  0018-9219 , pp. 617-644).
  • JR Higgins: Five short stories about the cardinal series. In: Bulletin of the American Mathematical Society. NS Vol. 12, No. 1, 1985, pp. 45-89.
  • Michael Unser: Sampling - 50 Years after Shannon. In: Proceedings of the IEEE. Vol. 88, No. 4, 2000, pp. 569-587, ( online ).
  • Wolfgang Wunderlich: Digital television HDTV, HDV, AVCHD for beginners and those switching . Auberge-tv Verlag, Hohen Neuendorf 2007, ISBN 978-3-00-023484-2 .


References

  1. ^ Claude Elwood Shannon: Communication in the Presence of Noise (PDF; 301 kB). In: Proc. IRE. Vol. 37, No. 1, 1949 (reprinted in: Proc. IEEE. Vol. 86, No. 2, 1998).
  2. Peter Seibt: Algorithmic Information Theory: Mathematics of Digital Information Processing. Springer, 2006, ISBN 3-540-33219-7, p. 216 (limited preview in Google Book Search).
  3. ^ J. M. Whittaker: The "Fourier" Theory of the Cardinal Function. In: Proceedings of the Edinburgh Mathematical Society (Series 2). Vol. 1, No. 3, 1928, pp. 169-176, doi:10.1017/S0013091500013511.
  4. K. Küpfmüller: About the dynamics of the automatic gain control. In: Electrical communications engineering. Vol. 5, No. 11, 1928, pp. 459-467.
  5. ^ Hans Dieter Lüke: The Origins of the Sampling Theorem , IEEE Communications Magazine, pp. 106-108, April 1999. Online version (PDF; 53 kB).
  6. ^ Thomas Görne: Sound engineering. Hanser Verlag, 2008, ISBN 3-446-41591-2 , p. 153 ( limited preview in Google book search).
  7. cf. Michael Unser: Sampling - 50 Years after Shannon. In: Proceedings of the IEEE. Vol. 88, No. 4, 2000, pp. 569-587, ( online ).
  8. see JR Higgins: Five short stories about the cardinal series. In: Bulletin of the American Mathematical Society. NS Vol. 12, No. 1, 1985, pp. 45-89.