# Nyquist-Shannon sampling theorem

The Nyquist-Shannon sampling theorem, in more recent literature also called the WKS sampling theorem (after Whittaker, Kotelnikov and Shannon), is a fundamental theorem of communications engineering, signal processing and information theory. Vladimir Kotelnikov formulated the sampling theorem in 1933. Its publication in a Soviet conference report was referenced in the East from the 1950s onward, but remained largely unknown in the West until the 1980s. Independently of Kotelnikov, Claude Elwood Shannon formulated it in 1948 as the starting point of his theory of maximum channel capacity, i.e. the maximum bit rate in a band-limited, noisy transmission channel.

The sampling theorem states that a signal band-limited to a maximum frequency $f_{\text{max}}$ can be exactly reconstructed from a sequence of equidistant sampled values if it was sampled at a frequency greater than $2 \cdot f_{\text{max}}$.

## Historical development

Claude Shannon built on the ideas of Harry Nyquist on the transmission of finite number sequences by means of trigonometric polynomials, and on the theory of cardinal functions by Edmund Taylor Whittaker (1915) and his son John Macnaghten Whittaker (1928). Karl Küpfmüller arrived at results similar to Nyquist's in 1928.

It was researchers of the Eduard Rhein Foundation who first established Kotelnikov's priority beyond doubt. For this, Kotelnikov received the Eduard Rhein Prize in 1999.

Herbert P. Raabe formulated the sampling theorem independently of Kotelnikov in 1939.

## Basics

Figure: an example of raising the signal frequency above half the sampling frequency. The sampling frequency is the same in all partial images, but the highest frequency contained in the signal increases from top to bottom. The dashed lines show possible signals that would produce the same measurement points under the given sampling.

The sampling theorem as formulated by Shannon states that a function that contains no frequencies higher than $f_{\text{max}}$ is uniquely determined by any series of function values spaced $\tau < \tfrac{1}{2 f_{\text{max}}}$ apart. A sufficient condition for this is that the function is square-integrable.

The function profile can then be reconstructed by replacing each sample value $\hat{x}(k \tau)$ with a sinc function of the same amplitude, $\operatorname{si}(2 \pi f_{\text{max}} (t - k \tau)) \cdot \hat{x}(k \tau)$, and summing over all $k$.

In signal processing, this corresponds to sampling with a sampling rate $f_{\text{sample}} > 2\, f_{\text{max}}$. The signal representation obtained in this way is called pulse amplitude modulation. For the reconstruction, this signal is filtered through an ideal low pass with cutoff frequency $f_{\text{max}}$.

For non-baseband signals, i.e. those with a minimum frequency $f_{\min}$ greater than 0 Hz, the sampling theorem applies in a similar form, since with a suitable choice of sampling frequency the bandpass signal appears in the baseband after sampling. The sampling frequency then only has to be greater than twice the bandwidth (see also subsampling). In the reconstruction, an ideal band pass is used instead of an ideal low pass.

When subsampling a bandpass signal, the following applies:

$f_{\mathrm{sample}} > 2\,(f_{\mathrm{max}} - f_{\mathrm{min}})$

In practice, a signal is usually low-pass filtered before sampling so that the sampling rate is sufficient for the remaining (baseband) bandwidth. The sampling theorem also applies analogously to images and videos, where the sampling frequency can then be measured in lines (or pixels) per unit of length.

### Intuition

As described in the article Sampling (signal processing), the sampling of a signal $s$ can be modelled as multiplication by a Dirac comb $k$, which yields the sampled signal $s_a$. By the convolution theorem, the Fourier transform of the sampled signal is

$\mathcal{FT}(s_a)(\omega) = \left(\mathcal{FT}(s) * \mathcal{FT}(k)\right)(\omega),$

where $\mathcal{FT}(s_a)(\omega) = \mathcal{FT}(s_a)\left(\omega + \frac{2\pi}{\Delta t}\right)$ is periodic with period $\frac{2\pi}{\Delta t}$, and $\Delta t$ is the interval between two sampling times. If the sampling frequency $\frac{1}{\Delta t}$ falls below $2 f_{\text{max}}$ (for baseband signals), lower- and higher-frequency components are superimposed in frequency space and can no longer be separated.

However, the duration of a technical sampling pulse is not arbitrarily short. Therefore, in practice, the frequency spectrum is that of a square-pulse train rather than a Dirac pulse train. (A Dirac impulse is, loosely speaking, a function that is infinitely large at a single point (t = 0) and vanishes everywhere else.)
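The spectral periodicity described above can be checked numerically. The following minimal sketch (the sampling rate and test frequencies are our own choices, not from the text) shows that two frequencies differing by the sampling rate $1/\Delta t$ produce identical sample values:

```python
import numpy as np

# After sampling with interval dt, frequencies f and f + 1/dt are
# indistinguishable: the sampled spectrum is periodic with period 1/dt.
dt = 1.0 / 100.0                  # sampling interval (fs = 100 Hz, assumed)
k = np.arange(50)                 # sample indices
f = 13.0                          # an arbitrary test frequency in Hz
x1 = np.cos(2 * np.pi * f * k * dt)
x2 = np.cos(2 * np.pi * (f + 1.0 / dt) * k * dt)  # one spectral period higher

print(np.allclose(x1, x2))
```

At the sample instants the two cosines agree, which is precisely the superposition (aliasing) that the text describes.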

## Explanation of the terms

### Band-limited signal

A signal x with limited bandwidth and maximum frequency F := f_max is a function for which the Fourier transform $X = \mathcal{F}(x)\colon \mathbb{R} \to \mathbb{C}$ exists and is zero outside the interval $[-2\pi F, 2\pi F]$. Conversely, the band-limited signal can then be represented by the inverse Fourier transform of the frequency density:

$x(t) = \frac{1}{\sqrt{2\pi}} \int_{-2\pi F}^{2\pi F} X(\omega)\, e^{\mathrm{i} \omega t}\, d\omega$.

“Good”, admissible functions for the frequency density X are, for example, piecewise continuous functions for which both one-sided limit values exist at every point. More generally, functions from the function space $L^2([-2\pi F, 2\pi F], \mathbb{C})$ are admissible.

If x is real, then $X(-\omega) = \overline{X(\omega)}$. If X is represented in polar coordinates, $X(\omega) = |X(\omega)|\, e^{i \phi(\omega)}$, we obtain x by means of an integral with a real integrand,

$x(t) = \sqrt{\frac{2}{\pi}} \int_0^{2\pi F} |X(\omega)|\, \cos(\omega t + \phi(\omega))\, d\omega$.

In the Cartesian representation $X(\omega) = A(\omega) + i B(\omega)$ the result is, analogously,

$x(t) = \sqrt{\frac{2}{\pi}} \int_0^{2\pi F} \left( A(\omega)\, \cos(\omega t) - B(\omega)\, \sin(\omega t) \right) d\omega$.

### Sampling at twice the frequency

Sampling at twice the frequency here means that function values are taken at regular intervals with spacing $\Delta t = 1/(2F)$, i.e. the sequence of numbers $x[k] := x(k \Delta t)$ is constructed from x. According to the Fourier representation, these values result from the frequency density as

$x(k \Delta t) = \frac{1}{\sqrt{2\pi}} \int_{-2\pi F}^{2\pi F} X(\omega)\, e^{\mathrm{i} \frac{k\omega}{2F}}\, d\omega$.

But these are precisely the coefficients in the Fourier series expansion

$X(\omega) = \frac{1}{\sqrt{2\pi}\, 2F} \sum_{k=-\infty}^{\infty} x(k \Delta t)\, e^{-\mathrm{i} \frac{k\omega}{2F}}.$

Thus the frequency density, and with it the signal, is already completely determined by the values of the sampling sequence.

### Reconstruct without loss of information

Reconstructing without loss of information means that the Lagrange interpolation, extended to the case of an infinite number of regularly spaced support points, again yields the original signal

$x(t) = y(t) := \sum_{k=-\infty}^{\infty} x_k \prod_{j \in \mathbb{Z},\, j \neq k} \frac{t - j \Delta t}{k \Delta t - j \Delta t} = \sum_{k=-\infty}^{\infty} x(k \Delta t)\, \operatorname{sinc}(t/\Delta t - k)$.

Note that although these formulas work excellently in mathematics, they cannot be implemented in real sampling systems: determining each signal value would require a summation over an infinite range, and an infinite number of clock cycles would have to pass before the summation could be completed. Because this is not possible, errors inevitably arise in practice.
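The truncation error just mentioned can be made concrete with a short numerical sketch (the test signal, bandwidth and window size are our own illustrative choices): a band-limited cosine is reconstructed from finitely many samples via the sinc series, exactly at the sample points and to within a small error in between.

```python
import numpy as np

# Cardinal-series (sinc) reconstruction from a finite number of samples.
# dt = 1, so the signal must be band-limited below 1/2; here F = 0.4.
F = 0.4
dt = 1.0
k = np.arange(-500, 501)                 # finite window of support points
samples = np.cos(2 * np.pi * F * k * dt)

def reconstruct(t):
    # x(t) ~= sum_k x(k*dt) * sinc(t/dt - k); np.sinc(u) = sin(pi u)/(pi u)
    return float(np.sum(samples * np.sinc(t / dt - k)))

print(abs(reconstruct(0.0) - 1.0))                                 # exact at a sample point
print(abs(reconstruct(3.7) - np.cos(2 * np.pi * F * 3.7)) < 1e-2)  # small truncation error
```

With a larger window the error between sample points shrinks further, but it never vanishes for a finite sum, which is exactly the practical limitation described above.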

The function $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, the sinus cardinalis (sinc), is the ideal interpolation kernel for integer support points; it satisfies sinc(0) = 1 and sinc(n) = 0 for every other integer n. Following Whittaker's notation, the interpolating series is also referred to as the cardinal series, the prefix “cardinal” referring to its prominent role as the interpolating function series of “least fluctuation”. Up to a factor, the sinc function has the rectangular function $\operatorname{rect}\left(\frac{x}{2\pi}\right)$ as its Fourier transform; this has the value 1 on the interval $[-\pi, \pi]$ and is zero otherwise. The sinc function is therefore band-limited with highest frequency 1/2.

The expansion as a cardinal series now results quite naturally by inserting the Fourier series of the frequency density into the inverse Fourier transform,

$x(t) = \frac{1}{4\pi F} \sum_{k=-\infty}^{\infty} x(k \Delta t) \int_{-2\pi F}^{2\pi F} e^{-\mathrm{i} \frac{k\omega}{2F} + i \omega t}\, d\omega = \sum_{k=-\infty}^{\infty} x(k \Delta t)\, \operatorname{sinc}(2Ft - k).$

### Signal in band pass position

A real signal in band pass position must have a Fourier transform that is nonzero only for frequencies in an interval $[2\pi n F, 2\pi (n+1) F]$ in order to allow sampling by function values; F is then the one-sided bandwidth. This can be generalized to frequency bands of any shape, but sampling is then defined not by function values but by scalar products. One example of this is frequency-division multiplexing, see also OFDM.

Note: No finite signal, i.e. no function with finite support, meets the requirements of a band-limited function. Periodic signals such as pure sinusoidal oscillations do not fall within the scope of the theorem either, nor do signals with discontinuities (kinks or jumps in their course). The theorem is therefore to be regarded as an ideal statement about an ideal situation. Closest to the ideal are modulated oscillations, such as music or voice recordings that are to be digitized for further processing. For other practical purposes, e.g. digital image processing, variants of the sampling theorem with less stringent requirements must be found, for which this theorem then serves as a guideline.

## Math background

For mathematical principles see: Lebesgue integral , Lebesgue space , Fourier transformation .

By rescaling the time dependency, every band-limited signal x(t) can be reduced to the frequency range [-½; ½], or [-π; π] as the angular frequency range. The frequency density g(f) must be a function of bounded variation, such as, for example, a piecewise continuous function. Then x(t) is a continuous, arbitrarily often differentiable, absolutely and square integrable function, $x \in L^2(\mathbb{R}) \cap L^1(\mathbb{R}) \cap C^\infty(\mathbb{R})$, and its Fourier transform $X = \hat{x} \in L^2(\mathbb{R})$ has support $\operatorname{supp} \hat{x} \subset [-\pi, \pi]$.

Under these conditions, the function value x(t) at any point t is determined by the function values x(n) at all integer points t = n; the following applies:

$x(t) = \sum_{n=-\infty}^{\infty} x(n) \cdot \frac{\sin(\pi(t-n))}{\pi(t-n)} = \frac{\sin(\pi t)}{\pi} \sum_{n=-\infty}^{\infty} \frac{(-1)^n\, x(n)}{t-n}$.

This equation contains two nontrivial statements: 1) the infinite series converges, and 2) the limit value is always identical to the function value x (t) .

The identity of a band-limited function with its cardinal series given above (after Whittaker) follows from Poisson's summation formula: it holds that

$\hat{x}(\omega) = \sum_{k \in \mathbb{Z}} \hat{x}(2\pi k + \omega) = \frac{1}{\sqrt{2\pi}} \sum_{n \in \mathbb{Z}} x(n)\, e^{-\mathrm{i} \omega n}$,

from which, by the formula for the inverse Fourier transform,

$x(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \hat{x}(\omega)\, e^{\mathrm{i} \omega t}\, d\omega = \frac{1}{2\pi} \sum_{n \in \mathbb{Z}} x(n) \int_{-\pi}^{\pi} e^{\mathrm{i} \omega (t-n)}\, d\omega = \sum_{n \in \mathbb{Z}} x(n)\, \frac{\sin(\pi(t-n))}{\pi(t-n)}.$

By skillful use of the general sampling formula one also obtains generalized cardinal series expansions, for example

$x(t) = \sum_{n \in \mathbb{Z}} \left( x(2n) + \dot{x}(2n)(t - 2n) \right) \operatorname{sinc}\left(t/2 - n\right)^2,$

i.e. the sampling rate is halved, and in exchange two values are taken at each sampling point: the function value and its first derivative. The signal is, so to speak, developed locally linearly, and the developments are glued together by a partition of unity. Formulas with derivatives of higher order do not admit such a simple interpretation.
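The derivative-sampling expansion can likewise be checked numerically. The sketch below (signal, frequency and window size are assumed for illustration) uses NumPy's normalized sinc, np.sinc(u) = sin(pi u)/(pi u), so the kernel becomes np.sinc((t - 2n)/2)**2:

```python
import numpy as np

# Hermite (value + derivative) sampling of x(t) = cos(w t), band-limited
# to (-pi, pi), sampled only at the even integers t = 2n.
w = 0.3 * np.pi
n = np.arange(-2000, 2001)           # finite window of support points
x_vals = np.cos(2 * w * n)           # x(2n)
xdot_vals = -w * np.sin(2 * w * n)   # x'(2n)

def hermite_reconstruct(t):
    kernel = np.sinc((t - 2 * n) / 2.0) ** 2
    return float(np.sum((x_vals + xdot_vals * (t - 2 * n)) * kernel))

print(abs(hermite_reconstruct(2.0) - np.cos(2 * w)) < 1e-9)    # exact at a sample point
print(abs(hermite_reconstruct(1.3) - np.cos(1.3 * w)) < 1e-2)  # small truncation error
```

At the sample points the kernel reduces to the usual interpolation property; between them the finite sum approximates the signal, with the error again coming only from truncating the series.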
If f is band-limited to circular frequencies in the interval $[-N\pi; N\pi]$ and if $a_1, \ldots, a_N$ are pairwise distinct real numbers, then

$f(x) = \sum_{n=-\infty}^{\infty} \left( \prod_{i=1}^{N} \operatorname{sinc}(x - n - a_i) \right) \cdot \left( \sum_{j=1}^{N} f(n + a_j) \prod_{k \neq j,\, k=1}^{N} \frac{x - n - a_k}{(a_j - a_k)\, \operatorname{sinc}(a_j - a_k)} \right)$

The first factor in each summand is the kernel function of a partition of unity, the second factor an interpolation polynomial that resembles Lagrange interpolation. If the $a_k$ are allowed to tend to 0 simultaneously and $f(n + a_k)$ is replaced by the Taylor polynomial of degree N-1 or higher, arbitrarily complex differential cardinal series arise.

## Low pass to prevent aliasing

If the sampling frequency is too low, ambiguities arise in the digitized signal. These nonlinear distortions are known as the aliasing effect. Images may appear shifted or show shadows and new structures that are not present in the original.

The lower limit

$f_{\mathrm{sample}} = 2 \cdot f_0$

of the sampling frequency for an analog signal of bandwidth $f_0$ is also called the Nyquist rate. The highest frequency to be transmitted must therefore be less than half the sampling frequency, otherwise aliasing errors occur. For this reason, higher frequencies are filtered out of the analog signal with a low pass. The aliasing errors are alias signals (interference signals, pseudo-signals) that become noticeable as disruptive frequency components during reconstruction. For example, if a sinusoidal signal with a frequency of 1600 Hz is digitized with a sampling frequency of 2000 Hz, a 400 Hz alias signal is obtained (2000 Hz - 1600 Hz). If, on the other hand, the sampling frequency exceeds 3200 Hz, no alias signal arises. A sampling frequency of 3300 Hz, for example, leads to a difference signal of 1700 Hz (3300 Hz - 1600 Hz); this, however, is greater than half the sampling rate and is therefore removed during reconstruction by a low pass.
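The numerical example above can be verified directly. In this small sketch (the number of samples is an arbitrary choice), the 1600 Hz tone sampled at 2000 Hz coincides sample for sample with a 400 Hz tone of inverted sign:

```python
import numpy as np

# The article's aliasing example: 1600 Hz sampled at 2000 Hz folds to 400 Hz.
fs = 2000.0
t = np.arange(40) / fs                        # 40 sample instants
samples_1600 = np.sin(2 * np.pi * 1600.0 * t)
samples_400 = np.sin(2 * np.pi * 400.0 * t)

# sin(2*pi*1600*k/fs) = sin(2*pi*k - 2*pi*400*k/fs) = -sin(2*pi*400*k/fs)
print(np.allclose(samples_1600, -samples_400))
```

The sign inversion is a phase effect; the magnitude spectrum of the sampled signal shows the alias at 400 Hz, just as described.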

In practice there is no such thing as an ideal low pass . It always has a certain transition area between practically no attenuation in the pass band and practically complete attenuation in the stop band. Therefore, in practice, a modified formula is used to determine the sampling frequency:

$f_{\mathrm{sample}} \approx 2.2 \cdot f_{\mathrm{max}}$

Example: A CD stores a signal generated by digitizing an analog audio signal with frequencies of up to 20 kHz. The frequency at which the analog audio signal is sampled is 44.1 kHz.

The factor used depends on the low-pass filter employed and on the required attenuation of the alias signals. Other common factors are 2.4 (DAT, DVD) and 2.56 (FFT analyzers).

## Oversampling

Choosing a higher sampling frequency yields no additional information, but increases the effort required for processing, storage and transmission. Nevertheless, oversampling is applied in practice. If the useful bandwidth B lies very close to half the sampling frequency, high demands are placed on the slope of the low-pass filter. A sufficient attenuation in the stop band of a low pass is easier to achieve with a higher sampling frequency than with a higher-quality filter; the band limitation can then be shifted to a digital filter of high order. In practice, an oversampling factor of M = 2 or M = 4 is often chosen, so that less steep analog filters suffice before sampling. After the first sampling, a digital filter is then applied before the sampling rate is subsequently lowered. This digital filter is also known as a decimation filter; it can be implemented, for example, in the form of a cascaded integrator-comb filter.
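How much oversampling relaxes the analog filter can be quantified with simple arithmetic. In this hedged sketch (the helper function and the CD-style numbers are our own), the anti-aliasing filter may roll off anywhere between the useful bandwidth B and fs - B, since only energy above fs - B folds back into the useful band:

```python
# Width of the transition band available to the analog anti-aliasing low pass:
# it must pass frequencies up to B and block frequencies above fs - B.
def transition_band(fs, B):
    return (fs - B) - B

B = 20_000.0                              # useful audio bandwidth in Hz
print(transition_band(44_100.0, B))       # near-critical sampling: 4100.0 Hz
print(transition_band(4 * 44_100.0, B))   # 4x oversampling: 136400.0 Hz
```

With 4x oversampling, the transition band is more than thirty times wider, so a much gentler analog filter suffices before the digital decimation filter takes over.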

Expressed mathematically, an ideal low-pass filter has a rectangular function as its transfer function. This transfer function cuts off the spectrum perfectly in the frequency domain, and the filtered signal can then be perfectly reconstructed from the sampling points. However, an ideal low-pass filter cannot be implemented in practice, because it is not causal and is infinitely long.

This is why analog low-pass filters are used that have a continuous, trapezoid-like transfer function whose edges rise or fall with a continuous, finite slope. These filters can be implemented, for example, as Butterworth filters. After sampling, digital filtering and downsampling to the useful bandwidth take place. The filter slope influences the quality of the reconstructed signal.

## Subsampling

The condition $f_{\text{sample}} > 2 \cdot f_{\text{max}}$ from the sampling theorem is a simplified but very common and useful statement. Strictly speaking, in place of $f_{\text{max}}$ there must be the bandwidth, defined by the range between the lowest and highest frequency occurring in the signal. The bandwidth is identical to $f_{\text{max}}$ only for baseband signals; baseband signals are signals with low-frequency components near 0 Hz.

This insight led to a concept called bandpass undersampling (or sub-Nyquist sampling), which is used, for example, in digital radio technology. Suppose all radio stations broadcasting between 88 and 108 MHz are to be received. If the sampling theorem is interpreted as described above, the sampling frequency would have to be above 216 MHz. In fact, the technique of undersampling requires only a sampling frequency of a little more than 40 MHz. The prerequisite is that all frequencies outside the 88 to 108 MHz range are removed from the signal with a bandpass filter before it is sampled. Sampling then takes place at, for example, 44 MHz without the relevant range being down-converted by an analog mixer; the result is, so to speak, an alias signal and corresponds to the signal that would result from sampling a range converted to 0-22 MHz by a mixer.
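The folding arithmetic behind this can be sketched in a few lines (the helper function is our own, not a standard API): sampling at $f_{\text{sample}}$ maps every input frequency to an image in the first Nyquist zone $[0, f_{\text{sample}}/2]$.

```python
# Image of a frequency f after sampling at fs (folding into [0, fs/2]).
def folded_frequency(f, fs):
    f = f % fs
    return fs - f if f > fs / 2 else f

fs = 44e6   # 44 MHz, as in the text
# the 88-108 MHz band folds onto 0-20 MHz without an analog mixer
print(folded_frequency(88e6, fs) / 1e6)    # 0.0
print(folded_frequency(108e6, fs) / 1e6)   # 20.0
print(folded_frequency(98e6, fs) / 1e6)    # 10.0
```

Note that the 88-108 MHz band lies entirely within one Nyquist zone of the 44 MHz rate (between 2·44 MHz and 2.5·44 MHz), so the band folds without overlapping itself.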

To realize, at least approximately, the point-like sampling required here, the sample-and-hold circuit must be designed so that its readout interval is as narrow as would be necessary for a sampling frequency of 220 MHz or more. This can be compared with sampling at 220 MHz of which only every fifth value is used, while the four samples in between are discarded.


## Literature

• Harry Nyquist: Certain Topics in Telegraph Transmission Theory. In: Transactions of the American Institute of Electrical Engineers. Vol. 47, 1928, pp. 617-644 (reprinted in: Proceedings of the IEEE. Vol. 90, No. 2, 2002, pp. 617-644).
• JR Higgins: Five short stories about the cardinal series. In: Bulletin of the American Mathematical Society. NS Vol. 12, No. 1, 1985, pp. 45-89.
• Michael Unser: Sampling - 50 Years after Shannon. In: Proceedings of the IEEE. Vol. 88, No. 4, 2000, pp. 569-587, ( online ).
• Wolfgang Wunderlich: Digital television HDTV, HDV, AVCHD for beginners and those switching . Auberge-tv Verlag, Hohen Neuendorf 2007, ISBN 978-3-00-023484-2 .