Wiener-Khinchin theorem

The Wiener-Khinchin theorem (also Wiener-Khinchin criterion or Khinchin-Kolmogorov theorem, after Aleksandr Khinchin, Norbert Wiener and Andrei Nikolaevich Kolmogorov) is a theorem in stochastics and signal processing. It states that the power spectral density of a stationary random process is the Fourier transform of the corresponding autocorrelation function.

The theorem also holds trivially, i.e. by inserting the Fourier transforms (which in this case, unlike for random signals, do exist), for the continuous functions of periodic signals, and can thus be applied to a periodic signal disturbed by noise.

Formulation in signal processing

For time-continuous signals the theorem has the form ($\mathrm{j}$ stands for the imaginary unit, $f$ for the frequency):

$S_{xx}(f) = \int_{-\infty}^{\infty} r_{xx}(\tau)\, \mathrm{e}^{-\mathrm{j} 2\pi f \tau}\, d\tau$

with the autocorrelation function:

$r_{xx}(\tau) = E\left(x(t) \cdot x^{*}(t+\tau)\right) = \lim_{T_F \to \infty} \frac{1}{T_F} \int_{-T_F/2}^{T_F/2} x^{*}(t) \cdot x(t+\tau)\, dt$

Here $E$ denotes the expected value of the product $x(t) \cdot x^{*}(t+\tau)$.

The power spectral density $S_{xx}(f)$ of the function $x(t)$ is also defined, given the Fourier transform $\hat{x}(f)$ of the signal $x(t)$, as:

$S_{xx}(f) = \left| \hat{x}(f) \right|^{2}$

The Fourier transform $\hat{x}(f)$ does not generally exist for noise signals $x(t)$. The name power spectral density (PSD) comes from the fact that the signal is often a voltage, so that the autocorrelation function yields an energy. "Spectral density" means that the power is given as a function of frequency, per frequency interval. The PSD allows statements about the presence of periodicities in noisy signals. According to the Wiener-Khinchin theorem, the PSD can be obtained from the autocorrelation function. The autocorrelation function itself, however, was used for the detection of periodic signals in background noise even earlier, e.g. by George Udny Yule in the 1920s.
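As a numerical illustration of this use of the PSD (the signal, sample rate, and noise level below are illustrative assumptions, not taken from the article), a periodicity buried in strong noise shows up as a clear peak in a periodogram estimate of $S_{xx}(f)$:

```python
# Sketch: detecting a hidden periodicity via the PSD |x_hat(f)|^2.
# Signal choice, sampling rate, and noise level are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 4096                     # number of samples
fs = 1024.0                  # sampling rate in Hz (assumed)
t = np.arange(n) / fs
f0 = 50.0                    # hidden periodicity at 50 Hz
x = np.sin(2 * np.pi * f0 * t) + 2.0 * rng.standard_normal(n)

psd = np.abs(np.fft.rfft(x)) ** 2 / n   # periodogram estimate of the PSD
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak = freqs[np.argmax(psd[1:]) + 1]    # skip the DC bin
print(peak)                              # close to 50 Hz
```

Although the noise variance here is four times the sine's power, the sine's power is concentrated in a single frequency bin while the noise power is spread over all bins, so the peak stands out.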

Conversely, the autocorrelation function is obtained as the inverse Fourier transform of the power spectral density:

$r_{xx}(\tau) = \int_{-\infty}^{\infty} S_{xx}(f)\, \mathrm{e}^{\mathrm{j} 2\pi f \tau}\, df$

Note: when formulating with the angular frequency $\omega = 2\pi f$, the corresponding formulas are:

$S_{xx}(\omega) = \int_{-\infty}^{+\infty} r_{xx}(\tau)\, \mathrm{e}^{-\mathrm{j}\omega\tau}\, d\tau$
$r_{xx}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{xx}(\omega)\, \mathrm{e}^{\mathrm{j}\omega\tau}\, d\omega$

This is actually the more usual form of the Fourier transform; here, as is common in signal theory, the formulation without angular frequency is chosen (see Fourier transform).
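A minimal numerical sketch of this transform pair (the example functions are assumed for illustration, not from the article): the autocorrelation $r_{xx}(\tau) = e^{-|\tau|}$ has the Lorentzian power spectral density $S_{xx}(\omega) = 2/(1+\omega^2)$, which a direct Riemann-sum evaluation of the angular-frequency formula reproduces:

```python
# Sketch: check the pair r_xx(tau) = exp(-|tau|)  <->  S_xx(omega) = 2/(1+omega^2)
# by direct numerical integration of S_xx(omega) = int r_xx(tau) e^{-j omega tau} dtau.
import numpy as np

tau = np.linspace(-60.0, 60.0, 200001)   # wide grid; exp(-60) is negligible
dtau = tau[1] - tau[0]
r = np.exp(-np.abs(tau))

vals = {}
for omega in (0.0, 1.0, 2.5):
    S = np.sum(r * np.exp(-1j * omega * tau)) * dtau   # Riemann sum for S_xx(omega)
    vals[omega] = S.real
    print(omega, vals[omega], 2.0 / (1.0 + omega ** 2))
```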

Using this theorem, calculations in the frequency domain can be exchanged for those in the time domain, similar to the $L^p$ ergodic theorem and the individual ergodic theorem, or the ergodic hypothesis, which for typical systems of statistical mechanics asserts the interchangeability of time averages and ensemble averages.

For time-discrete signals (a time series with $N$ terms) the Wiener-Khinchin theorem has a similar form:

$S_{xx}(f) = \sum_{k=-\infty}^{\infty} r_{xx}(k)\, \mathrm{e}^{-\mathrm{j} 2\pi k f}$

In applications, the sum is limited to a finite number $p$ of terms.

Furthermore, $r_{xx}(k) = E\left(x^{*}(n)\, x(n-k)\right) = \frac{1}{N} \sum_{n=1}^{N} x^{*}(n)\, x(n-k)$ is the autocorrelation function and $S_{xx}(f)$ the power spectral density of $x(n)$.
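The time-discrete statement can be checked numerically. In the finite, circular convention (an assumption made for this sketch; the article's sum runs over all $k$), the DFT of the circular autocorrelation of a sequence equals $|X|^2/N$, the periodogram PSD:

```python
# Sketch of the time-discrete theorem, assuming a finite length N and
# circular (periodic) autocorrelation: DFT(r_xx) == |DFT(x)|^2 / N.
import numpy as np

rng = np.random.default_rng(1)
N = 256
x = rng.standard_normal(N)               # real-valued test sequence

# circular autocorrelation r_xx(k) = (1/N) * sum_n x(n) x((n+k) mod N)
r = np.array([np.sum(x * np.roll(x, -k)) for k in range(N)]) / N

X = np.fft.fft(x)
S = np.abs(X) ** 2 / N                   # power spectral density

print(np.allclose(np.fft.fft(r), S))     # True
```

The identity follows by substituting the autocorrelation sum into the DFT and reindexing, which factors the double sum into $X^{*}[m]\,X[m]$.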

Mathematical formulation

$\phi(u)$ is the characteristic function of a probability distribution with density function $f$ if and only if there is a function $x(t)$ with

$\Vert x \Vert^{2} = \int_{-\infty}^{\infty} x(t)\, x^{*}(t)\, dt = 1$,

so that

$\phi(u) = \int_{-\infty}^{\infty} x(t)\, x^{*}(t+u)\, dt$.

The probability distribution $f$ is then given by

$f = |\hat{x}|^{2}$

with $\hat{x}$ the characteristic function of $x$; the latter corresponds to the Fourier transform of $x$.
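A concrete instance of this criterion (the choice of $x$ is an assumption made for illustration): the normalized Gaussian $x(t) = (2/\pi)^{1/4} e^{-t^2}$ satisfies $\Vert x \Vert^2 = 1$, and its overlap integral gives $\phi(u) = e^{-u^2/2}$, the characteristic function of the standard normal distribution:

```python
# Sketch: for x(t) = (2/pi)^(1/4) * exp(-t^2), verify numerically that
# ||x||^2 = 1 and phi(u) = int x(t) x(t+u) dt = exp(-u^2 / 2).
import numpy as np

t = np.linspace(-20.0, 20.0, 400001)
dt = t[1] - t[0]
x = (2.0 / np.pi) ** 0.25 * np.exp(-t ** 2)

norm = np.sum(x * x) * dt                # should equal 1 (unit L2 norm)

def phi(u):
    # evaluate x(t + u) analytically on the same grid; the tails are negligible
    xs = (2.0 / np.pi) ** 0.25 * np.exp(-(t + u) ** 2)
    return np.sum(x * xs) * dt

print(norm, phi(1.0), np.exp(-0.5))
```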

The theorem is a special case of the Plancherel formula (also called Plancherel's theorem).

Or, in Khinchin's original formulation:

$R(u) = \int_{-\infty}^{\infty} x(t)\, x^{*}(t+u)\, dt$

is the correlation function of a real stationary random process $x(t)$ if and only if

$R(u) = \int_{-\infty}^{\infty} \cos(ut)\, dF(t)$

with a distribution function $F(t)$.

Application in systems analysis

The theorem makes it possible to investigate linear time-invariant systems (LTI systems), such as electrical circuits with linear components, when their input and output signals are not square-integrable and therefore have no Fourier transforms, as is the case for random signals (noise).

According to the theory of LTI systems, the Fourier transform of the autocorrelation function of the output signal equals that of the input signal multiplied by the squared magnitude of the frequency response, where the frequency response is the Fourier transform of the system's impulse response.

According to the Wiener-Khinchin theorem, the Fourier transform of the autocorrelation function equals the power spectral density, and thus the power spectral density of the output signal equals that of the input signal multiplied by the power transfer function, analogous to the case of periodic signals in LTI systems.
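This relation $S_{yy} = |H|^2\, S_{xx}$ can be demonstrated in a finite, circular setting (the impulse response below is an arbitrary illustrative choice, and circular convolution replaces the continuous-time convolution of the article):

```python
# Sketch: for an LTI system with frequency response H, the output PSD
# equals |H|^2 times the input PSD. Finite length and circular convolution
# are assumptions of this sketch; the impulse response h is arbitrary.
import numpy as np

rng = np.random.default_rng(2)
N = 512
x = rng.standard_normal(N)               # input (one noise realization)
h = np.array([0.5, 0.3, 0.2])            # impulse response (assumed)

X = np.fft.fft(x)
H = np.fft.fft(h, N)                     # frequency response on N bins
y = np.fft.ifft(H * X).real              # output = circular convolution h * x

S_in = np.abs(X) ** 2 / N
S_out = np.abs(np.fft.fft(y)) ** 2 / N

print(np.allclose(S_out, np.abs(H) ** 2 * S_in))   # True
```

Here the equality is exact for the single realization because the DFT diagonalizes circular convolution; for random processes the same identity holds for the expected (ensemble-averaged) spectra.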

3. A random process (a random function) $x$ is called stationary if the covariance $E\left(x(t)\, x^{*}(t-\tau)\right)$ is the same for all points in time $t$. More precisely, these are stationary random processes in the wide sense (wide-sense stationary random processes).