The Wiener filter (also Wiener-Kolmogorov filter) is a signal-processing filter that was developed independently in the 1940s by Norbert Wiener and Andrey Nikolayevich Kolmogorov and published by Norbert Wiener in 1949. Measured by the mean square deviation, it performs optimal noise suppression.
Use of the Wiener filter for noise suppression. (left: original, middle: noisy image, right: filtered image)
Properties
The Wiener filter is described by the following properties:
Prerequisite: the signal and the additive noise are stationary stochastic processes with known spectral distribution, or known autocorrelation and cross-correlation
Error criterion: Minimum mean square deviation
Model properties
The input signal of the Wiener filter is assumed to be a signal s(t) disturbed by additive noise n(t):

y(t) = s(t) + n(t)
The output signal x(t) results from the convolution of the input signal with the filter function g(\tau):

x(t) = g(\tau) * y(t) = g(\tau) * \left( s(t) + n(t) \right)
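The signal model above can be sketched numerically in discrete time. This is a minimal illustration only: the moving-average kernel below is an arbitrary stand-in for g(τ), not the optimal Wiener solution.

```python
import numpy as np

# Illustrative sketch of the model x = g * (s + n).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
s = np.sin(2 * np.pi * 5 * t)           # clean signal s(t)
n = 0.5 * rng.standard_normal(t.size)   # additive noise n(t)
y = s + n                               # observed input y(t) = s(t) + n(t)

g = np.ones(11) / 11                    # stand-in filter kernel g(tau)
x = np.convolve(y, g, mode="same")      # x(t) = g(tau) * y(t)

# Even this crude smoother reduces the mean-square deviation from s(t).
print(np.mean((y - s) ** 2), np.mean((x - s) ** 2))
```

The Wiener filter replaces the ad-hoc kernel with the g(τ) that minimizes this mean-square deviation.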
The error and the squared error result from the deviation of the output signal from the time-shifted input signal s(t + d). Depending on the value d of the time offset, different problems can be considered:

e(t) = s(t + d) - x(t)

e^2(t) = s^2(t + d) - 2 s(t + d) x(t) + x^2(t)

For d > 0: prediction
For d = 0: filtering
For d < 0: smoothing
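The three regimes can be made concrete in discrete time. The signals and offsets below are illustrative assumptions, not part of the original derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
s = np.sin(2 * np.pi * t / 50)          # clean signal s(t)
x = s + 0.1 * rng.standard_normal(200)  # stand-in for the filter output x(t)

def error(s, x, d):
    """e(t) = s(t + d) - x(t), kept to the samples where both exist."""
    if d > 0:      # prediction: estimate a future value
        return s[d:] - x[:-d]
    elif d == 0:   # filtering: estimate the current value
        return s - x
    else:          # smoothing: estimate a past value
        return s[:d] - x[-d:]

for d, name in [(5, "prediction"), (0, "filtering"), (-5, "smoothing")]:
    print(name, np.mean(error(s, x, d) ** 2))
```

With this (non-optimal) x(t), the mean squared error is smallest for d = 0, since predicting or smoothing shifts the target away from the estimate.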
If one represents x(t) as a convolution integral,

x(t) = \int_{-\infty}^{\infty} g(\tau) \left[ s(t - \tau) + n(t - \tau) \right] \, d\tau,

the expected value of the quadratic error results in

E(e^2) = R_s(0) - 2 \int_{-\infty}^{\infty} g(\tau) R_{ys}(\tau + d) \, d\tau + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(\tau) g(\theta) R_y(\tau - \theta) \, d\tau \, d\theta
where R_s is the autocorrelation of s(t), R_y is the autocorrelation of y(t), and R_{ys} is the cross-correlation of y(t) and s(t).
If the signal s(t) and the noise n(t) are uncorrelated (and thus their cross-correlation is zero), the following simplifications result:

R_{ys} = R_s

R_y = R_s + R_n
The goal is now to minimize E(e^2) by determining an optimal g(\tau).
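The simplification R_y = R_s + R_n for uncorrelated signal and noise can be checked numerically with sample estimates; the signals below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
t = np.arange(N)
s = np.sin(2 * np.pi * t / 100)   # deterministic signal
n = rng.standard_normal(N)        # white noise, uncorrelated with s
y = s + n

def autocorr(x, lag):
    """Biased sample autocorrelation estimate R_x(lag)."""
    return np.mean(x[:N - lag] * x[lag:]) if lag else np.mean(x * x)

# R_y(lag) should match R_s(lag) + R_n(lag) up to estimation error.
for lag in (0, 1, 10):
    print(lag, autocorr(y, lag), autocorr(s, lag) + autocorr(n, lag))
```

The residual difference is the sample cross-correlation of s and n, which vanishes as N grows.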
Stationary solutions
The Wiener filter has a solution for the causal and the non-causal case.
Non-causal solution
G(s) = \frac{S_{x,s}(s) \, e^{\alpha s}}{S_x(s)}
Provided that g(t) is optimal, the equation for the minimum mean-square error (MMSE) simplifies to

E(e^2) = R_s(0) - \int_{-\infty}^{\infty} g(\tau) R_{x,s}(\tau + d) \, d\tau.
The solution g(t) is the inverse two-sided Laplace transform of G(s).
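Under the simplifications above (uncorrelated signal and noise, d = 0), the cross-spectrum reduces to S_s and the input spectrum to S_s + S_n, so the non-causal solution takes the familiar frequency-domain form G(f) = S_s(f) / (S_s(f) + S_n(f)). A minimal sketch, assuming the true spectra are known as the model requires:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4096
t = np.arange(N)
s = np.sin(2 * np.pi * 8 * t / N)        # narrow-band signal (8 cycles)
noise_var = 0.25
n = np.sqrt(noise_var) * rng.standard_normal(N)
y = s + n

S_s = np.abs(np.fft.fft(s)) ** 2 / N     # signal power spectrum (assumed known)
S_n = np.full(N, noise_var)              # white-noise power spectrum
G = S_s / (S_s + S_n)                    # non-causal Wiener transfer function

# Apply the filter in the frequency domain.
x = np.real(np.fft.ifft(G * np.fft.fft(y)))
print(np.mean((y - s) ** 2), np.mean((x - s) ** 2))
```

Because the signal energy is concentrated in two frequency bins, G passes those bins nearly unchanged and suppresses the rest, cutting the mean squared error by orders of magnitude.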
Causal solution
G(s) = \frac{H(s)}{S_x^{+}(s)}

where

H(s) is the positive-time (causal) part of the inverse Laplace transform of \frac{S_{x,s}(s) \, e^{\alpha s}}{S_x^{-}(s)},

S_x^{+}(s) is the positive-time part of the inverse Laplace transform of S_x(s), and

S_x^{-}(s) is the negative-time part of the inverse Laplace transform of S_x(s).
References
1. Kristian Kroschel: Statistische Nachrichtentheorie. Signal- und Mustererkennung, Parameter- und Signalschätzung. 3rd, revised and expanded edition. Springer, Berlin et al. 1996, ISBN 3-540-61306-4.
2. Norbert Wiener: Extrapolation, Interpolation, and Smoothing of Stationary Time Series. Wiley, New York NY 1949.
3. Robert Grover Brown, Patrick Y. C. Hwang: Introduction to Random Signals and Applied Kalman Filtering. With MATLAB exercises and solutions. 3rd edition. Wiley et al., New York NY 1996, ISBN 0-471-12839-2.