Correlation (signal processing)

A correlation (from Medieval Latin correlatio, "(the) interrelation") in signal processing or image processing describes a relationship between two or more temporal or spatial functions. A causal relationship between them is not required. The correlation integral provides the basis for measuring how similar the functions under examination are.

The functions can in principle be continuous or discrete (sampled values). Examples are functions of time or of position. Functions of time are relevant in time-signal processing, while functions of a position variable are particularly important in image processing. Signals as they occur in nature can be represented and interpreted as such functions.

Signal processing

In the case of time functions, for example, received electromagnetic waves are correlated with the transmitted electromagnetic waves. Here, temporal signal sequences are correlated (via the time variable t), and the wave spectrum can be arbitrary. One example is the processing of radar signals: the received radar echo is correlated (compared) with the transmitted radar signal in order to determine whether it is one's own or a foreign radar echo (e.g. in the military sector). A similar example is the detection of microwave signals and their signal sequences, e.g. in cell phones and other devices with wireless communication.

The correlation of time-based signals and information is often implemented in hardware or software on the respective electronics (e.g. an FPGA or a signal processor). A decisive factor for the computing speed is the time base: the shorter the time units, the greater the computing effort.
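As an illustration of this kind of receiver-side correlation, the following sketch (in Python with NumPy; the code, delay, and noise level are arbitrary choices for the example) estimates the delay of a noisy radar-like echo by correlating it with the transmitted pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted signal: a pseudo-random +/-1 code, chosen because its
# autocorrelation has a single sharp peak (as in pulse-compression radar).
code = rng.choice([-1.0, 1.0], size=100)

# Received echo: the code delayed by 150 samples, attenuated, plus noise.
delay = 150
rx = np.zeros(600)
rx[delay:delay + 100] = 0.8 * code
rx += 0.5 * rng.standard_normal(600)

# Correlate the received signal with the transmitted pattern;
# the position of the correlation maximum estimates the round-trip delay.
corr = np.correlate(rx, code, mode="valid")
estimated_delay = int(np.argmax(corr))
```

The correlation peak stands far above the noise floor, so the delay estimate is robust even though the echo itself is buried in noise.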

Image processing

Correlation is particularly important when processing one- or multi-dimensional data, e.g. images. In image processing, the time variable (e.g. t) is simply replaced by a position variable (e.g. x), and the image is interpreted as a signal sequence over position. In contrast to time functions, images have no time base; instead, their sampling in pixels gives rise to so-called spatial frequencies, which correspond, so to speak, to the resolution of the image. When correlating two-dimensional images, two position variables must be used instead of one. During image processing it can then be determined, for example by cross-correlation with a pattern, whether and where a certain object is located in an image. This makes object recognition possible.

In contrast to the correlation of one-dimensional signal sequences over time, the correlation of two-dimensional signal sequences (e.g. a family photo in object recognition) requires a disproportionately higher computational effort. Depending on the resolution of the image, it can take seconds, hours, or days. On conventional computers this poses a major problem when the correlation has to be computed under the requirements of a real-time system.
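The growth of the effort can be seen in a brute-force sketch of two-dimensional correlation (NumPy; the image size, object pattern, and noise level are arbitrary assumptions): for an N×N image and an M×M pattern, roughly N²·M² multiplications are needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# A noisy 64x64 "image" with a small 8x8 object placed at row 20, column 35.
image = 0.3 * rng.standard_normal((64, 64))
obj = rng.choice([-1.0, 1.0], size=(8, 8))   # hypothetical object pattern
image[20:28, 35:43] += obj

# Two-dimensional cross-correlation by brute force: slide the pattern over
# every position (two position variables) and sum the elementwise products.
rows = image.shape[0] - 8 + 1
cols = image.shape[1] - 8 + 1
corr = np.empty((rows, cols))
for i in range(rows):
    for j in range(cols):
        corr[i, j] = np.sum(image[i:i + 8, j:j + 8] * obj)

# The position of the correlation maximum locates the object.
position = tuple(int(v) for v in np.unravel_index(np.argmax(corr), corr.shape))
```

The nested loops make the cost explicit; in practice the same correlation is computed via FFTs, and, as described below, optical computers perform it physically.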

Therefore so-called optical computers, which exploit the advantages of Fourier optics, are particularly suitable for processing images with very high resolution (e.g. 50 million × 50 million pixels). The computing speed of an optical computer is extremely high and allows real-time requirements to be met. In this implementation the computing speed is independent of the required image resolution; the resolution itself is limited only by the diffraction limit. The processing time is essentially determined by the optical path length of the computer divided by the speed of light, together with the processing speeds of any input and output electronics required.

One application of image correlation is the recognition of certain objects or structures. Such structures can be the grain of a bulk material (size or shape), the checked pattern of a tablecloth, cancer cells, or functional versus non-functional blood cells. This application is of great interest whenever objects have to be recognized or sorted (good versus bad, or ordered by size). Example: detecting whether a banknote is genuine or forged.

Furthermore, images can themselves be modified with the aid of correlation by filtering out certain structures (harmonics). Example: removing the raster of a television screen in order to obtain a smooth picture. To remove the image structure imposed by the screen mask, all frequency components that make up its pixels are filtered out; what remains is an image without that raster. A subsequent analysis reveals which frequency components are missing.

Another application is increasing or decreasing the resolution of an image (at most up to the resolution that the optical recording system can physically produce).

Quantitative description

This section describes the relationships from the point of view of signal processing and signal analysis with continuous signals.

The correlation integral

The correlation is described mathematically by the correlation integral for time functions:

$\rho(\tau) = K \int_{-\infty}^{\infty} x(t)\, m(t+\tau)\, \mathrm{d}t$

The following applies to complex time functions:

$\rho(\tau) = K \int_{-\infty}^{\infty} x^{*}(t)\, m(t+\tau)\, \mathrm{d}t$

The value K and the integral limits must be adapted to the corresponding functions:

$K = \begin{cases} 1 & \text{for aperiodic signals} \\ \dfrac{1}{2T} & \text{for periodic signals; the integration limits then run from } -T \text{ to } T \\ \lim\limits_{T \to \infty} \dfrac{1}{2T} & \text{for random signals} \end{cases}$

x(t) is the function to be analyzed, m(t) is the pattern function.
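As a sketch, the correlation integral for aperiodic signals (K = 1) can be approximated numerically by a sum; the Gaussian pulses below are arbitrary example functions:

```python
import numpy as np

# Discrete approximation of the correlation integral for aperiodic signals:
# rho(tau) = K * integral x(t) m(t + tau) dt, with K = 1.
dt = 0.01
t = np.arange(-5, 5, dt)

x = np.exp(-t**2)                 # function to be analysed
m = np.exp(-(t - 1.0)**2)         # pattern function: the same pulse shifted by 1

# Note the argument order: with NumPy's convention,
# np.correlate(m, x)[lag] = sum_t x(t) m(t + lag), matching rho(tau).
rho = np.correlate(m, x, mode="full") * dt
taus = (np.arange(rho.size) - (t.size - 1)) * dt

# The maximum of rho(tau) lies at the shift that best aligns m with x.
tau_max = taus[np.argmax(rho)]
```

Since the pattern is the analysed pulse shifted by 1, the correlation maximum appears at τ ≈ 1.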

Pattern function m(t)

m(t) can be any pattern function; however, it should be chosen to suit the application.

Depending on the pattern function m(t), the correlation integral turns into other known signal transformations.

Correlation factor as a measure of the similarity of two signals

The similarity of two signals is described first for two real-valued energy signals and then for two real-valued power signals. Complex-valued signals are not treated further here.

The signal energy E s of a real-valued signal s is calculated as follows

$E_s = \int_{-\infty}^{\infty} s^{2}(t)\, \mathrm{d}t.$

If one considers composite signals s(t) = x(t) + y(t), this leads to the equation

$\begin{aligned} E_s &= \int_{-\infty}^{\infty} s^{2}(t)\, \mathrm{d}t \\ &= \int_{-\infty}^{\infty} \bigl(x(t)+y(t)\bigr)^{2}\, \mathrm{d}t \\ &= \underbrace{\int_{-\infty}^{\infty} x^{2}(t)\, \mathrm{d}t}_{E_x} + \underbrace{\int_{-\infty}^{\infty} y^{2}(t)\, \mathrm{d}t}_{E_y} + \underbrace{2\int_{-\infty}^{\infty} x(t)\, y(t)\, \mathrm{d}t}_{2E_{xy}}. \end{aligned}$

$E_x$ is the energy of x, and $E_y$ is the energy of y. The quantity $E_{xy}$ is called the cross energy. It can be positive, negative, or zero.

It is useful to relate the cross energy to the signal energies via the equation

$E_{xy} = \rho\, \sqrt{E_x E_y}.$

The factor $\rho$ is the correlation factor, also called the correlation coefficient. It always satisfies $\rho^{2} \leq 1$, which can be proven using the Cauchy–Schwarz inequality.

According to the explanations just given, the energy of the overall signal depends on the signal energy of x, the signal energy of y, and the correlation factor $\rho$.

The correlation factor has the value $\rho = 1$ when the signal x(t) is correlated with the signal $y(t) = |k|\, x(t)$. In this case the signals are said to be co-directional, and the signal energy of the total signal is maximal.

The correlation factor has the value $\rho = -1$ when the signal x(t) is correlated with the signal $y(t) = -|k|\, x(t)$. In this case the signals are said to be opposed, and the signal energy of the overall signal is minimal.

A special case occurs when the correlation factor assumes the value $\rho = 0$. The two signals are then called orthogonal (for energy signals one can also say: uncorrelated).

As is clear from the examples, the correlation factor is a measure of how similar two signals are.
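A minimal numerical sketch of the three cases, using sinusoidal test signals (an arbitrary choice) and approximating the energy integrals by sums:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)

def rho(x, y):
    """Correlation factor of two real-valued energy signals."""
    E_x = np.sum(x**2) * dt
    E_y = np.sum(y**2) * dt
    E_xy = np.sum(x * y) * dt            # cross energy
    return E_xy / np.sqrt(E_x * E_y)

x = np.sin(2 * np.pi * 5 * t)

r_co = rho(x, 3.0 * x)                   # co-directional: y =  |k| x -> rho =  1
r_op = rho(x, -3.0 * x)                  # opposed:        y = -|k| x -> rho = -1
r_orth = rho(x, np.cos(2 * np.pi * 5 * t))   # sine and cosine: orthogonal -> rho = 0
```

Sine and cosine of the same frequency over a whole number of periods integrate to zero cross energy, which is exactly the orthogonal case above.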

Similar relationships can be found for power signals. For the signal power $P_s$ of a signal $s(t) = x(t) + y(t)$ one obtains

$\begin{aligned} P_s &= \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} s^{2}(t)\, \mathrm{d}t \\ &= \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \bigl(x(t)+y(t)\bigr)^{2}\, \mathrm{d}t \\ &= \underbrace{\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x^{2}(t)\, \mathrm{d}t}_{P_x} + \underbrace{\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} y^{2}(t)\, \mathrm{d}t}_{P_y} + \underbrace{2 \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, y(t)\, \mathrm{d}t}_{2P_{xy}}. \end{aligned}$

Here the cross-power factor $\overline{\rho} = \tfrac{P_{xy}}{\sqrt{P_x P_y}}$ determines the degree to which the two signals agree. For $\overline{\rho} = 0$ the two signals are called orthogonal. The larger $\overline{\rho}^{\,2}$ is, the more likely it is that the two signals are related.
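The same relationships can be checked numerically for power signals; here the limit $\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T}$ is approximated by a time average over a long but finite window, and the two sinusoids are arbitrary example signals:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 100, dt)          # long window approximating T -> infinity

x = np.sin(2 * np.pi * 3 * t)      # power signal with P_x = 1/2
y = np.cos(2 * np.pi * 7 * t)      # power signal with P_y = 1/2

def power(s):
    # (1 / 2T) * integral over [-T, T] collapses to a time average of s^2.
    return np.mean(s**2)

P_x, P_y = power(x), power(y)
P_xy = np.mean(x * y)              # cross power
rho_bar = P_xy / np.sqrt(P_x * P_y)

# Sinusoids of different frequencies are orthogonal (rho_bar = 0),
# so the powers simply add: P_s = P_x + P_y.
P_s = power(x + y)
```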

Autocorrelation function

In signal processing, the cross-correlation function of a signal with itself, the so-called autocorrelation function (ACF), is used for various applications. It describes the similarity of a signal to itself.

For a real-valued power signal it is calculated as

$\Psi_{xx}(\tau) = \lim\limits_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t+\tau)\, \mathrm{d}t.$

For energy signals, the analogous result is

$\Psi_{xx}^{E}(\tau) = \int_{-\infty}^{\infty} x(t)\, x(t+\tau)\, \mathrm{d}t.$

With complex-valued signals the following results:

$\underline{\Psi}_{xx}(\tau) = \lim\limits_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \underline{x}^{*}(t)\, \underline{x}(t+\tau)\, \mathrm{d}t$

and

$\underline{\Psi}_{xx}^{E}(\tau) = \int_{-\infty}^{\infty} \underline{x}^{*}(t)\, \underline{x}(t+\tau)\, \mathrm{d}t,$

where the asterisk denotes the complex conjugate.
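As a sketch of how the ACF exposes the inner structure of a signal, the following estimate (a time average over finite data, approximating the limit above) shows that the ACF of a periodic signal is itself periodic:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * t)        # power signal with period 1 s

def acf(x, max_lag):
    # Discrete estimate of Psi_xx(tau): the limit (1/2T) * integral
    # is approximated by a time average over the available samples.
    n = x.size
    return np.array([np.mean(x[:n - k] * x[k:]) for k in range(max_lag)])

psi = acf(x, 150)                # lags 0 .. 1.5 s

# The ACF has its maximum at tau = 0 (equal to the signal power, 1/2 here)
# and, for a periodic signal, peaks again at the period (lag 100 = 1 s).
k_peak = 50 + int(np.argmax(psi[50:150]))
```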

Applications

Application in image processing

In image processing, correlation functions are used, among other things, for the precise localization of a pattern (the pattern function in the sense of mathematical correlation) within an image. This method can be used, for example, to evaluate stereo image pairs. In order to calculate the spatial coordinates of a point, there must be an unambiguous assignment of objects in the left image to objects in the right image. To achieve this, a small section (the pattern) is taken from one image and correlated two-dimensionally with the other image. The coordinates of an object point or feature obtained in this way in the left and right images can then be converted into spatial coordinates using the methods of photogrammetry.

Four photos

The photos in the sequence on the left show a young woman, her negative image, Nietzsche and a random noise pattern.

To test whether the photo of the young woman can also be found in the noisy images, all four images were first superimposed with white Gaussian noise and then correlated with her photo (first photo in the sequence of images).

Correlation images (see text)

The result can be seen in the image composition on the right. The original images are indistinct. To the right of them are the correlation calculations.

Nietzsche's correlation image shows little agreement with that of the young woman, and the noise pattern almost none at all. The positive and negative correlation with the pictures showing the woman and her negative can be clearly seen.

In a broader sense, so-called optical computers (Fourier optics, 4f setups) are based on correlation. These systems, also known as Fourier correlators, correlate images with the help of holograms. With 4f optics, the correlation can be carried out in the frequency domain by a purely physical process at almost the speed of light. An optical bench with a suitable lens system and a Fourier lens is required.

Application in audio processing

In stereophony, correlation describes the similarity of signals. The normalized correlation factor, or correlation coefficient, is a measure of the similarity of two signals; in simplified terms, it is calculated from the largest possible time integral over the amplitude difference of the two signals. It is indicated approximately by correlation-degree meters, although in practice these only examine a phase relationship with a very short integration time of less than one second.

In sound equipment, the meter used for this is the correlation meter or the goniometer.
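A simplified software model of such a correlation-degree measurement (ignoring the short integration time used by real meters; the test signals are arbitrary) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 1.0 / 44100.0
t = np.arange(0, 0.5, dt)                  # half a second of "audio"

def correlation_degree(left, right):
    """Normalized correlation factor of the two stereo channels."""
    return np.sum(left * right) / np.sqrt(np.sum(left**2) * np.sum(right**2))

signal = np.sin(2 * np.pi * 440 * t)       # a 440 Hz tone

mono = correlation_degree(signal, signal)            # identical channels -> +1
out_of_phase = correlation_degree(signal, -signal)   # inverted channel  -> -1
independent = correlation_degree(rng.standard_normal(t.size),
                                 rng.standard_normal(t.size))  # near 0
```

These three readings correspond to the familiar meter positions: +1 for mono-compatible material, -1 for phase-inverted channels, and around 0 for decorrelated channels.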

Application in multi-channel signal transmission

In communications engineering, for economic reasons, several mutually independent signals (e.g. the telephone signals of many users, or the video and audio signals of many television or radio broadcast transmitters) are transmitted over the same transmission medium (wire, cable, radio link, optical fiber). In order to be able to "unbundle" such channel bundles again without interference after transmission, i.e. at the receiving end, the individual signals must be "distinguishable". This means that the individual signals combined into a bundle at the transmitter must be pairwise orthogonal to one another, which is the reason why, for example, only a limited number of channels are available in a radio frequency band. The orthogonality condition is trivially fulfilled in two cases, namely when the individual signals do not overlap spectrally or do not overlap temporally. In these cases the product of the spectra or the product of the time functions of the individual signals is zero, so the correlation integral is also zero. The technically simple implementations of these two cases are therefore the frequency-division and time-division multiplex methods. With modern microelectronic technologies, however, it has also become possible to (de)multiplex orthogonal signals that overlap both spectrally and temporally (code-division multiplexing). Here the correlation function actually has to be evaluated at the receiver, usually by means of an optimal (matched) filter.
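The code-division case can be illustrated with a toy example: two users spread their data symbols with orthogonal Walsh codes, the signals overlap completely on the channel, and the receiver unbundles each user by correlation. The 4×4 matrix and the user assignment are arbitrary choices for the illustration.

```python
import numpy as np

# Orthogonal spreading codes: the rows of a 4x4 Walsh-Hadamard matrix.
H = np.array([[ 1,  1,  1,  1],
              [ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [ 1, -1, -1,  1]], dtype=float)

# Two users each send one data symbol, spread with their own code; the
# signals overlap completely in time and spectrum on the channel.
bits = {0: -1.0, 3: +1.0}          # user index -> data symbol
channel = sum(b * H[u] for u, b in bits.items())

# The receiver "unbundles" a user by correlating with that user's code;
# orthogonality makes the other users' contributions vanish.
def despread(received, code):
    return np.dot(received, code) / np.dot(code, code)

recovered_user0 = despread(channel, H[0])   # -> -1.0
recovered_user3 = despread(channel, H[3])   # -> +1.0
recovered_user1 = despread(channel, H[1])   # ->  0.0 (user 1 sent nothing)
```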

Application in signal reception

In every type of signal transmission, various kinds of interference occur, at least the thermal noise of the electrons or photons that carry the electrical or optical signal. At the receiver, the problem arises of separating the signal from the interference as reliably as possible. Theoretically, this works better the more the receiver knows about the transmitted signal (e.g. the time course of a transmitted radar signal). In this case correlation reception, i.e. the receiver-side correlation of the incoming disturbed signal with a pattern of the transmitted signal, is the theoretically optimal receiver implementation. This principle is used not only in radar technology but also in nature, for example in the ultrasonic echolocation of bats.
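A minimal sketch of such a correlation receiver, assuming (as an example) two known orthogonal waveforms for the two possible transmitted symbols:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two known transmit waveforms (e.g. for bits 0 and 1), chosen orthogonal.
n = 64
t = np.arange(n)
s0 = np.sin(2 * np.pi * 4 * t / n)
s1 = np.sin(2 * np.pi * 8 * t / n)

def correlation_receiver(r):
    # Correlate the disturbed input with a pattern of each possible
    # transmitted signal and decide in favor of the larger correlation.
    return 0 if np.dot(r, s0) > np.dot(r, s1) else 1

# Simulate: transmit the waveform for bit 1, add noise, decide.
received = s1 + 0.5 * rng.standard_normal(n)
decision = correlation_receiver(received)
```

Because the noise contributes almost equally to both correlations while the signal contributes fully to only one, the decision remains correct even at substantial noise levels.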