# Haar wavelet

The Haar wavelet is the first wavelet to become known in the literature; it was introduced by Alfréd Haar in 1909. It is also the simplest known wavelet and can be formed from the combination of two rectangular functions.

The advantage of the Haar wavelet is that the associated wavelet transform can easily be implemented as a fast wavelet transform (FWT). Its disadvantage is that it is discontinuous and therefore not differentiable.

## The functions of the Haar wavelet basis

### Scaling function

The scaling function or "father wavelet" of the Haar wavelet basis is the indicator function of the interval $[0,1)$:

$$\phi(x)=\chi_{[0,1)}(x)={\begin{cases}1 & 0\leq x<1\\0 & {\text{otherwise}}\end{cases}}$$

It satisfies the functional equation

$$\phi(x)=\phi(2x)+\phi(2x-1)={\sqrt{2}}\left(a_{0}\phi(2x)+a_{1}\phi(2x-1)\right)$$ with $a_{0}=a_{1}={\tfrac{1}{\sqrt{2}}}$.

### Wavelet function

The wavelet function is the "pushed together" difference between two successive scaling functions:

$$\psi(x)=\phi(2x)-\phi(2x-1)={\sqrt{2}}\left(b_{0}\phi(2x)+b_{1}\phi(2x-1)\right)={\begin{cases}1 & 0\leq x<1/2\\-1 & 1/2\leq x<1\\0 & {\text{otherwise}}\end{cases}},$$

where $(b_{0},b_{1})=\left({\tfrac{1}{\sqrt{2}}},-{\tfrac{1}{\sqrt{2}}}\right)$.

The notation with a prefactor ensures that the matrix

$$H={\begin{pmatrix}a_{0}&a_{1}\\b_{0}&b_{1}\end{pmatrix}}={\frac{1}{\sqrt{2}}}\,{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}$$

is an orthogonal matrix. This is one of the conditions that orthogonal wavelets must satisfy.
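This orthogonality can be verified numerically; a minimal sketch using NumPy, with the matrix name `H` following the text:

```python
import numpy as np

# Haar transform matrix built from the scaling coefficients (a_0, a_1)
# and the wavelet coefficients (b_0, b_1), each equal to 1/sqrt(2) in magnitude.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# Orthogonality: H times its transpose is the identity matrix ...
assert np.allclose(H @ H.T, np.eye(2))
# ... so the inverse transform is simply the transpose.
assert np.allclose(np.linalg.inv(H), H.T)
```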

### Multi-scale analysis

This function generates the multiscale analysis of the step functions. In it, each function $f\in L^{2}(\mathbb{R})$ with "finite energy" is assigned the following projection on each scale $J\in\mathbb{Z}$:

$f\mapsto P_{J}(f)$ with $P_{J}(f)(x)=\sum_{n\in\mathbb{Z}}\left(\int_{0}^{1}f\left(2^{-J}(n+t)\right)\,\mathrm{d}t\right)\cdot\phi(2^{J}x-n)$.

The difference between two scales can then be expressed by the "mother wavelet" or the actual wavelet function:

$$P_{J+1}(f)(x)-P_{J}(f)(x)=\sum_{n\in\mathbb{Z}}\left(\int_{0}^{1}f\left(2^{-J-1}(2n+t)\right)\,\mathrm{d}t-\int_{0}^{1}f\left(2^{-J-1}(2n+1+t)\right)\,\mathrm{d}t\right)\cdot\psi(2^{J}x-n).$$

With $\phi_{j,k}(x)={\sqrt{2}}^{\,j}\,\phi(2^{j}x-k)$ and $\psi_{j,k}(x)={\sqrt{2}}^{\,j}\,\psi(2^{j}x-k)$ as functions in the Hilbert space $L^{2}(\mathbb{R})$, the following hold:

• all these functions have $L^{2}$-norm 1,
• $\phi_{j,k}$ is orthogonal to $\phi_{j,l}$ if $k\neq l$,
• $\psi_{i,k}$ is orthogonal to $\psi_{j,l}$ if $i\neq j$ or $k\neq l$,
• the $\psi_{i,k}$ form a Hilbert basis of $L^{2}(\mathbb{R})$.

## Fast Haar wavelet transform

Let a discrete signal $f$ be given, represented by a finite or square-summable sequence

$$f=(\dots,f_{-2},f_{-1},f_{0},f_{1},f_{2},f_{3},\dots).$$

The step function

$$F(x)=\dots+f_{-1}\phi_{0,-1}(x)+f_{0}\phi_{0,0}(x)+f_{1}\phi_{0,1}(x)+f_{2}\phi_{0,2}(x)+\dots$$

is assigned to it as a continuous signal.

### Forward transformation

From the discrete signal, a vector-valued signal, the so-called polyphase decomposition, is generated by stacking its members vertically in pairs:

$$f_{p}=\left(\dots,\left({f_{-2}\atop f_{-1}}\right),\left({f_{0}\atop f_{1}}\right),\left({f_{2}\atop f_{3}}\right),\dots\right).$$

This is now multiplied by the Haar transform matrix $H:={\frac{1}{\sqrt{2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}$:

$$\left({s\atop d}\right):=Hf_{p}=\left(\dots,\left({s_{-1}\atop d_{-1}}\right),\left({s_{0}\atop d_{0}}\right),\left({s_{1}\atop d_{1}}\right),\dots\right),$$

where $s_{k}={\frac{f_{2k+1}+f_{2k}}{\sqrt{2}}}$ and $d_{k}={\frac{f_{2k+1}-f_{2k}}{\sqrt{2}}}$.
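The forward step can be sketched directly from these formulas; a minimal NumPy illustration, with the helper name `haar_forward` chosen here for exposition:

```python
import numpy as np

def haar_forward(f):
    """One level of the Haar wavelet transform of an even-length signal.

    Returns the mean-value signal s and the difference signal d with
    s_k = (f_{2k+1} + f_{2k}) / sqrt(2) and d_k = (f_{2k+1} - f_{2k}) / sqrt(2).
    """
    f = np.asarray(f, dtype=float)
    even, odd = f[0::2], f[1::2]       # polyphase decomposition into pairs
    s = (odd + even) / np.sqrt(2.0)
    d = (odd - even) / np.sqrt(2.0)
    return s, d

s, d = haar_forward([4.0, 6.0, 10.0, 12.0])
# Orthogonality of the transform preserves the energy of the signal.
assert np.isclose(np.sum(s**2) + np.sum(d**2), 4**2 + 6**2 + 10**2 + 12**2)
```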

### Inverse transformation

We obtain a mean-value signal $s$ and a difference signal $d$, from which the output signal can be recovered by simply reversing the steps taken:

$$f_{2k}={\frac{s_{k}-d_{k}}{\sqrt{2}}}\quad{\text{and}}\quad f_{2k+1}={\frac{s_{k}+d_{k}}{\sqrt{2}}}$$
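The inverse step mirrors the forward step; a sketch under the same assumptions, with the hypothetical helper name `haar_inverse`:

```python
import numpy as np

def haar_inverse(s, d):
    """Reverse one Haar step: interleave f_{2k} = (s_k - d_k)/sqrt(2)
    and f_{2k+1} = (s_k + d_k)/sqrt(2)."""
    s, d = np.asarray(s, dtype=float), np.asarray(d, dtype=float)
    f = np.empty(2 * len(s))
    f[0::2] = (s - d) / np.sqrt(2.0)
    f[1::2] = (s + d) / np.sqrt(2.0)
    return f

# Round trip: s and d computed from f by the forward formulas of the text.
f = np.array([4.0, 6.0, 10.0, 12.0])
s = (f[1::2] + f[0::2]) / np.sqrt(2.0)
d = (f[1::2] - f[0::2]) / np.sqrt(2.0)
assert np.allclose(haar_inverse(s, d), f)
```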

If the fluctuation from element to element in the output signal is bounded by a small $\epsilon>0$, then the fluctuation in $s$ is bounded by ${\sqrt{2}}\,\epsilon$, i.e. still small, and the size of the elements in $d$ is bounded by $\epsilon/{\sqrt{2}}$. A smooth signal is thus decomposed into a still smooth signal at half the sampling frequency and a small difference signal. This is the starting point for wavelet compression.

### Recursive filter bank

We can repeat the process by declaring $s$ to be the new output signal and decomposing it with the above procedure. This yields a sequence of decompositions $s^{0}:=f,\;(s^{1},d^{1}),\;(s^{2},d^{2},d^{1}),\;\dots,\;(s^{T},d^{T},\dots,d^{2},d^{1})$, where $s^{k}$ has $2^{-k}$ times the original sampling frequency and fluctuation bounded by $2^{k/2}\epsilon$, and $d^{k}$ likewise has $2^{-k}$ times the original sampling frequency and members bounded by $2^{-k/2}\epsilon$.
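The recursion can be sketched as a small loop; `haar_decompose` is an illustrative name, not a library function:

```python
import numpy as np

def haar_decompose(f, levels):
    """Recursive Haar filter bank: returns (s^levels, [d^levels, ..., d^1])."""
    s = np.asarray(f, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = s[0::2], s[1::2]
        details.append((odd - even) / np.sqrt(2.0))   # detail signal d^k
        s = (odd + even) / np.sqrt(2.0)               # s^k at half the sampling rate
    return s, details[::-1]

s2, (d2, d1) = haar_decompose([4.0, 6.0, 10.0, 12.0], levels=2)
# s2 holds one scaled overall mean; d2 and d1 are the detail signals.
```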

### Interpretation

As a discrete signal $f$, one usually considers a real sequence $(f_{n})_{n\in\mathbb{Z}}$ with finite energy,

$$\sum_{n=-\infty}^{\infty}|f_{n}|^{2}<\infty.$$

Among these there are some very simple sequences $\delta^{n}$, called Kronecker or Dirac deltas, one for each $n\in\mathbb{Z}$. Their members satisfy $\delta^{n}{}_{n}=1$, and $\delta^{n}{}_{k}=0$ if $k\neq n$.

Now we can trivially write every signal as a series in signal space

$$f=\sum_{n=-\infty}^{\infty}f_{n}\cdot\delta^{n}$$

or as the sum of two series

$$f=\sum_{n=-\infty}^{\infty}f_{2n}\cdot\delta^{2n}+\sum_{n=-\infty}^{\infty}f_{2n+1}\cdot\delta^{2n+1}.$$

In many practically relevant signal classes, e.g. oversampled band-limited continuous signals, the values of adjacent sequence members are also close to each other, i.e. $f_{2n}$ and $f_{2n+1}$ generally lie close together relative to their absolute value. However, this is not taken into account at all in the above series. The mean value and difference of $f_{2n}$ and $f_{2n+1}$ would express their similarity more strongly: the mean value is close to both values and the difference is small. We use the identity

$$ac+bd={\frac{1}{2}}(a+b)(c+d)+{\frac{1}{2}}(a-b)(c-d)$$

in order to combine neighboring terms of the first series, or corresponding terms of the second decomposition, into (scaled) mean values and differences:

$$f=\sum_{n=-\infty}^{\infty}{\frac{f_{2n}+f_{2n+1}}{\sqrt{2}}}\cdot{\frac{\delta^{2n}+\delta^{2n+1}}{\sqrt{2}}}+\sum_{n=-\infty}^{\infty}{\frac{f_{2n}-f_{2n+1}}{\sqrt{2}}}\cdot{\frac{\delta^{2n}-\delta^{2n+1}}{\sqrt{2}}}$$

We now introduce new names:

• the new basis sequences
$a^{n}:={\frac{\delta^{2n}+\delta^{2n+1}}{\sqrt{2}}}$ and $b^{n}:={\frac{\delta^{2n}-\delta^{2n+1}}{\sqrt{2}}}$,
• with the new transformed coefficients
$s_{n}:={\frac{f_{2n}+f_{2n+1}}{\sqrt{2}}}$ and $d_{n}:={\frac{f_{2n}-f_{2n+1}}{\sqrt{2}}}$.

We thus obtain the decomposition of the Haar wavelet transform

$$f=\sum_{n=-\infty}^{\infty}s_{n}\cdot a^{n}+\sum_{n=-\infty}^{\infty}d_{n}\cdot b^{n}$$

and by means of the infinite Euclidean scalar product we can write

$s_{n}=\langle f,\,a^{n}\rangle$ and $d_{n}=\langle f,\,b^{n}\rangle$.
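These inner-product formulas can be checked on a short finite slice of a signal; a sketch, where the truncation to four samples is an assumption made purely for illustration:

```python
import numpy as np

f = np.array([4.0, 6.0, 10.0, 12.0])   # indices 0..3 stand in for the infinite sequence

def delta(n, length=4):
    """Kronecker delta sequence delta^n, truncated to `length` samples."""
    e = np.zeros(length)
    e[n] = 1.0
    return e

sqrt2 = np.sqrt(2.0)
for n in range(2):
    a_n = (delta(2 * n) + delta(2 * n + 1)) / sqrt2
    b_n = (delta(2 * n) - delta(2 * n + 1)) / sqrt2
    # <f, a^n> and <f, b^n> reproduce the direct formulas for s_n and d_n.
    assert np.isclose(f @ a_n, (f[2 * n] + f[2 * n + 1]) / sqrt2)
    assert np.isclose(f @ b_n, (f[2 * n] - f[2 * n + 1]) / sqrt2)
```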

The last three identities describe a "conjugate quadrature filter bank (CQF)", which can also be defined for more general basis sequences $a^{n}$ and $b^{n}$. The basis sequences $a^{n}$ all arise from $a^{0}$ by shifting by $2n$, and the $b^{n}$ arise from $b^{0}$ in the same way. More on this in the article on Daubechies wavelets.

The sequence $s=(s_{n})$ now contains a smoothed version of the output signal at half the sampling rate, so $s$ can also be decomposed according to this rule, and the procedure can be continued recursively to a certain depth. From an output signal $s^{0}=f$, the tuples

$(s^{1},d^{1})$, $(s^{2},d^{2},d^{1})$, $(s^{3},d^{3},d^{2},d^{1})$, ...

are thus generated successively.

If $f$ is finite, i.e. zero almost everywhere, with length $N$, then the sequences in the decomposition essentially (i.e. up to additive constants) have the lengths

$\left({\tfrac{N}{2}},{\tfrac{N}{2}}\right)$, $\left({\tfrac{N}{4}},{\tfrac{N}{4}},{\tfrac{N}{2}}\right)$, $\left({\tfrac{N}{8}},{\tfrac{N}{8}},{\tfrac{N}{4}},{\tfrac{N}{2}}\right)$, ...

so that the total number of essential coefficients is preserved. The sequences in the decomposition are usually better suited for further processing such as compression or the search for certain features than the raw output signal.

## Modifications

The polyphase decomposition of the output signal can also use a block size $s$ other than 2; the corresponding Haar matrix must then be an orthogonal matrix whose first row consists only of entries $1/{\sqrt{s}}$. This requirement is met by the matrices of the discrete cosine transform and of the Walsh-Hadamard transform.

The Haar wavelet transform corresponds to a discrete cosine transform of block size $s=2$, applied in the image to $2\times 2$ pixel rectangles one after the other in the horizontal and vertical directions.
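The agreement for block size 2 can be checked by building the orthonormal DCT-II matrix for $N=2$ from its standard definition and comparing it with the Haar matrix; a minimal sketch:

```python
import numpy as np

# Orthonormal DCT-II matrix for block size N = 2, from the textbook definition
# C[k, n] = sqrt(2/N) * cos(pi * k * (n + 1/2) / N), with the k = 0 row rescaled.
N = 2
C = np.array([[np.cos(np.pi * k * (n + 0.5) / N) for n in range(N)]
              for k in range(N)]) * np.sqrt(2.0 / N)
C[0] /= np.sqrt(2.0)   # rescale the k = 0 row to make the matrix orthonormal

# Haar transform matrix from the text: identical to the size-2 DCT matrix.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
assert np.allclose(C, H)
```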