# Convolution (mathematics)

In functional analysis, a subarea of mathematics, the **convolution** (from the Latin *convolvere*, "to roll up") is a mathematical operation that assigns to two functions $f$ and $g$ a third function $f \ast g$.

Intuitively, the convolution $f \ast g$ replaces each value of $f$ by the weighted mean of the values surrounding it. More precisely, for the value $(f \ast g)(x)$ the function values $f(\tau)$ are weighted with $g(x-\tau)$. The resulting "overlay" of $f$ with mirrored and shifted copies of $g$ (one also speaks of a "smearing" of $f$) can, for example, be used to form a moving average.
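As an illustration (a minimal sketch, not part of the original text), the discrete counterpart of this smearing turns a uniform weight sequence into a moving average; the helper `convolve` below is an ad-hoc direct implementation of the discrete convolution defined later in this article.

```python
# Sketch: a 3-point moving average as a discrete convolution (illustrative only).
def convolve(f, g):
    """Discrete convolution of two finite sequences, zero-padded outside."""
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k]
                for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

signal = [1.0, 2.0, 6.0, 2.0, 1.0, 7.0]
kernel = [1 / 3, 1 / 3, 1 / 3]        # uniform weights -> 3-point moving average
smoothed = convolve(signal, kernel)   # interior entries average three neighbors
```

Each interior entry of `smoothed` is the mean of three adjacent signal values, e.g. `smoothed[2]` averages the first three samples.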

## Definition

### Convolution of functions on $\mathbb{R}^n$

The convolution $f \ast g$ of two functions $f, g \colon \mathbb{R}^{n} \to \mathbb{C}$ is defined by

${\displaystyle (f*g)(x):=\int_{\mathbb{R}^{n}} f(\tau)\, g(x-\tau)\, \mathrm{d}\tau .}$

In order to keep the definition as general as possible, one does not at first restrict the space of admissible functions, and instead requires that the integral be well-defined for almost all values of $x$.

In the case $f, g \in {\mathcal{L}}^{1}(\mathbb{R}^{n})$, i.e. for two integrable functions (in particular this means that the improper absolute-value integral is finite), one can show that this requirement is always fulfilled.

### Convolution of periodic functions

For periodic functions $f$ and $g$ of a real variable with period $T > 0$, the convolution is defined as

${\displaystyle (f\ast g)(t)={\frac{1}{T}}\int_{a}^{a+T} f(\tau)\, g(t-\tau)\, \mathrm{d}\tau ,}$

where the integration extends over an arbitrary interval of period length $T$. The result $f \ast g$ is again a periodic function with period $T$.
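The periodic case has a direct discrete analogue, the cyclic convolution, sketched here for sequences of period $N$ (an illustrative implementation; the scaling by $1/N$ mirrors the $1/T$ prefactor above):

```python
# Sketch of the periodic (cyclic) convolution for sequences of period N,
# the discrete analogue of (f*g)(t) = (1/T) * integral of f(tau) g(t - tau).
def cyclic_convolve(f, g):
    N = len(f)                       # one full period of both sequences
    return [sum(f[k] * g[(n - k) % N] for k in range(N)) / N
            for n in range(N)]

f = [1.0, 0.0, 0.0, 1.0]
g = [1.0, 2.0, 3.0, 4.0]
h = cyclic_convolve(f, g)            # again periodic with the same period N
```

The output has the same period as the inputs, matching the statement above.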

### Convolution for functions on intervals

In the case of a bounded domain $\mathbb{D}$, one extends $f$ and $g$ to the whole space in order to be able to carry out the convolution. Depending on the application, there are several approaches to this.

Continuation by zero
The functions are, by definition, continued outside the domain by the zero function: $f{\big |}_{\mathbb{R}^{n}\setminus \mathbb{D}}\equiv 0$.
Periodic continuation
The functions are continued periodically outside the domain, and the convolution defined for periodic functions is used.

In general, the convolution of such continued functions is no longer well-defined. A frequently occurring exception are continuous functions with compact support, $f \in C_{c}(\mathbb{D}) \cap {\mathcal{L}}^{1}(\mathbb{D})$, which can be continued by zero to an integrable function in ${\mathcal{L}}^{1}(\mathbb{R}^{n})$.

## Significance

Convolution of the rectangular function with itself results in the triangular function.

A vivid interpretation of the one-dimensional convolution is the time-dependent weighting of one function by another. The function value of the weighting function $f$ at a point $\tau$ indicates how strongly the value of the weighted function lying $\tau$ in the past, $g(t-\tau)$, enters into the value of the result function at time $t$.

Convolution is a suitable model for describing numerous physical processes.

## Smoothing kernel

Convolution with the Gaussian function.

One way to "smooth" a function $f$ is to convolve it with a so-called smoothing kernel $j$. The resulting function $F = j * f$ is smooth (infinitely often continuously differentiable), its support is only slightly larger than that of $f$, and the deviation in the $L^1$ norm can be bounded by any given positive constant.

A $d$-dimensional smoothing kernel or mollifier is an infinitely often continuously differentiable function $j \colon \mathbb{R}^{d} \to \mathbb{R}_{\geq 0}$ that is nonnegative, has its support in the closed unit ball $B(0,1)$, and has integral 1, achieved through the appropriate choice of a constant $c$.

One example is the smoothing kernel

${\displaystyle j(x)={\begin{cases}c\cdot \exp\!\left(-{\frac{1}{1-|x|^{2}}}\right),&|x|<1\\0,&{\text{otherwise}}\end{cases}}}$

where $c=\left[\int_{B(0,1)}\exp\!\left(-{\frac{1}{1-|x|^{2}}}\right)dx\right]^{-1}<\infty$ is a normalization constant.

This function can be used to create further smoothing kernels by setting, for $e \in (0,1]$:

${\displaystyle j_{e}(x)={\frac{1}{e^{d}}}\cdot j\!\left({\frac{x}{e}}\right),}$ where $j_{e}(x)=0$ for $|x|>e$.

Smoothing kernels $j$ and $j_{1/2}$

## Examples

### Rectangular function

Let

${\displaystyle f\colon \mathbb{R}\to \mathbb{R},\;x\mapsto {\begin{cases}1&-1\leq x\leq 2\\0&{\text{otherwise}}\end{cases}}}$.

By convolving $f$ (shown in red) with the smoothing kernel $j_{1/2}$, a smooth function $F = f * j_{1/2}$ (shown in blue) with compact support is created, which deviates from $f$ in the $L^1$ norm by about 0.4, i.e.

${\displaystyle \int_{-\infty}^{\infty}|F(t)-f(t)|\,\mathrm{d}t<0.4}$.

Convolving with $j_{e}$ for $e$ less than 1/2 yields smooth functions that are even closer to $f$ in the integral norm.
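The stated $L^1$ bound can be verified numerically; the sketch below convolves the indicator of $[-1, 2]$ with $j_{1/2}$ by quadrature (the grid resolutions are ad hoc, chosen only for this illustration):

```python
import math

# Numerical sketch of the claim: convolving the indicator of [-1, 2] with the
# smoothing kernel j_{1/2} gives a smooth F with L1 distance to f below 0.4.
def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# Normalization constant c of the kernel j (midpoint rule on [-1, 1]).
M = 20_000
h = 2.0 / M
c = 1.0 / (h * sum(bump(-1.0 + (i + 0.5) * h) for i in range(M)))

def j_half(x):
    # j_e with e = 1/2 in one dimension: (1/e) * j(x/e) = 2 * j(2x).
    return 2.0 * c * bump(2.0 * x)

def f(x):
    return 1.0 if -1.0 <= x <= 2.0 else 0.0

def F(x):
    # (f * j_half)(x), midpoint rule over the support [-1/2, 1/2] of j_half.
    n = 400
    step = 1.0 / n
    return step * sum(f(x - s) * j_half(s)
                      for s in (-0.5 + (k + 0.5) * step for k in range(n)))

# L1 distance between F and f over [-2, 3], where the whole difference lives.
N = 1000
dx = 5.0 / N
deviation = dx * sum(abs(F(x) - f(x))
                     for x in (-2.0 + (i + 0.5) * dx for i in range(N)))
```

The computed deviation comes out around a third, consistent with the bound of 0.4 stated above.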

### Normal distribution

If a normal distribution with mean $\mu_{1}$ and standard deviation $\sigma_{1}$ is convolved with a second normal distribution with parameters $\mu_{2}$ and $\sigma_{2}$, the result is again a normal distribution, with mean $\mu = \mu_{1} + \mu_{2}$ and standard deviation $\sigma = {\sqrt{\sigma_{1}^{2} + \sigma_{2}^{2}}}$.

Proof:

${\displaystyle \int_{-\infty}^{\infty}{\frac{1}{{\sqrt{2\pi}}\,\sigma_{1}}}e^{-{\frac{(\xi-\mu_{1})^{2}}{2\sigma_{1}^{2}}}}\cdot{\frac{1}{{\sqrt{2\pi}}\,\sigma_{2}}}e^{-{\frac{(x-\xi-\mu_{2})^{2}}{2\sigma_{2}^{2}}}}\,\mathrm{d}\xi}$

${\displaystyle ={\frac{1}{2\pi\sigma_{1}\sigma_{2}}}\int_{-\infty}^{\infty}e^{-{\frac{\xi^{2}+\mu_{1}^{2}-2\xi\mu_{1}}{2\sigma_{1}^{2}}}-{\frac{\xi^{2}+(x-\mu_{2})^{2}-2\xi(x-\mu_{2})}{2\sigma_{2}^{2}}}}\,\mathrm{d}\xi}$

${\displaystyle ={\frac{1}{2\pi\sigma_{1}\sigma_{2}}}e^{-{\frac{\mu_{1}^{2}}{2\sigma_{1}^{2}}}-{\frac{(x-\mu_{2})^{2}}{2\sigma_{2}^{2}}}}\int_{-\infty}^{\infty}e^{-{\frac{\xi^{2}\sigma_{2}^{2}-2\xi\mu_{1}\sigma_{2}^{2}+\xi^{2}\sigma_{1}^{2}-2\xi(x-\mu_{2})\sigma_{1}^{2}}{2\sigma_{1}^{2}\sigma_{2}^{2}}}}\,\mathrm{d}\xi}$

Completing the square in $\xi$ gives

${\displaystyle ={\frac{1}{2\pi\sigma_{1}\sigma_{2}}}e^{-{\frac{\mu_{1}^{2}\sigma_{2}^{2}+(x-\mu_{2})^{2}\sigma_{1}^{2}}{2\sigma_{1}^{2}\sigma_{2}^{2}}}}\int_{-\infty}^{\infty}e^{-{\frac{\sigma_{1}^{2}+\sigma_{2}^{2}}{2\sigma_{1}^{2}\sigma_{2}^{2}}}\left[\left(\xi-{\frac{\mu_{1}\sigma_{2}^{2}+(x-\mu_{2})\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}}\right)^{2}-\left({\frac{\mu_{1}\sigma_{2}^{2}+(x-\mu_{2})\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}}\right)^{2}\right]}\,\mathrm{d}\xi}$

${\displaystyle ={\frac{1}{2\pi\sigma_{1}\sigma_{2}}}e^{-{\frac{\mu_{1}^{2}\sigma_{2}^{2}+(x-\mu_{2})^{2}\sigma_{1}^{2}-{\frac{\left(\mu_{1}\sigma_{2}^{2}+(x-\mu_{2})\sigma_{1}^{2}\right)^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}}}{2\sigma_{1}^{2}\sigma_{2}^{2}}}}\int_{-\infty}^{\infty}e^{-{\frac{\left(\xi-{\frac{\mu_{1}\sigma_{2}^{2}+(x-\mu_{2})\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}}\right)^{2}}{2{\frac{\sigma_{1}^{2}\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}}}}}\,\mathrm{d}\xi}$

Evaluating the Gaussian integral gives ${\sqrt{2\pi}}\,\sigma_{1}\sigma_{2}/{\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}}$, and after cancellation in the exponent

${\displaystyle ={\frac{1}{{\sqrt{2\pi}}{\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}}}}e^{-{\frac{\mu_{1}^{2}+(x-\mu_{2})^{2}-2\mu_{1}(x-\mu_{2})}{2(\sigma_{1}^{2}+\sigma_{2}^{2})}}}}$

${\displaystyle ={\frac{1}{{\sqrt{2\pi}}{\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}}}}e^{-{\frac{\left[x-(\mu_{1}+\mu_{2})\right]^{2}}{2(\sigma_{1}^{2}+\sigma_{2}^{2})}}}.}$

This is how Gaussian error propagation for addition can be justified. Given are two rods with error-prone lengths $L_{1} = (1 \pm 0.03)\,\mathrm{m}$ and $L_{2} = (2 \pm 0.04)\,\mathrm{m}$. If one wants to know how long the assembled rod is, one can regard the two rods as a randomly distributed ensemble. It may, for example, happen that rod 1 is really $1.01\,\mathrm{m}$ long. This event occurs with a certain probability, which can be read off from the normal distribution with $\mu_{1} = 1\,\mathrm{m}$, $\sigma_{1} = 0.03\,\mathrm{m}$. For this event, the total length of the two rods is normally distributed, namely with the normal distribution of the second rod multiplied by the probability that the first rod is $1.01\,\mathrm{m}$ long. Going through this for all lengths of rod 1 and adding up the distributions of the composite rod corresponds to the integration given in the proof, which is precisely a convolution. The composite rod is therefore also normally distributed, with length $L = (3 \pm 0.05)\,\mathrm{m}$.
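Numerically, this can be checked with the rod parameters from above: the convolution of the two densities agrees with the normal density with mean $3$ and standard deviation $0.05$ (a sketch with ad-hoc quadrature settings):

```python
import math

def gauss(x, mu, sigma):
    # Density of the normal distribution N(mu, sigma).
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

mu1, s1 = 1.0, 0.03   # rod 1
mu2, s2 = 2.0, 0.04   # rod 2

def conv_at(x, n=4000, lo=0.7, hi=1.3):
    # Midpoint rule for the convolution integral; [lo, hi] covers +-10 sigma of rod 1.
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        xi = lo + (k + 0.5) * h
        total += gauss(xi, mu1, s1) * gauss(x - xi, mu2, s2)
    return h * total

sigma_sum = math.sqrt(s1 ** 2 + s2 ** 2)   # predicted spread: 0.05
```

The convolved density matches `gauss(x, 3.0, 0.05)` to quadrature accuracy.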

## Properties of the convolution

### Algebraic properties

The convolution of $L^{1}(\mathbb{R}^{n})$ functions together with addition satisfies almost all axioms of a commutative ring, with the exception that this structure has no neutral element. One jokingly speaks of a "rng", because the "i" for "identity" is missing. In detail, the following properties hold:

• Commutativity
${\displaystyle f\ast g=g\ast f}$
• Associativity
${\displaystyle f\ast (g\ast h)=(f\ast g)\ast h}$
• Distributivity
${\displaystyle f\ast (g+h)=(f\ast g)+(f\ast h)}$
• Associativity with scalar multiplication
${\displaystyle a(f\ast g)=(af)\ast g=f\ast (ag)}$
Here $a$ is an arbitrary complex number.

### Differentiation rule

${\displaystyle \mathrm{D}(f\ast g)=(\mathrm{D}f)\ast g=f\ast \mathrm{D}g}$

Here $\mathrm{D}f$ is the distributional derivative of $f$. If $f$ is (totally) differentiable, the distributional derivative and the (total) derivative agree. Two interesting examples are:

• ${\displaystyle \mathrm{D}(f\ast \delta )(x)=(f\ast \mathrm{D}\delta )(x)=\mathrm{D}f(x)}$, where $\mathrm{D}\delta$ is the derivative of the delta distribution. Differentiation can thus be understood as a convolution operator.
• ${\displaystyle (f*\Theta )(x)=\int_{-\infty }^{x}f(t)\,\mathrm{d}t}$, where $\Theta$ is the Heaviside step function; this gives an antiderivative of $f$.

### Integration

If $f$ and $g$ are integrable functions, then

${\displaystyle \int_{\mathbb{R}^{n}}(f*g)(x)\,dx=\left(\int_{\mathbb{R}^{n}}f(x)\,dx\right)\left(\int_{\mathbb{R}^{n}}g(x)\,dx\right).}$

This is a simple consequence of Fubini's theorem.
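The discrete analogue is easy to check: summing a zero-padded discrete convolution factors into the product of the individual sums (illustrative helper code, not from the text):

```python
# Discrete analogue of the integration rule: the sum of a (zero-padded)
# discrete convolution equals the product of the individual sums.
def convolve(f, g):
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k] for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

f = [1.0, -2.0, 3.0]
g = [0.5, 4.0]
total = sum(convolve(f, g))          # should equal sum(f) * sum(g)
```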

### Convolution theorem

Using the Fourier transform

${\displaystyle {\mathcal{F}}(f)(t)={\frac{1}{(2\pi )^{\frac{n}{2}}}}\int_{\mathbb{R}^{n}}f(x)\,e^{-\mathrm{i}\,t\cdot x}\,\mathrm{d}x,\quad f\in L^{1}(\mathbb{R}^{n})}$

one can express the Fourier transform of the convolution of two functions in terms of the product of their Fourier transforms:

${\displaystyle {\mathcal{F}}(f*g)=(2\pi )^{\tfrac{n}{2}}\,{\mathcal{F}}(f)\cdot {\mathcal{F}}(g),\quad f,g\in L^{1}(\mathbb{R}^{n})}$

A similar theorem also holds for the Laplace transformation. The converse of the convolution theorem states:

${\displaystyle {\mathcal{F}}(f)*{\mathcal{F}}(g)=(2\pi )^{\tfrac{n}{2}}\,{\mathcal{F}}(f\cdot g)}$

Here $\cdot$ denotes the pointwise product of two functions; $f=g\cdot h$ is thus the function with $f(x)=g(x)\cdot h(x)$ at every point $x$.
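In the discrete setting the convolution theorem can be verified directly: for the DFT (in its usual normalization, i.e. without the $(2\pi)^{n/2}$ factor of the symmetric convention above), the transform of a cyclic convolution is the pointwise product of the transforms. A small sketch:

```python
import cmath

# Discrete sketch of the convolution theorem: DFT(cyclic f*g) = DFT(f) * DFT(g).
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def cyclic_convolve(f, g):
    N = len(f)
    return [sum(f[k] * g[(n - k) % N] for k in range(N)) for n in range(N)]

f = [1.0, 2.0, 0.0, -1.0]
g = [3.0, 0.0, 1.0, 2.0]
lhs = dft(cyclic_convolve(f, g))
rhs = [a * b for a, b in zip(dft(f), dft(g))]   # agree up to rounding
```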

### Reflection operator

Let $S$ be the reflection operator, $Sf(t)=f(-t)$ for all $t$. Then:

• ${\displaystyle (Sf*g)(t)=\int_{\mathbb{R}^{n}}f(-\tau )g(t-\tau )\,\mathrm{d}\tau =\int_{\mathbb{R}^{n}}f(\tau )g(t+\tau )\,\mathrm{d}\tau =S(f*Sg)(t)}$ and
• ${\displaystyle (Sf*Sg)(t)=\int_{\mathbb{R}^{n}}f(\tau )g(-t-\tau )\,\mathrm{d}\tau =S(f*g)(t).}$

### Convolution of dual $L^p$ functions is continuous

Let $f\in L^{p}(\mathbb{R}^{n})$ and $g\in L^{q}(\mathbb{R}^{n})$ with $1\leq p,q\leq \infty$ and ${\tfrac{1}{p}}+{\tfrac{1}{q}}=1$. Then the convolution $f*g$ is a bounded continuous function on $\mathbb{R}^{n}$. If $p\neq \infty \neq q$, the convolution vanishes at infinity, i.e. it is a $C_{0}$ function. This statement is also correct if $f$ is a real Hardy function and $g$ lies in BMO.

### Generalized Young's inequality

From Hölder's inequality follows the generalized Young inequality

${\displaystyle \|f*g\|_{L^{r}}\leq \|f\|_{L^{p}}\,\|g\|_{L^{q}}}$

for ${\tfrac{1}{p}}+{\tfrac{1}{q}}=1+{\tfrac{1}{r}}$ and $p,q,r\geq 1$.

## Convolution as an integral operator

Let $h\in L^{2}([0,2\pi ])$. Then the convolution can also be understood as an integral operator with integral kernel $h$; that is, one can define the convolution as the operator $T_{h}\colon L^{2}([0,2\pi ])\to L^{2}([0,2\pi ])$,

${\displaystyle T_{h}f(s):={\frac{1}{2\pi }}\int_{[0,2\pi ]}f(t)\,h(s-t)\,\mathrm{d}t .}$

This is a linear and compact operator that is moreover normal. Its adjoint operator is given by

${\displaystyle T_{h}^{*}f(s)={\frac{1}{2\pi }}\int_{[0,2\pi ]}f(t)\,{\overline{h(t-s)}}\,\mathrm{d}t\,.}$

Furthermore, $T_{h}$ is a Hilbert-Schmidt operator.

## Discrete convolution

In digital signal processing and digital image processing, one usually deals with discrete functions that have to be convolved with one another. In this case a sum takes the place of the integral, and one speaks of the time-discrete convolution.

### Definition

Let $f,g\colon D\to \mathbb{C}$ be functions with discrete domain $D\subseteq \mathbb{Z}$. Then the discrete convolution is defined by

${\displaystyle (f*g)(n)=\sum_{k\in D}f(k)\,g(n-k)}$.

Here the summation range is the entire domain $D$ of both functions. In the case of a bounded domain, $f$ and $g$ are usually extended by zeros.

If the domain is finite, the two functions can also be understood as vectors ${\vec{f}}\in \mathbb{C}^{n_{f}}$ and ${\vec{g}}\in \mathbb{C}^{n_{g}}$. The convolution is then given by the matrix-vector product:

${\displaystyle (f*g)(n)=\mathbf{G}\,{\vec{f}}}$

with the matrix

${\displaystyle \mathbf{G}={\begin{bmatrix}{\vec{g}}&0&0&\cdots &0\\0&{\vec{g}}&0&\cdots &0\\\vdots &0&{\vec{g}}&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &{\vec{g}}\end{bmatrix}}}$

with $\mathbf{G}\in \mathbb{C}^{m\times n_{f}}$ and $m=n_{f}+n_{g}-1$.

If, instead of padding with zeros, one continues the columns of $\mathbf{G}$ periodically downwards and upwards with the entries of ${\vec{g}}$, one obtains a circulant matrix, and the result is the cyclic convolution.
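The matrix form can be made concrete as follows (an illustrative sketch; `convolution_matrix` is an ad-hoc helper building the banded matrix $\mathbf{G}$ above column by column):

```python
# Sketch: the discrete convolution as a matrix-vector product. Each column of G
# is a shifted copy of g; G has shape m x n_f with m = n_f + n_g - 1.
def convolution_matrix(g, n_f):
    n_g = len(g)
    m = n_f + n_g - 1
    G = [[0.0] * n_f for _ in range(m)]
    for col in range(n_f):
        for i, gi in enumerate(g):
            G[col + i][col] = gi       # column `col` holds g, shifted down by `col`
    return G

f = [1.0, 2.0, 3.0]
g = [1.0, -1.0]                        # a difference filter
G = convolution_matrix(g, len(f))
product = [sum(G[i][j] * f[j] for j in range(len(f))) for i in range(len(G))]
```

The product reproduces the zero-padded discrete convolution of `f` and `g`.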

### Applications

The product of two polynomials $f$ and $g$, for example, is the discrete convolution of their coefficient sequences extended by zeros. The infinite series occurring here always have only finitely many nonzero summands. The product of two formal Laurent series with finite principal part is defined analogously.
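For example (an illustrative sketch, with an ad-hoc `convolve` helper), multiplying $(1 + 2x)(3 + x^2)$ amounts to convolving the coefficient lists:

```python
# Polynomial multiplication as discrete convolution of coefficient sequences:
# (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3.
def convolve(f, g):
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k] for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

p = [1, 2]        # coefficients of 1 + 2x (ascending powers)
q = [3, 0, 1]     # coefficients of 3 + x^2
product = convolve(p, q)
```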

A computationally efficient algorithm for the discrete convolution is the fast convolution, which in turn is based on the fast Fourier transform (FFT) for the efficient computation of the discrete Fourier transform.
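A sketch of this idea (assuming power-of-two zero-padding and a textbook recursive Cooley-Tukey FFT; not an optimized implementation):

```python
import cmath

# Fast convolution: zero-pad both sequences to a power-of-two length
# >= n_f + n_g - 1, multiply their FFTs pointwise, and transform back.
def fft(x, inverse=False):
    N = len(x)                        # N must be a power of two
    if N == 1:
        return list(x)
    sign = 1j if inverse else -1j
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    tw = [cmath.exp(sign * 2 * cmath.pi * k / N) for k in range(N // 2)]
    return ([even[k] + tw[k] * odd[k] for k in range(N // 2)] +
            [even[k] - tw[k] * odd[k] for k in range(N // 2)])

def fast_convolve(f, g):
    m = len(f) + len(g) - 1
    N = 1
    while N < m:
        N *= 2
    F = fft(list(f) + [0.0] * (N - len(f)))
    G = fft(list(g) + [0.0] * (N - len(g)))
    H = fft([a * b for a, b in zip(F, G)], inverse=True)
    return [h.real / N for h in H[:m]]   # unnormalized inverse, so divide by N
```

The result agrees with the direct $\mathcal{O}(n^2)$ summation, but costs only $\mathcal{O}(n \log n)$ operations.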

## Distributions

The convolution was extended to distributions by Laurent Schwartz , who is considered the founder of distribution theory .

### Convolution with a function

Another generalization is the convolution of a distribution $T$ with a function $\varphi \in C_{c}^{\infty }(\mathbb{R}^{n})$. It is defined by

${\displaystyle (T*\varphi )(x):=T(\tau _{x}\varphi )=T(\varphi (x-\cdot )),}$

where $\tau _{x}$ is a translation-and-reflection operator defined by $\tau _{x}\phi (y)=\phi (x-y)$.

### Convolution of two distributions

Let $u_{1}$ and $u_{2}$ be two distributions, one of which has compact support. Then for all $\varphi \in C_{c}^{\infty }(\mathbb{R}^{n})$ the convolution between these distributions is defined by

${\displaystyle (u_{1}*u_{2})*\varphi =u_{1}*(u_{2}*\varphi )}$.

A further statement ensures that there is a unique distribution $u\in {\mathcal{D}}'$ with

${\displaystyle u_{1}*(u_{2}*\varphi )=u*\varphi }$

for all $\varphi \in C_{c}^{\infty }(\mathbb{R}^{n})$.

### Algebraic properties

Let $u_{1}$, $u_{2}$ and $u_{3}$ be distributions. Then:

• Commutativity
${\displaystyle u_{1}*u_{2}=u_{2}*u_{1}}$
• Distributivity
${\displaystyle u_{1}*(u_{2}+u_{3})=(u_{1}*u_{2})+(u_{1}*u_{3})}$
• Associativity with scalar multiplication
${\displaystyle a(u_{1}*u_{2})=(au_{1})*u_{2}=u_{1}*(au_{2})}$
Here $a$ is an arbitrary complex number.
• Neutral element
${\displaystyle u_{1}*\delta =u_{1}}$, where $\delta$ is the delta distribution.
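The neutral-element property has an elementary discrete counterpart: cyclically convolving a sequence with the unit impulse reproduces it, and a shifted impulse shifts it (an illustrative sketch, not part of the distribution-theoretic statement above):

```python
# Discrete sketch of the neutral element: convolving with the unit impulse
# delta = (1, 0, ..., 0) reproduces the sequence; a shifted impulse shifts it.
def cyclic_convolve(f, g):
    N = len(f)
    return [sum(f[k] * g[(n - k) % N] for k in range(N)) for n in range(N)]

u = [4.0, 7.0, 1.0, 3.0]
delta = [1.0, 0.0, 0.0, 0.0]
shifted_delta = [0.0, 1.0, 0.0, 0.0]
```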

### Convolution theorem

Let ${\mathcal{F}}$ denote the Fourier transform of distributions. Now let $u_{1}\in S'(\mathbb{R}^{n})$ be a tempered distribution and $u_{2}\in {\mathcal{E}}'(\mathbb{R}^{n})$ a distribution with compact support. Then $u_{1}*u_{2}\in S'(\mathbb{R}^{n})$, and

${\displaystyle {\mathcal{F}}(u_{1}*u_{2})=(2\pi )^{\tfrac{n}{2}}\,{\mathcal{F}}(u_{1})\cdot {\mathcal{F}}(u_{2})}$.

## Topological groups

### Convolution on topological groups

The two concepts of convolution can be described and generalized together by a general convolution for complex-valued $m$-integrable functions on a suitable topological group $G$ with a measure $m$ (e.g. a locally compact Hausdorff topological group with a Haar measure):

${\displaystyle (f*g)(x)=\int_{G}f(t)\,g(xt^{-1})\,\mathrm{d}m(t)\,.}$

This concept of convolution plays a central role in the representation theory of these groups, the most important examples of which are the Lie groups. The algebra of integrable functions with the convolution product is, for compact groups, the analogue of the group ring of a finite group.

### The convolution algebra of finite groups

For a finite group $G$ with $g := \operatorname{ord}(G)$, the set $L^{1}(G) := \{f\colon G\to \mathbb{C}\}$ is, with addition and scalar multiplication, a $\mathbb{C}$-vector space isomorphic to $\mathbb{C}^{g}$. With the convolution $f*h(s)=\sum_{t\in G}f(t)\,h(t^{-1}s)$, $L^{1}(G)$ then becomes an algebra, called the convolution algebra. The convolution algebra has a basis indexed by the group elements, $(\delta_{s})_{s\in G}$, where

${\displaystyle \delta_{s}(t)={\begin{cases}1&{\text{if }}t=s\\0&{\text{otherwise}}\end{cases}}.}$

For the convolution of basis elements, $\delta_{s}*\delta_{t}=\delta_{st}$ holds. We define a mapping between $L^{1}(G)$ and the group algebra $\mathbb{C}[G]$ by setting $\delta_{s}\mapsto e_{s}$ on basis elements and extending linearly. This mapping is obviously bijective. From the above equation for the convolution of two basis elements one sees that the convolution in $L^{1}(G)$ corresponds to the multiplication in $\mathbb{C}[G]$. Thus the convolution algebra and the group algebra are isomorphic as algebras.

With the involution $f^{*}(s)={\overline{f(s^{-1})}}$, $L^{1}(G)$ becomes a ${}^{*}$-algebra, in which $\delta_{s}^{*}=\delta_{s^{-1}}$. A representation $(\pi ,V_{\pi })$ of a group $G$ extends to a ${}^{*}$-algebra homomorphism $\pi \colon L^{1}(G)\to {\text{End}}(V_{\pi })$ via $\pi (\delta_{s})=\pi (s)$. Since $\pi$, as a ${}^{*}$-algebra homomorphism, is in particular multiplicative, we obtain $\pi (f*h)=\pi (f)\,\pi (h)$. If $\pi$ is moreover unitary, then $\pi (f)^{*}=\pi (f^{*})$; for the definition of a unitary representation, see the chapter on properties, where it is also shown that a linear representation can be assumed unitary without loss of generality.

Within the framework of the convolution algebra, a Fourier transform can be carried out on groups. In harmonic analysis it is shown that this definition is consistent with the definition of the Fourier transform on $\mathbb{R}$. If $\rho \colon G\to {\text{GL}}(V_{\rho })$ is a representation, then the Fourier transform ${\hat{f}}(\rho )\in {\text{End}}(V_{\rho })$ of $f\in L^{1}(G)$ is defined by the formula

${\displaystyle {\hat{f}}(\rho )=\sum_{s\in G}f(s)\,\rho (s).}$

It then holds that ${\displaystyle {\widehat {f*g}}(\rho )={\hat{f}}(\rho )\cdot {\hat{g}}(\rho )}$.

## Applications

• In optics, a wide variety of image disturbances can be modeled as a convolution of the original image with a corresponding kernel. In digital image processing, convolution is therefore used to simulate such effects. Other digital effects are also based on convolution. For determining the direction of image edges, 3×3 and 5×5 convolutions are essential.
• In the case of a linear, time-invariant transmission element , the response to an excitation is obtained by convolution of the excitation function with the impulse response of the transmission element. For example, the linear filtering of an electronic signal represents the convolution of the original function with the impulse response.
• Convolution is used to construct special solutions of certain partial differential equations. If $G$ is the fundamental solution of the partial differential operator $L$, then $G * f$ is a solution of the partial differential equation $Lu = f$.
• Diffusion processes can be described by convolution.
• If $X$ and $Y$ are two stochastically independent random variables with probability densities $f$ and $g$, then the density of the sum $X + Y$ equals the convolution $f * g$.
• In acoustics (music), convolution (with the aid of the FFT, fast Fourier transform) is also used to digitally generate reverberation and echoes and to adapt sound properties. To do this, the impulse response of the room whose sound characteristics one wants to adopt is convolved with the signal one wants to influence.
• In engineering mathematics and signal processing, input signals (external influences) are convolved with the impulse response (the reaction of the system under consideration to a Dirac impulse as signal input, also called the weight function) in order to calculate the response of an LTI system to arbitrary input signals. The impulse response should not be confused with the step response: the former describes the combination of the system and a Dirac impulse as input test function, the latter the combination of the system and a step function as input test function. The calculations usually take place not in the time domain but in the frequency domain. For this, both the signal and the impulse response describing the system behavior must be available in the frequency domain, or must be transformed from the time domain by Fourier transformation or one-sided Laplace transformation. The spectral function of the impulse response is called the frequency response or transfer function.
• In numerical mathematics, convolving the box function $N^{0}(t)$ with $N^{k-1}(t)$ yields $N^{k}(t)$, the B-spline basis function for the vector space of piecewise polynomials of degree $k$.
• In computer algebra, convolution can be used for efficient multiplication of multi-digit numbers, since multiplication essentially amounts to a convolution followed by carry propagation. The complexity of this procedure is nearly linear, ${\mathcal{O}}(n\cdot \log(n))$, while the "schoolbook method" has quadratic cost ${\mathcal{O}}(n^{2})$, where $n$ is the number of digits. This pays off despite the additional cost of the Fourier transformation (and its inverse).
• In hydrology, convolution is used to calculate the runoff produced by a precipitation-runoff event in a catchment area for a given amount and duration of precipitation. For this purpose, the so-called "unit hydrograph" (the discharge hydrograph in response to a unit precipitation of given duration) is convolved with the time function of the precipitation.
• In reflection seismics , a seismic trace is viewed as a convolution of impedance contrasts of the geological layer boundaries and the output signal (wavelet). The process of restoring the undistorted layer boundaries in the seismogram is deconvolution .
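As a small illustration of the probability bullet above (a sketch, not from the original text): the distribution of the sum of two independent dice is the discrete convolution of their probability mass functions.

```python
# The PMF of the sum of two independent dice is the discrete convolution
# of the individual PMFs.
def convolve(f, g):
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k] for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

die = [1 / 6] * 6                    # P(value = 1..6), index 0 means "1"
sum_pmf = convolve(die, die)         # index i means P(sum = i + 2)
```

The most likely sum is 7, with probability $6/36$, and the eleven possible sums (2 through 12) have total probability 1.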

## References and footnotes

1. More generally, it can also be assumed that $f\in {\mathcal{L}}^{p}(\mathbb{R}^{n})$ for a $p\in [1,\infty]$ and $g\in {\mathcal{L}}^{1}(\mathbb{R}^{n})$. See Herbert Amann, Joachim Escher: Analysis III. 1st edition. Birkhäuser-Verlag, Basel/Boston/Berlin 2001, ISBN 3-7643-6613-3, Section 7.1.
2. Proof by inserting the inverse Fourier transform, e.g. as in: Tilman Butz: Fourier Transformation for Pedestrians. 7th edition. Springer, 2011, ISBN 978-3-8348-8295-0, p. 53.
3. (Page no longer available; search in web archives: dt.e-technik.uni-dortmund.de) (PDF).
4. Dirk Werner: Funktionalanalysis. 6th, corrected edition. Springer-Verlag, Berlin 2007, ISBN 978-3-540-72533-6, p. 447.