Distribution (mathematics)

In mathematics, a distribution is a special type of functional, i.e. an object from functional analysis.

The theory of distributions makes it possible to define a notion of solution for differential equations whose classical solutions would not be sufficiently often differentiable or are not defined at all (see distributional solution). In this sense, distributions can be viewed as a generalization of the notion of a function. There are partial differential equations that have no classical solutions but do have solutions in the distributional sense. The theory of distributions is therefore particularly important in physics and engineering: many of the problems examined there lead to differential equations that can only be solved with the help of the theory of distributions.

The mathematician Laurent Schwartz was instrumental in developing the theory of distributions. In 1950 he published the first systematic account of the theory, and he received the Fields Medal for his work on distributions.

History of Distribution Theory

In 1903 Jacques Hadamard introduced the concept of the functional, which is central to distribution theory. From today's perspective, a functional is a function that assigns a number to other functions. Hadamard was able to show that every continuous linear functional $T$, as the limit of a sequence of integrals

$T(f) = \lim_{n \to \infty} \int f(t)\, g_n(t)\, \mathrm{d}t,$

can be represented. In general, the limit and the integral in this representation may not be interchanged. In 1910 it was shown that every continuous linear functional on $L^p$, the space of $p$-integrable functions, can be represented as

$T(f) = \int f(x)\, g(x)\, \mathrm{d}x$

with $g \in L^q$ and $\tfrac{1}{p} + \tfrac{1}{q} = 1$. In this formulation no limit needs to be taken, and $g$ is uniquely determined. For this reason the functional $T$ is often identified with the "function" $g$. Then $g$ has two different meanings: on the one hand $g$ is understood as an $L^q$-"function", on the other hand it is identified with the functional $T$.

Paul Dirac was the first to work with distributions, in the 1920s in the course of his research on quantum mechanics. He introduced the important delta distribution, but did not yet use a mathematically precise definition for it; in his investigations he disregarded the functional analysis of the time, i.e. the theory of functionals. In the 1930s Sergei Lvovich Sobolev dealt with initial value problems for hyperbolic partial differential equations. For these investigations he introduced the Sobolev spaces, which are named after him today. In 1936 Sobolev was studying hyperbolic second-order differential equations with analytic coefficient functions. In order to give a handier criterion for the existence of a solution of such a partial differential equation, Sobolev extended the question to the space of functionals. He was the first to formulate today's definition of a distribution. However, he did not yet develop a comprehensive theory from his definitions, using them only as a tool for investigating partial differential equations.

Laurent Schwartz, 1970

Laurent Schwartz finally developed the theory of distributions in the winter of 1944/45. At that time Sobolev's work was still unknown to him, but just like Sobolev he came across special functionals, which he now called distributions, through questions in the area of partial differential equations. From then on the theory developed so quickly that Schwartz was able to lecture on it in Paris in the winter of 1945/46. Electrical engineers attending his lectures urged him to develop his theory further in the direction of the Fourier and Laplace transforms. By 1947 Schwartz had defined the space of tempered distributions and thus integrated the Fourier transform into his theory. In 1950/51 his monograph Théorie des distributions appeared, which further consolidated the theory. As early as 1950 he received the Fields Medal, one of the highest awards in mathematics, for his research on distributions.

From then on, the theory of distributions was developed further in theoretical physics and in the theory of partial differential equations. Distribution theory is useful for describing singular objects in physics, such as the electromagnetic point charge or the point mass, with mathematical precision. These two physical objects can be suitably described with the help of the delta distribution: the spatial density function of a point mass with unit mass is required to vanish everywhere except at one point, where it must become infinite, since the spatial integral over the density function should equal 1 (unit mass). There is no function in the usual sense that fulfills these requirements. In the theory of partial differential equations and in Fourier analysis, distributions are important because in this framework every locally integrable function can be assigned a derivative.

Definitions

Distribution

Let $\Omega \subset \mathbb{R}^n$ be an open, non-empty set. A distribution $T$ is a continuous linear functional on the space of test functions $\mathcal{D}(\Omega)$.

In other words, $T$ is a map $T \colon \mathcal{D}(\Omega) \to \mathbb{C}$ (or $T \colon \mathcal{D}(\Omega) \to \mathbb{R}$) such that for all $\phi_1, \phi_2 \in \mathcal{D}(\Omega)$ and $\lambda \in \mathbb{C}$

$T(\phi_1 + \lambda \phi_2) = T(\phi_1) + \lambda T(\phi_2)$

and

$T(\phi_n) \to T(\phi)$

whenever $\phi_n \to \phi$ in $\mathcal{D}(\Omega)$.

Space of distributions

The set of distributions, with the corresponding operations of addition and scalar multiplication, is thus the topological dual space of the test function space $\mathcal{D}$ and is therefore denoted by $\mathcal{D}'$. In functional analysis, the symbol $'$ denotes the topological dual space. In order to be able to speak of continuity and of a topological dual space at all, the space of test functions must be equipped with a locally convex topology.

The following characterization is therefore often used as an alternative definition, since it gets by without the topology of the test function space and requires no knowledge of locally convex spaces:

Let $\Omega \subset \mathbb{R}^n$ be an open set. A linear functional $T \colon \mathcal{D}(\Omega) \to \mathbb{C}$ is called a distribution if for every compact set $K \subset \Omega$ there exist a $C > 0$ and a $k \in \mathbb{N}_0$ such that for all test functions $\phi \in \mathcal{D}(K)$ the inequality

$|T(\phi)| \leq C\, \|\phi\|_{C_b^k(K)} := C \sum_{|\alpha| \leq k} \sup_{x \in K} \left| \partial^\alpha \phi(x) \right|$

holds. This definition is equivalent to the one given before, because the continuity of the functional $T$ follows from this inequality, even though it need not hold on all of $\Omega$: the (LF)-space $\mathcal{D}(\Omega)$ is bornological.

Order of a distribution

If, in the alternative definition above, the same number $k$ can be chosen for all compact sets $K$, the smallest possible $k$ is called the order of $T$. The set of distributions of order $k$ is denoted by $\mathcal{D}'^k(\Omega)$, and the set of all distributions of finite order by $\mathcal{D}'_F(\Omega) := \bigcup_k \mathcal{D}'^k(\Omega)$. This space is strictly smaller than the general distribution space $\mathcal{D}'(\Omega)$, since there are also distributions that are not of finite order.

Regular distribution

The regular distributions are a special subset of the distributions: they are generated by locally integrable functions $f \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$. Precisely, a distribution $T$ is called regular if it has a representation

$T_f(\phi) = \int_{\mathbb{R}^n} f(t)\, \phi(t)\, \mathrm{d}t,$

where $f \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ is a locally integrable function. Non-regular distributions are called singular; these are distributions for which no generating function $f$ in the sense of this definition exists.

This integral representation of a regular distribution, together with the scalar product on $L^2(\mathbb{R}^n)$, motivates the alternative notation

$(T, \phi) := T(\phi)$

for all (not just regular) distributions.

Test functions

In the definition of a distribution, the concept of a test function, or of the test function space, is central. This test function space is the space of smooth functions with compact support, together with an induced topology. Choosing a topology on the test function space is essential, because without it the concept of continuity cannot be meaningfully defined. The topology on this space is determined by a notion of convergence.

Let $\Omega \subset \mathbb{R}^n$ be an open subset; then

$C_c^\infty(\Omega) = \{\phi \in C^\infty(\Omega) \mid \operatorname{supp}(\phi) \text{ is a compact subset of } \Omega\}$

denotes the set of all infinitely differentiable functions with compact support, i.e. functions that are zero outside a compact set. The notion of convergence is fixed as follows: a sequence $(\phi_j)_{j \in \mathbb{N}}$ with $\phi_j \in C_c^\infty(\Omega)$ converges to $\phi$ if there is a compact set $K \subset \Omega$ with $\operatorname{supp}(\phi_j) \subset K$ for all $j$ and

$\lim_{j \to \infty} \sup_{x \in K} \left| \frac{\partial^\alpha}{\partial x^\alpha} \left( \phi_j(x) - \phi(x) \right) \right| = 0$

for all multi-indices $\alpha \in \mathbb{N}^n$. The set $C_c^\infty(\Omega)$, endowed with this notion of convergence, is a locally convex space, which is called the space of test functions and denoted by $\mathcal{D}(\Omega)$.
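A standard example of such a test function is the bump function $\phi(x) = e^{-1/(1-x^2)}$ for $|x| < 1$ and $\phi(x) = 0$ otherwise: it is smooth and supported in the compact set $[-1, 1]$. The following sketch (the function name `bump` is illustrative, not from the text) evaluates it:

```python
import math

def bump(x):
    """Standard test function: infinitely differentiable, supported on [-1, 1]."""
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

print(bump(0.0))  # maximum value e^{-1} ≈ 0.3679
print(bump(2.0))  # 0.0, since 2.0 lies outside the support
```

All derivatives of this function tend to zero as $|x| \to 1$, which is what makes the piecewise definition smooth at the gluing points.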

Two different points of view

As described above in the section on the definition, a distribution is a functional, i.e. a function with certain additional properties. In the section on the history of distribution theory, on the other hand, it was said that the delta distribution cannot be a function. This is an apparent contradiction that can still be found in current literature; it arises from the attempt to identify distributions, and also functionals in general, with real-valued functions, as in the $L^p$ case discussed above.

In theoretical physics in particular, a distribution is understood to be an object, named for example $\delta$, with certain properties determined by the context. The desired properties often rule out the existence of an actual function $\delta$, which is why one speaks of a generalized function. Now that the properties of $\delta$ have been fixed, one considers the assignment

$C_c^\infty(\Omega) \ni \phi \mapsto \int \delta(x)\, \phi(x)\, \mathrm{d}x \in \mathbb{R},$

which assigns a real number to each test function $\phi$. However, since $\delta$ is in general not a function, a meaning must first be assigned to this expression on a case-by-case basis.

From a mathematical point of view, a distribution is a function with certain abstract properties (linearity and continuity) that assigns a real number to a test function. If the $\delta$ from the previous paragraph happens to be an integrable function, then the expression $T(\phi) = \int \delta(x)\, \phi(x)\, \mathrm{d}x$ is mathematically precisely defined. Here, however, it is not the function $\delta$ that is called a distribution, but the functional $\int \delta(x)\, \cdot \,\mathrm{d}x$.

Moreover, many mathematics textbooks do not distinguish between the (distribution-)generating function $\delta$ and the actual distribution in the mathematical sense. For the most part, this article uses the more rigorous mathematical point of view.

Examples

Continuous function as generator

Let $\Omega \subseteq \mathbb{R}$ be open and $f \in C(\Omega)$. Then

$T(\phi) := \int_{-\infty}^{\infty} f(x)\, \phi(x)\, \mathrm{d}x$

defines a distribution $T \in \mathcal{D}'(\Omega)$ for all $\phi \in C_c^\infty(\Omega)$.
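Such a regular distribution can be explored numerically. The sketch below (the generator $f(x) = x^2$ and the bump test function are illustrative assumptions, not from the text) approximates the pairing $T(\phi) = \int f(x)\,\phi(x)\,\mathrm{d}x$ with the trapezoidal rule:

```python
import math

def f(x):
    # illustrative continuous generator; any f in C(Omega) would do
    return x * x

def phi(x):
    # smooth test function with compact support in [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def T(test_fn, a=-1.0, b=1.0, n=20000):
    """Approximate T_f(phi) = integral of f(x) * phi(x) dx (trapezoidal rule)."""
    h = (b - a) / n
    s = 0.5 * (f(a) * test_fn(a) + f(b) * test_fn(b))
    for i in range(1, n):
        x = a + i * h
        s += f(x) * test_fn(x)
    return s * h

print(T(phi))  # a finite number: the value of the pairing <T_f, phi>
```

Linearity of $T$ can be checked directly with this sketch: doubling the test function doubles the returned value.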

Delta distribution

The delta distribution is approximated by the sequence of functions $\delta_a(x) = \frac{1}{\sqrt{2\pi a}} \cdot e^{-\frac{x^2}{2a}}$. For every $a$, the area under the curve remains equal to one.

The delta distribution $\delta$ is a singular distribution, i.e. it cannot be generated by an ordinary function, although it is often written like one. The following holds:

$\delta(\phi) := \phi(0).$

That is, the delta distribution applied to a test function $\phi$ gives the value of the test function at the point $0$. Like any other distribution, the delta distribution can also be expressed as a limit of a sequence of integrals. The Dirac sequence

$\delta_a(x) = \frac{1}{\sqrt{2\pi a}} \cdot e^{-\frac{x^2}{2a}}$

has the pointwise limit (compare, e.g., the figure described above)

$\mathrm{Gw}(x) := \lim_{a \to 0} \delta_a(x) = \begin{cases} 0 & x \neq 0 \\ \infty & x = 0, \end{cases}$

which would lead to the vanishing integral $\int_{\mathbb{R}} \mathrm{Gw}(x)\, \mathrm{d}x = 0$, because the behavior at a single point is irrelevant for integrals of ordinary functions.

With this Dirac sequence, however, the delta distribution can be represented by a different limit process, taking the limit before the integral rather than after it:

$\delta(\phi) = \lim_{a \to 0} \int_{\mathbb{R}} \delta_a(x)\, \phi(x)\, \mathrm{d}x = \lim_{a \to 0} \int_{\mathbb{R}} \tfrac{1}{\sqrt{2\pi a}} \cdot e^{-\frac{x^2}{2a}}\, \phi(x)\, \mathrm{d}x = \phi(0).$

Mostly, however, one uses the symbolic notation

$\delta(\phi) = \int_{\mathbb{R}} \delta(x)\, \phi(x)\, \mathrm{d}x = \phi(0)$

for the delta distribution, which invites a mathematically imprecise interpretation; one calls the term $\delta(x)$ a generalized function and often omits the word generalized.
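This limit process can be observed numerically. The following sketch (the bump test function and the quadrature parameters are illustrative assumptions) shows that $\int \delta_a(x)\,\phi(x)\,\mathrm{d}x$ approaches $\phi(0)$ as $a \to 0$:

```python
import math

def phi(x):
    """Test function supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def delta_a(x, a):
    """Gaussian Dirac sequence; the area under delta_a is 1 for every a > 0."""
    return math.exp(-x * x / (2.0 * a)) / math.sqrt(2.0 * math.pi * a)

def pairing(a, lo=-1.0, hi=1.0, n=20000):
    """Trapezoidal approximation of the integral of delta_a(x) * phi(x)."""
    h = (hi - lo) / n
    s = 0.5 * (delta_a(lo, a) * phi(lo) + delta_a(hi, a) * phi(hi))
    for i in range(1, n):
        x = lo + i * h
        s += delta_a(x, a) * phi(x)
    return s * h

# as a -> 0, the values approach phi(0) = e^{-1} ≈ 0.3679
for a in (1e-2, 1e-4, 1e-6):
    print(a, pairing(a))
```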

Dirac comb


The Dirac comb $\Delta_T$ with period $T \in \mathbb{R}$ is a periodic distribution that is closely related to the Dirac delta distribution. For all $\phi \in \mathcal{D}(\mathbb{R})$, this distribution is defined as

$\Delta_T(\phi) := \sum_{n \in \mathbb{Z}} \phi(nT).$

This series converges because the test function $\phi$ has compact support, so only finitely many summands are nonzero. An equivalent definition is

$\Delta_T = \sum_{n \in \mathbb{Z}} \delta_{nT},$

where the equal sign is to be understood as equality of distributions. The series on the right then converges in the weak-* topology. The convergence of distributions is discussed in more detail in the section on convergence. The $T$ appearing in the definition is a real number, called the period of the Dirac comb. Intuitively, the Dirac comb is composed of infinitely many delta distributions spaced at distance $T$ from one another. In contrast to the delta distribution, the Dirac comb does not have compact support; what this means exactly is explained in the section on compact support below.
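Because a test function has compact support, the defining series of the Dirac comb can be evaluated exactly with finitely many terms. A small sketch (the bump test function and the periods are illustrative choices):

```python
import math

def phi(x):
    """Test function supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def dirac_comb(test_fn, period, support_radius=1.0):
    """Delta_T(phi) = sum of phi(n * T) over all integers n; only finitely many
    summands are nonzero because the test function has compact support."""
    n_max = int(support_radius / period) + 1
    return sum(test_fn(n * period) for n in range(-n_max, n_max + 1))

# with period 0.25, the sample points 0, ±0.25, ±0.5, ±0.75 lie in the support
print(dirac_comb(phi, 0.25))
# with period 2, only the point 0 contributes, so the comb reduces to delta
print(dirac_comb(phi, 2.0))  # = phi(0)
```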

Radon measures

Let $M(\Omega)$ denote the set of all Radon measures, and let $\mu \in M(\Omega)$. Then by

$\mu \mapsto \left( \phi \in \mathcal{D}(\Omega) \mapsto \int_{\Omega} \phi(x)\, \mathrm{d}\mu(x) \right)$

one can assign a distribution to each $\mu$. In this way $M(\Omega)$ can be continuously embedded into $\mathcal{D}'(\Omega)$. An example of a Radon measure is the Dirac measure $\delta$. For all $A \subset \Omega$ it is defined by

$\delta(A) := \begin{cases} 1 & \text{if } 0 \in A, \\ 0 & \text{otherwise.} \end{cases}$

If one identifies the Dirac measure with its generating distribution

$\phi \in \mathcal{D}(\Omega) \mapsto \int_{\Omega} \phi(x)\, \mathrm{d}\delta(x) = \phi(0),$

one obtains the delta distribution, provided $0 \in \Omega \subset \mathbb{R}^n$.

Cauchy's principal value of 1 / x

The function $x \mapsto 1/x$

The Cauchy principal value of the function $x \mapsto \frac{1}{x}$ can also be viewed as a distribution $T \in \mathcal{D}'(\mathbb{R})$. One sets, for all $\phi \in \mathcal{D}(\mathbb{R})$,

$T(\phi) := \mathrm{PV} \int_{-\infty}^{\infty} \frac{\phi(x)}{x}\, \mathrm{d}x := \lim_{\varepsilon \to 0} \left( \int_{-\infty}^{-\varepsilon} \frac{\phi(x)}{x}\, \mathrm{d}x + \int_{\varepsilon}^{\infty} \frac{\phi(x)}{x}\, \mathrm{d}x \right).$

This is a singular distribution, since the integral expression is not defined in the Lebesgue sense and exists only as a Cauchy principal value. The abbreviation PV stands for principal value.
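The principal value can be computed numerically by exploiting the symmetric cancellation: combining the contributions at $x$ and $-x$ turns $\mathrm{PV}\int \phi(x)/x\,\mathrm{d}x$ into the ordinary integral $\int_0^\infty (\phi(x)-\phi(-x))/x\,\mathrm{d}x$. A sketch under illustrative assumptions (bump test function, midpoint rule):

```python
import math

def phi(x):
    """Even test function supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def pv_one_over_x(test_fn, eps=1e-8, radius=1.0, n=50000):
    """PV integral of test_fn(x)/x: integrate (test_fn(x) - test_fn(-x)) / x
    over (eps, radius); the symmetric difference cancels the singularity."""
    h = (radius - eps) / n
    total = 0.0
    for i in range(n):
        x = eps + (i + 0.5) * h  # midpoint rule
        total += (test_fn(x) - test_fn(-x)) / x
    return total * h

# for an even test function the principal value vanishes exactly
print(pv_one_over_x(phi))  # 0.0
# an asymmetric test function gives a nonzero principal value
print(pv_one_over_x(lambda x: phi(x) * (1.0 + x)))  # positive value
```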

This distribution is mostly used together with the dispersion relation (Sokhotski–Plemelj formula)

$\lim_{\varepsilon \to 0^+} \frac{1}{x - \mathrm{i}\varepsilon} = \mathrm{PV}\left(\frac{1}{x}\right) + \mathrm{i}\pi \delta(x),$

where all distributions, in particular $\delta(\phi)$ and $T(\phi)$, are expressed as generalized functions as indicated, and $\mathrm{i}$ denotes the imaginary unit. In linear response theory, this relation connects the real and imaginary parts of a response function; see Kramers–Kronig relations. (At this point it is assumed that the test functions $\phi$ are complex-valued, and so are the response functions just mentioned; but the argument $x$ should still be real, although $x - \mathrm{i}\varepsilon$ is of course complex, not real.)

Oscillating integral

For symbols $a \in S^m_{1,0}(\Omega \times \mathbb{R}^n)$ one calls

$I(a)(x) := \int_{\mathbb{R}^n} e^{i \langle x, \xi \rangle}\, a(x, \xi)\, \mathrm{d}\xi$

an oscillating integral. Depending on the choice of $m$, this type of integral converges not in the Riemann or Lebesgue sense but only in the sense of distributions.

Convergence

Since the distribution space is defined as a topological dual space, it also carries a topology. As the dual space of a Montel space, equipped with the strong topology, it is itself a Montel space, so for sequences the strong topology coincides with the weak-* topology. The following notion of convergence for sequences results: a sequence of distributions $(T_n)_{n \in \mathbb{N}}$ converges to $T \in \mathcal{D}'(\Omega)$ if for every test function $\phi \in \mathcal{D}(\Omega)$ the equation

$\lim_{n \to \infty} T_n(\phi) = T(\phi)$

holds.

Since every test function $f \in \mathcal{D}(\Omega)$ can be identified with the distribution $T_f(\phi) := \int_{\Omega} f(x)\, \phi(x)\, \mathrm{d}x$, the space $\mathcal{D}(\Omega)$ can be construed as a topological subspace of $\mathcal{D}'(\Omega)$.

The space $\mathcal{D}(\Omega)$ is dense in $\mathcal{D}'(\Omega)$. This means that for every distribution $T \in \mathcal{D}'(\Omega)$ there is a sequence $(T_j)_{j \in \mathbb{N}}$ of test functions in $\mathcal{D}(\Omega)$ with $\lim_{j \to \infty} T_j = T$ in $\mathcal{D}'(\Omega)$. Thus every distribution $T$ can be represented as

$T(\phi) = \lim_{j \to \infty} \int_{\Omega} T_j(x)\, \phi(x)\, \mathrm{d}x.$

Localization

Restriction to a subset

Let $Y \subset \Omega \subset \mathbb{R}^n$ be open subsets and let $T \in \mathcal{D}'(\Omega)$ be a distribution. The restriction $T|_Y$ of $T$ to the subset $Y$ is defined by

$T|_Y(\phi) := T(\tilde{\phi})$

for all $\phi \in \mathcal{D}(Y)$, where $\tilde{\phi} \in \mathcal{D}(\Omega)$ is the extension of $\phi$ by zero on $\Omega \setminus Y$.

Support

Let $T \in \mathcal{D}'(\Omega)$ be a distribution. One says that a point $x_0 \in \Omega$ belongs to the support of $T$, and writes $x_0 \in \operatorname{supp}(T)$, if for every open neighborhood $U \subset \Omega$ of $x_0$ there exists a function $\phi \in \mathcal{D}(U)$ with $T(\phi) \neq 0$.

If $T = T_f$ is a regular distribution with continuous $f$, then this definition is equivalent to the definition of the support of the function $f$.

Compact support

A distribution $T \in \mathcal{D}'(\Omega)$ has compact support if $\operatorname{supp}(T)$ is compact. The set of distributions with compact support is denoted by $\mathcal{E}'$. It is a linear subspace of $\mathcal{D}'$ and the topological dual space of $\mathcal{E}$, the space of smooth functions $C^\infty$. This space is given a locally convex topology by the family of seminorms

$\phi \mapsto \sum_{|\alpha| \leq m} \sup_{x \in K} \left| \frac{\partial^\alpha}{\partial x^\alpha} \phi(x) \right|,$

where $m$ takes arbitrary values in $\mathbb{N}$ and $K$ runs through all compact subsets of $\mathbb{R}^n$.

Singular support

Let $T \in \mathcal{D}'(\Omega)$ be a distribution. One says that a point $x_0 \in \Omega$ does not belong to the singular support $\operatorname{sing\,supp}(T)$ if there exist an open neighborhood $U \subset \Omega$ of $x_0$ and a function $f \in C^\infty(U)$ with

$T(\phi) = \int_U f(x)\, \phi(x)\, \mathrm{d}x$

for all $\phi \in C_c^\infty(U)$.

In other words: $x_0 \in \operatorname{sing\,supp}(T)$ holds if and only if there is no open neighborhood $U$ of $x_0$ such that the restriction of $T$ to $U$ equals a smooth function. In particular, the singular support of a singular distribution is non-empty.

Operations on distributions

Since the distribution space, with pointwise addition and multiplication by complex numbers, is a vector space over the field $\mathbb{C}$, the addition of distributions and the multiplication of a distribution by a complex number are already defined.

In the following, further operations on distributions, such as the derivative of a distribution, are explained. Many operations are carried over to distributions by applying a corresponding operation to the test functions. For example, if $L \colon \mathcal{D}(\Omega_1) \to L^1_{\text{loc}}(\Omega_2)$ is a linear map that sends a test function in $\mathcal{D}(\Omega_1)$ to a function in $L^1_{\text{loc}}(\Omega_2)$, and if in addition there is an adjoint map $L^* \colon \mathcal{D}(\Omega_2) \to \mathcal{D}(\Omega_1)$ that is linear and sequentially continuous such that for all test functions $\varphi \in \mathcal{D}(\Omega_1)$ and $\psi \in \mathcal{D}(\Omega_2)$

$\int_{\Omega_1} \varphi(x)\, (L^* \psi)(x)\, \mathrm{d}x = \int_{\Omega_2} (L\varphi)(x)\, \psi(x)\, \mathrm{d}x$

holds, then

$\tilde{L} \colon \mathcal{D}'(\Omega_1) \to \mathcal{D}'(\Omega_2), \quad \tilde{L}(u)(\varphi) = u(L^* \varphi)$

is a well-defined operation on distributions.

Multiplication by a function

Let $T \in \mathcal{D}'(\Omega)$ and $a \in C^\infty(\Omega)$. Then the distribution $aT \in \mathcal{D}'(\Omega)$ is defined by

$(aT)(\phi) := T(a\phi) \quad \text{for all } \phi \in \mathcal{D}(\Omega).$
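For the delta distribution this rule gives $(a\delta)(\phi) = \delta(a\phi) = a(0)\,\phi(0)$; in particular $x \cdot \delta = 0$. A small sketch (the function names and the multiplier $a$ are illustrative choices):

```python
import math

def phi(x):
    """Test function supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def delta(test_fn):
    """Delta distribution: evaluate the test function at 0."""
    return test_fn(0.0)

def multiply(a, distribution):
    """(a T)(phi) := T(a * phi) for a smooth multiplier a."""
    return lambda test_fn: distribution(lambda x: a(x) * test_fn(x))

a_delta = multiply(lambda x: 1.0 + x * x, delta)
print(a_delta(phi))   # = a(0) * phi(0) = phi(0)

x_delta = multiply(lambda x: x, delta)
print(x_delta(phi))   # = 0.0: the classical identity x * delta = 0
```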

Differentiation

Motivation

If one considers a continuously differentiable function $f$ and the regular distribution $T_f$ assigned to it, one obtains the calculation rule

$(T_{f'}, \varphi) = \int_{\Omega} f'(t)\, \varphi(t)\, \mathrm{d}t = -\int_{\Omega} f(t)\, \varphi'(t)\, \mathrm{d}t = -(T_f, \varphi').$

Here integration by parts was used; the boundary terms vanish because of the properties required of the test function $\varphi$. This corresponds to the weak derivative. The two outer terms are also defined for singular distributions, and one uses this to define the derivative of an arbitrary distribution $T$.

Definition

Let $T \in \mathcal{D}'(\Omega)$ be a distribution, $\alpha \in \mathbb{N}^n$ a multi-index, and $x \in \Omega$. Then the distributional derivative $\partial_x^\alpha T \in \mathcal{D}'(\Omega)$ is defined by

$(\partial_x^\alpha T)(\phi) := (-1)^{|\alpha|}\, T(\partial_x^\alpha \phi) \quad \forall \phi \in \mathcal{D}(\Omega).$

In the one-dimensional case, this just means

$T'(\varphi) = -T(\varphi').$

The notation $D^\alpha T$ is also often used for the distributional derivative.

Example

The Heaviside function $H \colon \mathbb{R} \to \mathbb{R}$ is defined by

$H(x) = \begin{cases} 0 & x \leq 0, \\ 1 & x > 0. \end{cases}$

Except at the point $x = 0$, it is differentiable everywhere. Viewing it as a regular distribution, the calculation

$(H', \phi) = -(H, \phi') = -\int_0^\infty 1 \cdot \phi'(x)\, \mathrm{d}x = \phi(0) = (\delta, \phi)$

shows that its derivative (as a distribution) is the delta distribution:

${\ displaystyle H ^ {\ prime} = \ delta.}$

The delta distribution itself can also be differentiated:

$\left( \delta^{(n)}, \phi \right) = (-1)^n \left( \delta, \phi^{(n)} \right) = (-1)^n \phi^{(n)}(0)$

Up to the sign factor $(-1)^n$, the derivatives of the delta distribution are therefore the derivatives of the test function at the point $x = 0$.
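The rule $(H', \phi) = -(H, \phi') = \phi(0)$ can be checked numerically. The following sketch (the bump test function and its hand-computed derivative are illustrative assumptions) evaluates $-\int_0^1 \phi'(x)\,\mathrm{d}x$ and recovers $\phi(0)$:

```python
import math

def phi(x):
    """Test function supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def phi_prime(x):
    """Exact derivative of phi via the chain rule."""
    if abs(x) >= 1.0:
        return 0.0
    u = 1.0 - x * x
    return phi(x) * (-2.0 * x / (u * u))

def heaviside_derivative_pairing(n=100000):
    """(H', phi) = -(H, phi') = -integral over (0, 1) of phi'(x) dx (midpoint
    rule); the upper limit 1 suffices because phi vanishes beyond its support."""
    h = 1.0 / n
    total = sum(phi_prime((i + 0.5) * h) for i in range(n))
    return -total * h

print(heaviside_derivative_pairing())  # ≈ phi(0) = e^{-1} ≈ 0.3679
```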

Tensor product

Motivation

Let the set $G \subset \mathbb{R}^{2n}$ be given as a product space $G := G_1 \times G_2$ with $G_1, G_2 \subset \mathbb{R}^n$. Then for functions $f_1 \in C^\infty(G_1)$ and $f_2 \in C^\infty(G_2)$ one can define a tensor product by the rule

$(f_1 \otimes f_2)(x, y) := f_1(x)\, f_2(y).$

Similarly, one can define a tensor product between distributions. For this purpose, regular distributions are considered first. Let $f_1 \in L^1_{loc}(G_1)$ and $f_2 \in L^1_{loc}(G_2)$ be two locally integrable functions; then it follows from the above definition that

$\begin{aligned} (f_1 \otimes f_2)(\phi) &= \int_{G_1 \times G_2} f_1(x)\, f_2(y)\, \phi(x, y)\, \mathrm{d}(x, y) \\ &= \int_{G_1} f_1(x) \int_{G_2} f_2(y)\, \phi(x, y)\, \mathrm{d}y\, \mathrm{d}x \\ &= \int_{G_2} f_2(y) \int_{G_1} f_1(x)\, \phi(x, y)\, \mathrm{d}x\, \mathrm{d}y \end{aligned}$

for all $\phi \in C_c^\infty(G_1 \times G_2)$, and therefore

$(f_1 \otimes f_2)(\phi) = T_{f_1}(T_{f_2}(\phi)) = T_{f_2}(T_{f_1}(\phi)).$

The following definition is derived from this:

Definition

Be and . Then a distribution is made through ${\ displaystyle T_ {1} \ in {\ mathcal {D}} '(G_ {1})}$${\ displaystyle T_ {2} \ in {\ mathcal {D}} '(G_ {2})}$${\ displaystyle T_ {1} \ otimes T_ {2}}$${\ displaystyle {\ mathcal {D}} '(G_ {1} \ times G_ {2})}$

{\ displaystyle {\ begin {aligned} (T_ {1} \ otimes T_ {2}) (\ phi) &: = T_ {1} (T_ {2} (\ phi)) = T_ {2} (T_ { 1} (\ phi)) \\\ end {aligned}}}

is defined.
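For example, the tensor product of two delta distributions is again a delta distribution, now on the product space: for $\phi \in C_c^{\infty}(G_1 \times G_2)$,

```latex
(\delta_a \otimes \delta_b)(\phi)
  = \delta_a\bigl(x \mapsto \delta_b(y \mapsto \phi(x, y))\bigr)
  = \delta_a\bigl(x \mapsto \phi(x, b)\bigr)
  = \phi(a, b),
```

that is, $\delta_a \otimes \delta_b = \delta_{(a,b)}$.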

Smoothing a distribution

Distributions can be deliberately smoothed, smeared, or approximated, e.g. by replacing the distribution in question with the regular distribution of a smooth approximating function: for instance, the $\delta$ distribution can be replaced with the regular distribution

$$T_{\delta_a}(\varphi) = \int\limits_{\Omega} \delta_a(x)\, \varphi(x)\, \mathrm{d}x$$

of a function $\delta_a(x)$ as defined above, or the Heaviside distribution with the regular distribution of integrals of such functions. For three-dimensional differential equations one can, for example, determine whether the boundary conditions are compatible with the differential equations valid in the interior. This is useful for many applications, especially since the smoothing functions are not uniquely specified apart from their limit, which gives greater flexibility. Distributions such as the PV distribution above can likewise be regularized in a targeted way, e.g. by equipping the test functions with suitable factors or by proceeding in some other manner.
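The approximation idea can be sketched numerically: pairing the regular distributions of a family of Gaussians $\delta_\varepsilon$ against a test function reproduces the point evaluation $\varphi(0)$ in the limit $\varepsilon \to 0$. (The Gaussian shape, its widths, and the test function below are illustrative choices, not prescribed by the text.)

```python
import numpy as np

def delta_eps(x, eps):
    # Gaussian approximation of the delta distribution (total integral 1)
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def pair(eps, phi, grid):
    # T_{delta_eps}(phi) = integral of delta_eps(x) * phi(x) dx, via a Riemann sum
    dx = grid[1] - grid[0]
    return np.sum(delta_eps(grid, eps) * phi(grid)) * dx

phi = lambda x: np.exp(-x**2)          # smooth test function with phi(0) = 1
grid = np.linspace(-10.0, 10.0, 200001)

for eps in (1.0, 0.1, 0.01):
    print(eps, pair(eps, phi, grid))   # tends to phi(0) = 1 as eps shrinks
```

For $\varepsilon = 1$ the pairing is still visibly smeared, while for $\varepsilon = 0.01$ it already agrees with $\varphi(0) = 1$ to about four digits.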

Convolution with a function

Definition

Let $T \in \mathcal{D}'(\mathbb{R}^n)$ be a distribution and $\phi \in C_c^{\infty}(\mathbb{R}^n)$ a function; then the convolution of $T$ with $\phi$ is defined by

$$(T * \phi)(x) := T(\phi(x - \cdot)).$$

Example

Let $\mu$ be a Radon measure and let $T_\mu \in \mathcal{D}'(\mathbb{R}^n)$ be the distribution identified with it. Then the convolution of $\mu$ with $\phi \in C_c^{\infty}(\mathbb{R}^n)$ satisfies

$$(\mu * \phi)(x) := (T_\mu * \phi)(x) = T_\mu(\phi(x - \cdot)) = \int_{\mathbb{R}^n} \phi(x - y)\, \mathrm{d}\mu(y).$$
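In particular, for the point mass $\mu = \delta_a$ the convolution is simply a translation of the test function:

```latex
(\delta_a * \phi)(x)
  = \int_{\mathbb{R}^n} \phi(x - y)\, \mathrm{d}\delta_a(y)
  = \phi(x - a).
```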

Properties

• If $T$ is the regular distribution of a smooth function, the definition agrees with the usual convolution of functions.
• The result of the convolution is a smooth function, i.e.
${\displaystyle (T * \phi) \in C^{\infty}(\mathbb{R}^n)}$.
• For $T \in \mathcal{D}'(\mathbb{R}^n)$ and $\phi, \psi \in C_c^{\infty}(\mathbb{R}^n)$ the convolution is associative, that is,
${\displaystyle (T * \phi) * \psi = T * (\phi * \psi) \in C^{\infty}(\mathbb{R}^n)}$.
• For every multi-index $\alpha$, the derivative of the convolution satisfies
${\displaystyle \partial^{\alpha}(T * \phi) = (\partial^{\alpha} T) * \phi = T * (\partial^{\alpha} \phi)}$.
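The differentiation rule can be checked symbolically in the special case $T = \delta_a$, for which the convolution is just a shift, $(\delta_a * \phi)(x) = \phi(x - a)$ (the Gaussian test function below is an arbitrary illustrative choice):

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)
phi = sp.exp(-x**2)                    # smooth test function (illustrative choice)

conv = phi.subs(x, x - a)              # (delta_a * phi)(x) = phi(x - a)

lhs = sp.diff(conv, x)                 # d/dx (T * phi)
rhs = sp.diff(phi, x).subs(x, x - a)   # T * (d/dx phi)
print(sp.simplify(lhs - rhs))          # prints 0: both sides agree
```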

Convolution of two distributions

Definition

Let $T_1$ and $T_2$ be two distributions, at least one of which has compact support. Then for all $\phi \in C_c^{\infty}(\mathbb{R}^n)$ the convolution of these distributions is defined by

$$(T_1 * T_2) * \phi = T_1 * (T_2 * \phi).$$

The map

$$C_c^{\infty}(\mathbb{R}^n) \ni \phi \mapsto T_1 * (T_2 * \phi)$$

is linear, continuous, and commutes with translations. Therefore there is a unique distribution $T \in \mathcal{D}'(\mathbb{R}^n)$ such that

$$T_1 * (T_2 * \phi) = T * \phi$$

holds for all $\phi \in C_c^{\infty}(\mathbb{R}^n)$.

Note: The condition that one of the distributions has compact support can be weakened further.

Properties

This definition is a generalization of the definitions already given here: if one of the $T_i$ is the regular distribution of a function, it agrees with the convolutions defined above. The following properties hold:

• The convolution is commutative:
${\displaystyle T_1 * T_2 = T_2 * T_1}$
• For the support one has:
${\displaystyle \operatorname{supp}(T_1 * T_2) \subseteq \operatorname{supp}(T_1) + \operatorname{supp}(T_2)}$
• For the singular support one obtains:
${\displaystyle \operatorname{sing}\,\operatorname{supp}(T_1 * T_2) \subseteq \operatorname{sing}\,\operatorname{supp}(T_1) + \operatorname{sing}\,\operatorname{supp}(T_2)}$
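For example, the convolution of two delta distributions can be computed directly from the definition: for $\phi \in C_c^{\infty}(\mathbb{R}^n)$ one has $(\delta_b * \phi)(x) = \phi(x - b)$, and hence

```latex
\bigl(\delta_a * (\delta_b * \phi)\bigr)(x)
  = (\delta_b * \phi)(x - a)
  = \phi(x - a - b)
  = (\delta_{a+b} * \phi)(x),
```

so $\delta_a * \delta_b = \delta_{a+b}$. This also illustrates the support property: $\operatorname{supp}(\delta_a * \delta_b) = \{a + b\} = \{a\} + \{b\}$.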

Tempered distributions

The tempered distributions form a distinguished subset of the distributions on the space $\mathcal{D}(\mathbb{R}^n)$ considered so far. On the tempered distributions it is possible to define the Fourier and Laplace transforms.

Fourier transform

In order to define a Fourier transform $\mathcal{F}$ on distributions, one must first restrict the set of distributions. Not every function is Fourier transformable, and likewise one cannot define the Fourier transform for every distribution. For this reason Laurent Schwartz developed the space now named after him, the Schwartz space $\mathcal{S}(\mathbb{R}^n)$, defining it through a family of seminorms that is symmetric with respect to multiplication by the position variable $x$ and differentiation with respect to it. Since the Fourier transform exchanges differentiation with respect to $x$ and multiplication by $x$, this symmetry implies that the Fourier transform of a Schwartz function is again a Schwartz function. On this space the Fourier transform is therefore an automorphism, i.e. a continuous, linear, and bijective map of the space onto itself. The topological dual space $\mathcal{S}'(\mathbb{R}^n)$, i.e. the space of continuous linear functionals $\mathcal{S}(\mathbb{R}^n) \to \mathbb{C}$, is called the space of tempered distributions. The set of tempered distributions is larger than the set of distributions with compact support, because the set of Schwartz functions is a subset of the space of smooth functions, and the smaller a function space, the larger its dual space. Hence the space $\mathcal{E}'$ is contained in $\mathcal{S}'$. Conversely, the set of tempered distributions is itself contained in the space $\mathcal{D}'$, because the set of smooth functions with compact support is a subset of the Schwartz space.

The Fourier transform of $T \in \mathcal{S}'(\mathbb{R}^n)$ can then be defined for all $\phi \in \mathcal{S}(\mathbb{R}^n)$ by

$$\mathcal{F}(T)(\phi) := T(\mathcal{F}(\phi)).$$

The Fourier transform on $\mathcal{S}'(\mathbb{R}^n)$ is again an automorphism. The Fourier transform of the delta distribution is a constant distribution, $\mathcal{F}(\delta)(\phi) = (2\pi)^{-n/2} \textstyle\int_{\mathbb{R}^n} \phi(x)\, \mathrm{d}x$. Another example of a tempered distribution is the Dirac comb mentioned above.
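Numerically, the constancy of $\mathcal{F}(\delta)$ can be made visible by transforming a narrow Gaussian approximation of $\delta$ under the one-dimensional convention $\mathcal{F}(\phi)(\xi) = (2\pi)^{-1/2} \int \phi(x)\, e^{-i\xi x}\, \mathrm{d}x$ (the grid, the width, and the sample frequencies below are illustrative choices):

```python
import numpy as np

def fourier(f_vals, grid, xi):
    # (2*pi)^(-1/2) * integral of f(x) * exp(-i*xi*x) dx, via a Riemann sum
    dx = grid[1] - grid[0]
    return np.sum(f_vals * np.exp(-1j * xi * grid)) * dx / np.sqrt(2 * np.pi)

eps = 0.01
grid = np.linspace(-1.0, 1.0, 400001)
approx_delta = np.exp(-grid**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# For a true delta the transform is the constant (2*pi)^(-1/2), about 0.3989
for xi in (0.0, 1.0, 5.0):
    print(xi, fourier(approx_delta, grid, xi).real)
```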

Convolution theorem

In connection with the above definitions of the convolution of two distributions and of the Fourier transform of a distribution, the convolution theorem is of interest; it can be formulated as follows:

Let $T_1 \in \mathcal{S}'(\mathbb{R}^n)$ be a tempered distribution and $T_2 \in \mathcal{E}'(\mathbb{R}^n)$ a distribution with compact support. Then $T_1 * T_2 \in \mathcal{S}'(\mathbb{R}^n)$, and the convolution theorem for distributions states:

$$\mathcal{F}(T_1 * T_2) = (2\pi)^{\tfrac{n}{2}}\, \mathcal{F}(T_1) \cdot \mathcal{F}(T_2).$$

The multiplication of two distributions is in general not defined. In this particular case, however, $\mathcal{F}(T_1) \cdot \mathcal{F}(T_2)$ makes sense, because $\mathcal{F}(T_2)$ is a smooth function.
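The theorem can be checked directly on point masses: with the convention used above, $\mathcal{F}(\delta_a)(\xi) = (2\pi)^{-n/2} e^{-i\langle a, \xi\rangle}$, and since $\delta_a * \delta_b = \delta_{a+b}$,

```latex
\mathcal{F}(\delta_a * \delta_b)(\xi)
  = (2\pi)^{-n/2} e^{-i\langle a+b,\, \xi\rangle}
  = (2\pi)^{\tfrac{n}{2}}
    \Bigl[(2\pi)^{-n/2} e^{-i\langle a,\, \xi\rangle}\Bigr]
    \Bigl[(2\pi)^{-n/2} e^{-i\langle b,\, \xi\rangle}\Bigr]
  = (2\pi)^{\tfrac{n}{2}}\, \mathcal{F}(\delta_a)(\xi)\, \mathcal{F}(\delta_b)(\xi).
```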

Differential equations

Since every locally integrable function ($L_{loc}^1$), in particular every $L^2$ function, generates a distribution, one can assign to these functions a distribution as a derivative in the weak sense. If distributions are admitted as solutions of a differential equation, the solution space of that equation grows. In the following it is shown briefly what a distributional solution of a differential equation is and how the fundamental solution is defined.

Solutions in the distributional sense

Let

$$P(x, \partial_x) u = \sum_{|\alpha| \leq m} a_\alpha\, \partial_x^{\alpha} u$$

be a differential operator with smooth coefficient functions $a_\alpha \in C^{\infty}(G)$. A distribution $u \in \mathcal{D}'(G)$ is called a distributional solution of $P(x, \partial_x) u(x) = f(x)$ if the distributions generated by $P(x, \partial_x) u$ and $f$ coincide; this means that

$$\bigl(P(x, \partial_x) u\bigr)(\phi) = f(\phi)$$

holds for all $\phi \in \mathcal{D}(G)$. If the distribution $u$ is regular and even $m$ times continuously differentiable, then $u$ is a classical solution of the differential equation.

Example

Constant functions

All distributional solutions of the one-dimensional differential equation

$$\frac{\partial}{\partial x} u(x) = 0$$

are the constant functions. That is, for all $\phi \in \mathcal{D}(\mathbb{R})$ the equation

$$\int_{\mathbb{R}} \frac{\partial u}{\partial x}(x)\, \phi(x)\, \mathrm{d}x = \int_{\mathbb{R}} 0\, \phi(x)\, \mathrm{d}x \quad \Longleftrightarrow \quad \int_{\mathbb{R}} \frac{\partial u}{\partial x}(x)\, \phi(x)\, \mathrm{d}x = 0$$

is solved only by constant $u$.

Poisson's equation

A prominent example is the formal identity

$$\Delta\, \frac{1}{|x - y|} = -4\pi\, \delta(x - y)$$

from electrostatics, where $\Delta$ denotes the Laplace operator. More precisely, it means that

$$\Delta \int_{\mathbb{R}^3} \frac{\phi(y)}{|x - y|}\, \mathrm{d}^3 y = -4\pi\, \phi(x).$$

This means that

$$U(x) := \int_{\mathbb{R}^3} \frac{\phi(y)}{|x - y|}\, \mathrm{d}^3 y$$

is, for all $\phi \in \mathcal{D}(\mathbb{R}^3)$, a solution of the Poisson equation

$$\Delta U(x) = -4\pi\, \phi(x).$$

One also says that $\tfrac{1}{|x - y|}$ solves the Poisson equation considered here in the distributional sense.
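Away from the singularity the identity can be verified symbolically: $1/|x - y|$ is harmonic in $x$ for $x \neq y$ (here checked with $y = 0$, i.e. the kernel centered at the origin, using SymPy), so the entire right-hand side $-4\pi\,\delta(x - y)$ is concentrated in the single point $x = y$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = 1 / r                                   # Newtonian potential kernel, centered at 0

# Cartesian Laplacian; it vanishes identically for r != 0
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
print(sp.simplify(laplacian))               # prints 0
```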

Fundamental solutions

Let $P(x, \partial_x)$ now be a linear differential operator. A distribution $H \in \mathcal{D}'(\mathbb{R}^n)$ is called a fundamental solution if $H$ solves the differential equation

$$P(x, \partial_x) u = \delta_0$$

in the distributional sense.

The set of all fundamental solutions of $P(x, \partial_x)$ is obtained by adding a particular fundamental solution $H$ to the general homogeneous solution $H_0$. The general homogeneous solution is the set of distributions for which $P(x, \partial_x) u = 0$ holds. By a theorem of Bernard Malgrange, every linear differential operator with constant coefficients has a fundamental solution $H \in \mathcal{D}'(\mathbb{R}^n)$.

With the help of these fundamental solutions, solutions of the corresponding inhomogeneous differential equations are obtained by convolution. Let $f$ be a smooth function (or, more generally, a distribution with compact support); then, because of

$$P(x, \partial_x)(H * f) = \bigl(P(x, \partial_x) H\bigr) * f = \delta_0 * f = f,$$

one obtains a solution of $P(x, \partial_x) u = f$ in the form

$$u = H * f,$$

where $H \in \mathcal{D}'(\mathbb{R}^n)$ is a fundamental solution of the differential operator, just as above.
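A one-dimensional illustration: for the operator $P(\partial_x) = \tfrac{\mathrm{d}}{\mathrm{d}x}$, the Heaviside distribution $T_H$ is a fundamental solution, since for every test function $\phi$

```latex
T_H'(\phi) = -T_H(\phi') = -\int_0^{\infty} \phi'(x)\, \mathrm{d}x = \phi(0) = \delta_0(\phi).
```

Accordingly, $u(x) = (H * f)(x) = \int_{-\infty}^{x} f(y)\, \mathrm{d}y$ solves $u' = f$ for any smooth $f$ with compact support.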

Harmonic distributions

Analogously to harmonic functions, one also defines harmonic distributions: a distribution $T$ is called harmonic if it satisfies the Laplace equation

$$\Delta T = 0$$

in the distributional sense. Since the distributional derivative is more general than the ordinary derivative, one might expect more solutions of the Laplace equation. However, this is not the case, for one can prove that for every harmonic distribution $T \in \mathcal{D}'(\Omega)$ there is a smooth function that generates it. So there are no singular distributions satisfying the equation; in particular, the singular support of a harmonic distribution is empty. This statement holds even more generally for elliptic partial differential equations. For physicists and engineers, this means that they may safely work with distributions in electrodynamics, for example in the theory of Maxwell's equations, even if they are only interested in ordinary functions.

Distributions as integral kernels

Every test function $K \in \mathcal{D}(\Omega_1 \times \Omega_2)$ can, by means of

$$(\mathcal{K} \phi)(x) = \int_{\Omega_2} K(x, y)\, \phi(y)\, \mathrm{d}y,$$

be identified with an integral operator $\mathcal{K} \colon \mathcal{D}(\Omega_2) \to C(\Omega_1)$. This identification can be extended to distributions: for every distribution $K \in \mathcal{D}'(\Omega_1 \times \Omega_2)$ there is a linear operator

$$\mathcal{K} \colon \mathcal{D}(\Omega_2) \to \mathcal{D}'(\Omega_1),$$

which for all $\psi \in \mathcal{D}(\Omega_1)$ and $\phi \in \mathcal{D}(\Omega_2)$ is given by

$$(\mathcal{K} \phi)(\psi) = K(\phi \otimes \psi).$$

The converse also holds: for every operator $\mathcal{K} \colon \mathcal{D}(\Omega_2) \to \mathcal{D}'(\Omega_1)$ there is a unique distribution $K \in \mathcal{D}'(\Omega_1 \times \Omega_2)$ such that $(\mathcal{K} \phi)(\psi) = K(\phi \otimes \psi)$ holds. This identification between operator and distribution is the content of the Schwartz kernel theorem. The distribution $K$ also bears the name Schwartz kernel, in reference to the notion of an integral kernel. However, the operator $\mathcal{K}$ cannot always be represented in the form of an integral.

Distributions on Manifolds

Pullback

Distributions can be transported back and forth between open subsets of $\mathbb{R}^n$ with the help of diffeomorphisms. Let $\Omega_1, \Omega_2 \subset \mathbb{R}^n$ be two open subsets and $\psi \colon \Omega_1 \to \Omega_2$ a diffeomorphism, i.e. a continuously differentiable, bijective function whose inverse mapping is also continuously differentiable. For $u \in C(\Omega_2)$ one has $u \circ \psi \in C(\Omega_1)$, and by the transformation theorem the equation

$$\int_{\Omega_1} u(\psi(x_1))\, \phi(x_1)\, \mathrm{d}x_1 = \int_{\Omega_2} u(x_2)\, \phi(\psi^{-1}(x_2)) \left| \det\!\left( \frac{\partial}{\partial x_2} \psi^{-1}(x_2) \right) \right| \mathrm{d}x_2$$

holds for all test functions $\phi \in \mathcal{D}(\Omega_1)$.

This identity motivates the following definition for the composition of a distribution with a diffeomorphism: let $T \in \mathcal{D}'(\Omega_2)$; then $T \circ \psi \in \mathcal{D}'(\Omega_1)$ is defined for all $\phi \in \mathcal{D}(\Omega_1)$ by

$$(T \circ \psi)(\phi) := T\!\left( \left( \phi \circ \psi^{-1} \right) \left| \det\!\left( \frac{\partial}{\partial x} \psi^{-1} \right) \right| \right).$$

Usually $T \circ \psi$ is written as $\psi^{*}\, T$, and $\psi^{*}$ is called the pullback of the distribution $T$.
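As an illustration, consider the scaling $\psi(x) = c\,x$ on $\mathbb{R}$ with $c \neq 0$, so that $\psi^{-1}(x) = x/c$ and $\left|\det \tfrac{\partial}{\partial x} \psi^{-1}\right| = |c|^{-1}$. For $T = \delta$ the definition gives

```latex
(\psi^{*}\, \delta)(\phi)
  = \delta\bigl(\phi(\,\cdot\,/c)\, |c|^{-1}\bigr)
  = |c|^{-1}\, \phi(0),
```

which is the familiar identity $\delta(c\,x) = \delta(x)/|c|$.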

Definition

Let $X$ be a smooth manifold, let $(\psi_i \colon \Omega_i \subset X \to \tilde{\Omega}_i \subset \mathbb{R}^n)_{i \in I}$ be a system of charts, and let $T_i \in \mathcal{D}'(\tilde{\Omega}_i)$ be such that for all $\phi \in \mathcal{D}(\tilde{\Omega}_i \cap \tilde{\Omega}_j)$

$$T_j(\phi) := (\psi_i \circ \psi_j^{-1})^{*}\, T_i(\phi)$$

holds in $\psi_i(\Omega_j \cap \Omega_i)$. Then the system $T := (T_i)_{i \in I}$ is called a distribution on $X$. This distribution $T \in \mathcal{D}'(X)$ is uniquely determined and independent of the choice of charts.

There are other ways to define distributions on manifolds. The definition via density bundles has the advantage that no system of local charts needs to be chosen.

Regular distributions on manifolds

With this definition one can again assign a distribution to every continuous function by means of the integral representation. Let $u \in C(X)$ be a continuous function on the manifold; then $u \circ \psi_i^{-1}$ is a continuous function on $\tilde{\Omega}_i \subset \mathbb{R}^n$. Using the integral representation for regular distributions,

$$T_i(\phi) := \int_{\tilde{\Omega}_i} (u \circ \psi_i^{-1})(x_i)\, \phi(x_i)\, \mathrm{d}x_i,$$

one obtains a system $(T_i)_{i \in I}$ that forms a distribution on $X$.
