# Weak solution

In mathematics, a weak solution (also called a generalized solution) of an ordinary or partial differential equation is a function for which not all derivatives need exist, but which can nevertheless be regarded, in a precise sense, as a solution of the equation. There are many different definitions of weak solutions, each suited to a different class of equations; one of the most important is based on the concept of a distribution.

To avoid the language of distributions, the differential equation can be rewritten in such a way that no derivatives of the solution function appear. This new form of the equation is called the weak formulation, and its solutions are called weak solutions. Perhaps surprisingly, a differential equation can have solutions that are not differentiable, and these can be found via the weak formulation.

Weak solutions are important because the modeling of real-world problems often leads to differential equations that do not possess sufficiently smooth solutions; the weak formulation is then the only way to solve such equations. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that these solutions are in fact sufficiently smooth.

## A concrete example

To illustrate this, consider the following first-order wave equation

$$\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0 \quad\quad (1)$$

(see partial derivative for the notation used), where $u = u(t,x)$ is a function of two real variables. If one now assumes that $u$ is continuously differentiable on the Euclidean plane $\mathbb{R}^2$, multiplies equation (1) by a smooth function $\varphi$ with compact support, and integrates, one obtains

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{\partial u(t,x)}{\partial t}\,\varphi(t,x)\,\mathrm{d}t\,\mathrm{d}x + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{\partial u(t,x)}{\partial x}\,\varphi(t,x)\,\mathrm{d}t\,\mathrm{d}x = 0.$$

Using Fubini's theorem, which allows the order of integration to be changed, and integration by parts (in $t$ for the first term and in $x$ for the second), this becomes

$$-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} u(t,x)\,\frac{\partial \varphi(t,x)}{\partial t}\,\mathrm{d}t\,\mathrm{d}x - \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} u(t,x)\,\frac{\partial \varphi(t,x)}{\partial x}\,\mathrm{d}t\,\mathrm{d}x = 0. \quad\quad (2)$$

Note that although the integrals formally run from $-\infty$ to $\infty$, the integration actually takes place over a finite rectangle, since $\varphi$ has compact support; for this reason the integration by parts produces no boundary terms. We have thus shown that equation (2) follows from equation (1) whenever $u$ is continuously differentiable.
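The passage from (1) to (2) can be checked numerically. The sketch below (the bump test function, the quadrature grid, and the sample functions are my own choices, not from the article) approximates the left-hand side of (2) for a classical solution $\sin(x-t)$, for the non-differentiable function $|t-x|$ discussed next, and for a function that solves neither equation:

```python
import math

def g(s):
    """C^2 bump on (-1, 1): g, g', g'' all vanish at the endpoints."""
    return (1.0 - s * s) ** 3 if abs(s) < 1.0 else 0.0

def dg(s):
    """Derivative of g."""
    return -6.0 * s * (1.0 - s * s) ** 2 if abs(s) < 1.0 else 0.0

def weak_residual(u, n=200):
    """Approximate -iint u (phi_t + phi_x) dt dx for the test function
    phi(t, x) = g(t) g(x), via the product Simpson rule on [-1, 1]^2.
    A value near zero means u satisfies the weak formulation (2)."""
    h = 2.0 / n
    def w(i):  # Simpson weights 1, 4, 2, ..., 4, 1
        return 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
    total = 0.0
    for i in range(n + 1):
        t = -1.0 + i * h
        for j in range(n + 1):
            x = -1.0 + j * h
            total += w(i) * w(j) * u(t, x) * (dg(t) * g(x) + g(t) * dg(x))
    return -total * (h / 3.0) ** 2

smooth = weak_residual(lambda t, x: math.sin(x - t))  # classical solution of (1)
kinked = weak_residual(lambda t, x: abs(t - x))       # not differentiable on x = t
bogus  = weak_residual(lambda t, x: x)                # solves neither (1) nor (2)
print(smooth, kinked, bogus)  # first two near zero, third clearly nonzero
```

Both the smooth and the kinked function pass the test (up to quadrature error), while the third does not; this is exactly the point of the weak formulation.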
The idea behind weak solutions is that there exist functions $u$ which satisfy equation (2) for every $\varphi$ but are not differentiable, and which therefore cannot solve equation (1). A simple example of such a function is $u(t,x) = |t - x|$. (That this $u$ actually satisfies equation (2) is easily shown by integration by parts on each of the two regions below and above the line $x = t$.) A function $u$ that solves equation (2) is called a weak solution of equation (1).

## The general case

As in the example above, the general idea is to multiply the differential equation for the unknown function $u$ by so-called test functions $\varphi$ and to transfer all derivatives from $u$ to $\varphi$ by integration by parts. The equation obtained in this way can then also have non-differentiable solutions. This approach works for quite general equations. To see this, consider the linear differential operator

$$P\colon C^{k}(W)\to C^{0}(W),\qquad P(u)(x)\,{\overset{\text{Def.}}{=}}\,\sum_{\vert\alpha\vert\leq k} a_{\alpha}(x)\,D^{\alpha}u(x)$$

on an open subset $W \subset \mathbb{R}^n$, where $\alpha = (\alpha_1,\ldots,\alpha_n) \in \mathbb{N}^n$ is a multi-index and the coefficient functions $a_\alpha$ are sufficiently smooth. Here

$$D^{\alpha}u = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\,\frac{\partial^{\alpha_2}}{\partial x_2^{\alpha_2}}\cdots\frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}\,u$$

denotes the $\alpha$-th partial derivative of $u$. After multiplication by a smooth test function $\varphi$ with compact support in $W$, the differential equation $P(u)(x) = 0$ can be converted by integration by parts into the equation

$$\int_W u(x)\,Q(\varphi)(x)\,\mathrm{d}x = 0,$$

where the differential operator $Q(\varphi)(x)$ is defined by

$$Q(\varphi)(x) = \sum_{\vert\alpha\vert\leq k} (-1)^{\vert\alpha\vert}\,D^{\alpha}\left[a_{\alpha}(x)\,\varphi(x)\right].$$

The factor $(-1)^{\vert\alpha\vert}$ appears because transferring all $\vert\alpha\vert$ partial derivatives off $u$ requires $\vert\alpha\vert$ integrations by parts, each of which contributes a factor of $-1$. The differential operator $Q$ is called the formally adjoint operator of $P$.
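The defining property of the formally adjoint operator, $\int_W P(u)\,\varphi\,\mathrm{d}x = \int_W u\,Q(\varphi)\,\mathrm{d}x$, can be illustrated numerically in one dimension. In the sketch below (the coefficients, the function $u$, and the test function are arbitrary choices of mine) the second-order operator $P(u) = a u'' + b u' + c u$ on $W = (0,1)$ is used, whose formal adjoint is $Q(\varphi) = (a\varphi)'' - (b\varphi)' + c\varphi$:

```python
import math

# Hypothetical ingredients (my own choices): P(u) = a*u'' + b*u' + c*u on W = (0, 1).
a = lambda x: 1.0 + x
b = lambda x: x
c = lambda x: 1.0
u = lambda x: math.sin(3.0 * x)       # any smooth function; need not vanish on the boundary
phi = lambda x: (x * (1.0 - x)) ** 2  # test function: phi and phi' vanish at 0 and 1

def d1(f, x, h=1e-3):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-3):
    # central difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def simpson(f, lo, hi, n=200):
    # composite Simpson rule with n (even) subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return s * h / 3.0

Pu   = lambda x: a(x) * d2(u, x) + b(x) * d1(u, x) + c(x) * u(x)  # P(u)
aphi = lambda x: a(x) * phi(x)
bphi = lambda x: b(x) * phi(x)
Qphi = lambda x: d2(aphi, x) - d1(bphi, x) + c(x) * phi(x)        # Q(phi)

lhs = simpson(lambda x: Pu(x) * phi(x), 0.0, 1.0)  # integral of P(u) * phi
rhs = simpson(lambda x: u(x) * Qphi(x), 0.0, 1.0)  # integral of u * Q(phi)
print(abs(lhs - rhs))  # small: the two integrals agree up to discretization error
```

The two integrals agree because $\varphi$ and $\varphi'$ vanish at the endpoints, so the integrations by parts produce no boundary terms.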
In summary, the original (strong) problem is to find a $k$-times differentiable function $u$ defined on $W$ with

$$P(u)(x) = 0 \qquad \text{for all } x \in W,$$

a so-called strong solution. Instead, one now looks for an integrable function $u$ that satisfies

$$\int_W u(x)\,Q(\varphi)(x)\,\mathrm{d}x = 0$$

for every smooth function $\varphi$ with compact support in $W$, a so-called weak solution.

## Distribution solutions

The weak solution concept is significantly broadened by the observation that the weak formulation depends linearly on the test function $\varphi$, which is ultimately due to the linearity of the integral. If $u$ is an integrable function on $W$ and $\mathcal{D}(W)$ denotes the vector space of all test functions, that is, of all infinitely differentiable functions with compact support in $W$, then $\textstyle\int_W u(x)\varphi(x)\,\mathrm{d}x$ depends linearly on $\varphi$; in other words, $u$ can be understood as the linear functional

$$T_u\colon \mathcal{D}(W)\rightarrow\mathbb{R},\qquad T_u(\varphi) = \int_W u(x)\,\varphi(x)\,\mathrm{d}x$$

on the vector space $\mathcal{D}(W)$. We now want to enlarge the space of possible solutions to all linear functionals $T\colon \mathcal{D}(W)\rightarrow\mathbb{R}$. To do this, we must be able to differentiate such functionals and to multiply them by functions, since that is what was done with $u$. The idea is to shift the desired operations over to the test functions. If we consider a derivative $\partial_i u$, then

$$T_{\partial_i u}(\varphi) = \int_W \partial_i u(x)\,\varphi(x)\,\mathrm{d}x = -\int_W u(x)\,\partial_i\varphi(x)\,\mathrm{d}x = -T_u(\partial_i\varphi).$$

Hence we define the partial derivative $\partial_i T$ of a functional $T\colon \mathcal{D}(W)\rightarrow\mathbb{R}$ by the formula

$$(\partial_i T)(\varphi) := -T(\partial_i\varphi).$$

This is well defined, since $\partial_i\varphi$ is again a test function, so the right-hand side of the definition makes sense; and the preceding calculation shows that $\partial_i T_u = T_{\partial_i u}$, so that, thanks to the minus sign coming from integration by parts, this is indeed an extension of the notion of derivative for functions. In an entirely analogous way one defines how to multiply $T$ by an arbitrarily often differentiable function $a\colon W\rightarrow\mathbb{R}$. The situation is even simpler here: we set

$$(aT)(\varphi) := T(a\varphi)$$

and note, for well-definedness, that $a\varphi$ is again a test function. Inserting these definitions into the above formula for the formally adjoint operator $Q$, we obtain

$$\begin{aligned}
\int_W u(x)\,Q(\varphi)(x)\,\mathrm{d}x &= \int_W u(x)\sum_{\vert\alpha\vert\leq k}(-1)^{\vert\alpha\vert}D^{\alpha}\left[a_{\alpha}(x)\varphi(x)\right]\mathrm{d}x\\
&= \sum_{\vert\alpha\vert\leq k}(-1)^{\vert\alpha\vert}\int_W u(x)\,D^{\alpha}\left[a_{\alpha}(x)\varphi(x)\right]\mathrm{d}x\\
&= \sum_{\vert\alpha\vert\leq k}(-1)^{\vert\alpha\vert}\,T_u\!\left(D^{\alpha}\left[a_{\alpha}\varphi\right]\right)\\
&= \sum_{\vert\alpha\vert\leq k}\left(D^{\alpha}T_u\right)(a_{\alpha}\varphi)\\
&= \sum_{\vert\alpha\vert\leq k}a_{\alpha}D^{\alpha}T_u(\varphi).
\end{aligned}$$

It therefore makes sense to look for linear functionals $T\colon \mathcal{D}(W)\rightarrow\mathbb{R}$ that satisfy

$$\sum_{\vert\alpha\vert\leq k}a_{\alpha}D^{\alpha}T(\varphi) = 0$$

for every $\varphi$.
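The shift of derivatives onto test functions can be made concrete in code. In the sketch below (all names and the quadrature setup are my own choices), a distribution is represented as a callable on test functions, and the distributional derivative of $T_u$ for the non-differentiable function $u(s) = |s|$ is compared with $T_v$ for $v(s) = \operatorname{sign}(s)$, its classical derivative away from the origin:

```python
import math

def simpson(f, lo, hi, n=400):
    # composite Simpson rule with n (even) subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return s * h / 3.0

def T(u, lo=-1.0, hi=1.0):
    """Regular distribution T_u(phi) = integral of u * phi over (lo, hi);
    assumes the support of phi lies inside (lo, hi)."""
    return lambda phi: simpson(lambda s: u(s) * phi(s), lo, hi)

def derivative(T_func):
    """Distributional derivative: (dT)(phi) := -T(phi')."""
    def dT(phi, h=1e-4):
        dphi = lambda s: (phi(s + h) - phi(s - h)) / (2.0 * h)
        return -T_func(dphi)
    return dT

# A smooth, deliberately asymmetric bump supported in (-0.35, 0.65)
phi = lambda s: (1.0 - (2.0 * s - 0.3) ** 2) ** 3 if abs(2.0 * s - 0.3) < 1.0 else 0.0

T_abs  = T(lambda s: abs(s))                               # T_u for u(s) = |s|
T_sign = T(lambda s: math.copysign(1.0, s) if s else 0.0)  # T_v for v(s) = sign(s)

lhs = derivative(T_abs)(phi)  # (d T_|s|)(phi)
rhs = T_sign(phi)             # T_sign(phi)
print(lhs, rhs)               # the two values agree up to quadrature error
```

This reproduces, numerically, the classical fact that the distributional derivative of $|s|$ is $\operatorname{sign}(s)$, even though $|s|$ is not differentiable at the origin.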
A closer look shows that one should restrict attention to those functionals that satisfy a certain continuity condition. Such functionals are called distributions, and a distribution satisfying the above equation is called a distributional solution, or weak solution, of the differential equation. One can now develop a solution theory for these weak solutions. This is complemented by theorems specifying conditions under which a weak solution is a distribution $T_u$ that comes from a strong solution $u$; such results finally lead back to the originally sought solutions of the differential equation.

## Other types of weak solutions

The distribution-based notion of weak solution is not always satisfactory. For hyperbolic systems, for instance, uniqueness can fail, so the problem must be supplemented with entropy conditions or other selection criteria. For non-linear partial differential equations such as the Hamilton-Jacobi equation, there is a somewhat different concept of weak solution, the so-called viscosity solution.

## Literature

• L. C. Evans: *Partial Differential Equations*, American Mathematical Society, Providence 1998, ISBN 0-8218-0772-2.
