# Galerkin method

The Galerkin method (also Galerkin approach, after Boris Galerkin) is a numerical method for the approximate solution of operator equations, such as partial differential equations. It is the most common variant of the method of weighted residuals, in which the residual of an approximate solution is minimized.

## Short version

John William Strutt, 3rd Baron Rayleigh, and Walther Ritz expressed the sought function in variational problems as a linear combination of basis functions, thereby reducing the variational problem to an ordinary optimization problem for a function of the parameters.

For solving an operator equation

$$T\left(f(x)\right) = 0$$

the sought function can, for example, be written as the ansatz

$$f(x) = \sum_i c_i \Phi_i(x),$$

which, when substituted into the left-hand side of the operator equation, yields a function that depends on the coefficients $c_i$. Following the method of weighted residuals, the free coefficients $c_i$ are chosen such that this function vanishes in the space spanned by certain basis functions $\Psi_j(x)$, i.e. becomes orthogonal to these basis functions. This gives the following equations for determining the $c_i$:

$$\int \Psi_j(x)\, T\left(\sum_i c_i \Phi_i(x)\right) dx = 0,$$

which, if the operator $T$ is linear, form a system of linear equations. This condition is also called Galerkin orthogonality. For $\Psi_j(x) = \delta(x - x_j)$ one obtains a point collocation method; for $\Psi_j(x) = \Phi_j(x)$ one obtains the Galerkin method, which in Russian books is also attributed to Ivan Grigoryevich Bubnov and is therefore called the Bubnov-Galerkin method there.
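As an illustration, here is a minimal sketch of the Bubnov-Galerkin procedure for a concrete model problem. The operator $T(u) = u'' + 1$ on $[0, 1]$ with $u(0) = u(1) = 0$, the sine basis, and all function names are illustrative choices, not taken from the text:

```python
import numpy as np

# Minimal Bubnov-Galerkin sketch (Psi_j = Phi_j) for the model problem
# T(u) = u''(x) + 1 = 0 on [0, 1], u(0) = u(1) = 0, whose exact solution
# is u(x) = x(1 - x)/2. Basis: Phi_i(x) = sin(i*pi*x), which satisfies
# the boundary conditions. Model problem and basis are illustrative choices.

def galerkin_coefficients(n_modes):
    """Solve int Phi_j(x) * (sum_i c_i Phi_i''(x) + 1) dx = 0 for the c_i.

    The sine basis is L2-orthogonal, so the linear system is diagonal:
    -c_j * (j*pi)^2 / 2 + (1 - cos(j*pi)) / (j*pi) = 0.
    """
    j = np.arange(1, n_modes + 1)
    load = (1.0 - np.cos(j * np.pi)) / (j * np.pi)  # int_0^1 sin(j*pi*x) dx
    return 2.0 * load / (j * np.pi) ** 2

def u_approx(x, c):
    """Evaluate the Galerkin approximation sum_i c_i sin(i*pi*x)."""
    i = np.arange(1, len(c) + 1)
    return np.sin(np.pi * np.outer(x, i)) @ c

x = np.linspace(0.0, 1.0, 5)
c = galerkin_coefficients(10)
# u_approx(x, c) converges to the exact solution x*(1 - x)/2.
```

With $\Psi_j(x) = \delta(x - x_j)$ instead, the integrals would collapse to point evaluations and the same structure would yield a collocation system.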

## Derivation

The starting point for the Galerkin method is a so-called "variational" formulation of the initial value problem.

So let the initial value problem (IVP) $u'(t) = f(t, u(t))$, $u(t_0) = u_0$ be given on an interval $I = [t_0, t_0 + T]$.

We also assume the right-hand side $f$ to be (globally) Lipschitz continuous, so that the solution is unique (Picard-Lindelöf theorem).

The differential equation of the initial value problem is first multiplied by a "test function" $\phi$ and integrated over the solution interval $I$. From the initial value problem we obtain

$$u'(t) = f(t, u(t)) \;\Rightarrow\; u'(t)\phi(t) = f(t, u(t))\phi(t) \;\Rightarrow\; \int_I u'(t)\phi(t)\, dt = \int_I f(t, u(t))\phi(t)\, dt.$$

This relationship makes sense for every continuous and piecewise continuously differentiable function $u$. The vector space of all such functions is denoted by $V$ from here on. "Piecewise" means that differentiability is required only outside a finite number of exceptional points in $I$; the left integral is then understood as the sum of the integrals over the pieces.

Every function $u$ that satisfies the initial condition $u(t_0) = u_0$ and the integral relation for every test function $\phi$ is also a solution of the initial value problem.

The Galerkin method determines an approximate solution $u_h$ in a finite-dimensional subspace $V_h \subset V$ by imposing the initial condition and the integral equation in the subspace: $u_h(t_0) = u_0$ and

$$\int_I u_h'(t)\, \phi_h(t)\, dt = \int_I f(t, u_h(t))\, \phi_h(t)\, dt$$

for every $\phi_h \in W_h$.

The discrete test space $W_h$ is usually chosen differently from $V_h$. Choose, for example,

$$V_h := \left\{ v_h : I \rightarrow \mathbb{R} \;:\; v_h \in C[I],\ v_h|_{(t_{n-1}, t_n)} \in P_1,\ n = 1, \dots, N \right\},$$
$$W_h := \left\{ \phi_h : I \rightarrow \mathbb{R} \;:\; \phi_h|_{(t_{n-1}, t_n)} \in P_0,\ n = 1, \dots, N \right\},$$

where $t_0 < t_1 < \dots < t_N = t_0 + T$ is a partition of $I$ and $P_k$ denotes the polynomials of degree at most $k$: continuous piecewise linear trial functions and piecewise constant test functions.

The integral equation can be restricted to each individual subinterval $[t_{n-1}, t_n]$, since the test functions only need to be piecewise continuous.

$$u_h(t_n) - u_h(t_{n-1}) = \int_{t_{n-1}}^{t_n} u_h'(t)\, dt = \int_{t_{n-1}}^{t_n} f(t, u_h(t))\, dt$$
$$\Rightarrow\quad u_h(t_n) = u_h(t_{n-1}) + \int_{t_{n-1}}^{t_n} f(t, u_h(t))\, dt$$

This means that the Galerkin method is a time-stepping method. If, for example, the integral on the right-hand side is evaluated with the trapezoidal rule, then for the values $y_n := u_h(t_n)$ we obtain

$$y_n = y_{n-1} + \frac{1}{2} h_n \left( f(t_n, y_n) + f(t_{n-1}, y_{n-1}) \right)$$

with step size $h_n := t_n - t_{n-1}$.
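This time-stepping scheme can be sketched in a few lines. The following is a hedged illustration: the fixed-point iteration used to resolve the implicit per-step equation, the test problem, and all function names are my choices, not from the text.

```python
import numpy as np

# Sketch of the derived time-stepping scheme (the implicit trapezoidal rule).
# Each step is implicit in y_n; here the implicit equation is resolved by a
# simple fixed-point iteration, which converges for small h thanks to the
# Lipschitz condition on f. All names are illustrative.

def trapezoidal_step(f, t_prev, y_prev, h, n_iter=50):
    """One step of y_n = y_{n-1} + h/2 * (f(t_n, y_n) + f(t_{n-1}, y_{n-1}))."""
    t_next = t_prev + h
    y_next = y_prev + h * f(t_prev, y_prev)  # explicit Euler predictor
    for _ in range(n_iter):                  # fixed-point correction
        y_next = y_prev + 0.5 * h * (f(t_next, y_next) + f(t_prev, y_prev))
    return y_next

def solve_ivp_trapezoidal(f, t0, u0, T, N):
    """March u' = f(t, u), u(t0) = u0 over [t0, t0 + T] in N equal steps."""
    ts = np.linspace(t0, t0 + T, N + 1)
    ys = [u0]
    for t_prev, t_next in zip(ts[:-1], ts[1:]):
        ys.append(trapezoidal_step(f, t_prev, ys[-1], t_next - t_prev))
    return ts, np.array(ys)

# Test problem: u' = -u, u(0) = 1, with exact solution exp(-t).
ts, ys = solve_ivp_trapezoidal(lambda t, u: -u, 0.0, 1.0, 2.0, 100)
```

In practice the implicit equation is more often solved with a Newton iteration, but the fixed-point form keeps the sketch short.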

## On the "variational" formulation of the initial value problem

As in the derivation section, the starting point is the initial value problem with Lipschitz condition on $I = [t_0, t_0 + T]$. The occurring functions may also be vector-valued, and $\langle \cdot, \cdot \rangle$ denotes the Euclidean scalar product. For a function $u \in C^1(I)^d$, i.e. a continuously differentiable function with values of dimension $d$ (every higher-order case can be reduced to the first-order case), with initial value $u(t_0) = u_0$, the initial value problem is equivalent to the variational formulation:

$$\int_I \left\langle u'(t) - f(t, u(t)),\, \phi(t) \right\rangle dt = 0 \quad \forall \phi \in C(I)^d$$

Since the test functions $\phi$ can be varied arbitrarily, this formulation of the initial value problem is called "variational".

Expressed geometrically, the variational formulation of the initial value problem says that the residual of the solution function $u$,

$$R(u) := u' - f(\cdot, u),$$

is orthogonal to all test functions $\phi \in C(I)^d$ with respect to the scalar product of $L^2(I)^d$.

## More detailed representation

### Method

The residual is distributed over the region under consideration and is weighted with suitable weighting functions, hence the name "weighted residuals". The integral of the weighted residual over the region should be as small as possible or, better still, vanish entirely. The weighting functions carry parameters whose number corresponds to the number of degrees of freedom of the system; these lead to just as many equations, and thus to a system of equations of the same size as the one known from the finite element method. In the Galerkin method, the weighting functions are identical to the shape functions in the elements.

### Example

Let $D$ be a differential operator. We seek the solution $u(x)$ of the differential equation

$$D(u)(x) + f(x) = 0 \qquad \text{(Equation 1)}$$

with a given function $f(x)$ and additional boundary conditions for $u$. To this end, an approximate solution $v$ for $u$ is set up as a linear combination of basis functions $\Phi_i(x)$ from a function space $V$:

$$v(x) = \sum_{i=1}^{N} c_i \Phi_i(x)$$

with coefficients $c_i$ to be determined. In general, the function $v$ does not yet satisfy the differential equation (1); a residual remains:

$$r(x) = D(v)(x) + f(x).$$

On the space $V$ an inner product $\langle h, g \rangle$ is defined with the property that $g = 0$ whenever $\langle h, g \rangle = 0$ for all functions $h$ in $V$. The inner product is often defined as

$$\langle h, g \rangle = \int h(x)\, g(x)\, dx.$$

Often one cannot determine the exact solution, for which the residual $r$ vanishes (and with it $\langle w, r \rangle$ for every test function $w$), but only an approximate solution for which the inner product of the residual with a set of selected, linearly independent "weight functions" $w$ vanishes:

$$\langle w, r \rangle = 0.$$

In the Galerkin method, the basis functions $\Phi_j$, $j = 1, \dots, N$, of $V$ themselves are chosen as weight functions, so that a system of equations results for the coefficients $c_i$:

$$\left\langle \Phi_j,\, D\left(\sum_{i=1}^{N} c_i \Phi_i\right) + f \right\rangle = 0$$
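As a hedged sketch of assembling and solving this system, take $D(u) = u''$ with $f(x) = 2$ on $[0, 1]$ and homogeneous boundary conditions. The polynomial basis $\Phi_i(x) = x^i - x^{i+1}$ and all names below are illustrative choices, not from the text:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Assemble and solve <Phi_j, D(sum_i c_i Phi_i) + f> = 0 for D(u) = u''
# and f(x) = 2 on [0, 1] with u(0) = u(1) = 0. The exact solution x(1 - x)
# lies in the span of the basis, so the Galerkin solution reproduces it.
# Basis and model problem are illustrative choices.

N = 4
f = Polynomial([2.0])  # the constant function f(x) = 2

def basis(i):
    """Phi_i(x) = x^i - x^(i+1), which vanishes at x = 0 and x = 1."""
    coeffs = np.zeros(i + 2)
    coeffs[i], coeffs[i + 1] = 1.0, -1.0
    return Polynomial(coeffs)

def inner(p, q):
    """Exact inner product <p, q> = int_0^1 p(x) q(x) dx for polynomials."""
    antiderivative = (p * q).integ()
    return antiderivative(1.0) - antiderivative(0.0)

# Linearity of D turns the Galerkin condition into A c = b with
# A[j, i] = <Phi_j, Phi_i''> and b[j] = -<Phi_j, f>.
A = np.array([[inner(basis(j), basis(i).deriv(2)) for i in range(1, N + 1)]
              for j in range(1, N + 1)])
b = np.array([-inner(basis(j), f) for j in range(1, N + 1)])
c = np.linalg.solve(A, b)
# c is approximately (1, 0, 0, 0): the first basis function x - x^2
# is already the exact solution x(1 - x).
```

For basis functions that do not satisfy the boundary conditions, or for a nonlinear $D$, the assembly would need boundary handling and a nonlinear solve; this sketch only covers the linear case above.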

### Field of application

The Galerkin method can be used when no natural extremal principle exists for the solution of the differential equation. It is thus one basis of the finite element method and extends its applicability to physical problems (continuum problems) that lack such a natural extremal principle; examples are steady or unsteady flows. A natural extremal principle (natural variational principle) does exist, e.g., for mechanical problems of solid mechanics, where the energy content must attain a minimum.

According to Olgierd Cecil Zienkiewicz, the Galerkin solution is identical to a natural variational solution, or can at least be interpreted as one. The finite element method (FEM) is a special Ritz-Galerkin method.