# Ordinary differential equation

An ordinary differential equation (often abbreviated ODE; in German GDGL, for *gewöhnliche Differentialgleichung*) is a differential equation in which all derivatives of the unknown function are taken with respect to exactly one variable.

Many physical, chemical and biological processes in nature can be described mathematically by such equations, e.g. radioactive decay, the motion of bodies, many kinds of oscillation processes, or the growth behavior of animal populations. Ordinary differential equations are therefore often used in scientific models to analyze and simulate such processes or to make predictions. In many cases the differential equation cannot be solved analytically, and one must then rely on numerical methods. See the main article: List of numerical procedures, section Numerics of ordinary differential equations.

## Historical development

Historically, the first differential equations were used to model the motion of objects; particularly noteworthy are the equations for motion with constant velocity or constant acceleration. In 1590 Galileo Galilei recognized the connection between the time of fall of a body and its falling speed, as well as the path of the fall, and formulated the law of free fall, still using geometric means.

When Isaac Newton also considered motions with friction proportional to the magnitude or to the square of the velocity, he was forced to introduce differential calculus and the formalism of differential equations that is common today.

By precisely formulating the concepts of limit, derivative and integral, Augustin Louis Cauchy finally put the theory of ordinary differential equations on a solid foundation in the 19th century and thus made it accessible to many sciences.

The scientific interest in differential equations is essentially based on the fact that complete models can be created with them on the basis of comparatively simple observations and experiments.

Only a few types of differential equations can be solved analytically. Nevertheless, qualitative statements about stability, periodicity or bifurcation can often be made even when the differential equation cannot be solved explicitly. One of the most important tools for scalar differential equations are arguments using a comparison theorem.

## General definition

Let $\Omega \subseteq \mathbb{R} \times \left(\mathbb{R}^{m}\right)^{n+1}$ with $n \in \mathbb{N}$, and let $f \colon \Omega \to \mathbb{R}^{m}$ be a continuous function. Then

$$f\left(x, y, y', y'', \dotsc, y^{(n)}\right) = 0$$

is called an ordinary differential equation system of order $n$ consisting of $m$ equations (here $x$ is the independent variable, $y' = \tfrac{dy}{dx}$, etc.). In the case $m = 1$, it is called an ordinary differential equation of order $n$.

Its solutions are $n$-times differentiable functions $y \colon I \to \mathbb{R}^{m}$ that satisfy the differential equation on an interval $I \subset \mathbb{R}$ to be determined. If one seeks a special solution for which, with given $x_{0} \in I$ and $y_{0}, \dotsc, y_{n-1} \in \mathbb{R}^{m}$, additionally

$$y(x_{0}) = y_{0},\; y'(x_{0}) = y_{1},\; \dotsc,\; y^{(n-1)}(x_{0}) = y_{n-1}$$

is fulfilled, this is called an initial value problem.

If the differential equation can be solved for the highest occurring derivative, so that it has the form

$$y^{(n)} = f\left(x, y, y', y'', \dotsc, y^{(n-1)}\right),$$

it is called explicit, otherwise implicit; see also the implicit function theorem.

In the literature on ordinary differential equations, two different notations are standard. In one, the independent variable is called $x$ and the derivatives of the function $y$ are written $y', y''$, etc. The other school uses a notation going back to Newton: the independent variable already carries a meaning; it is the time $t$. Solutions are then often denoted by $x$, and the derivatives with respect to time are written $\dot{x}, \ddot{x}$. Since this article was edited by representatives of both schools, both notations appear.

### Existence and uniqueness

A few criteria make it possible to decide whether a solution exists at all. The differential equation by itself is generally not sufficient to determine the solution uniquely.

For example, the basic motion of all swinging pendulums is the same and can be described by a single differential equation. The actual course of the motion, however, is determined by the boundary or initial condition(s): when was the pendulum pushed, and how large was the initial deflection?

The local solvability of initial value problems for ordinary first-order differential equations is described by the Picard-Lindelöf theorem and Peano's theorem. In a second step, the existence of a local solution can be used to infer the existence of a non-continuable (maximal) solution. Building on this, the theorem on the maximal interval of existence can then sometimes be used to prove global existence. Uniqueness is obtained as an application of Gronwall's inequality.
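The existence statement behind the Picard-Lindelöf theorem is constructive: the Picard iteration $y_{k+1}(x) = y_{0} + \int_{x_{0}}^{x} f(t, y_{k}(t))\,dt$ converges to the solution. As an illustrative sketch (function and parameter names are invented here), the iteration can be run numerically for $y' = y$, $y(0) = 1$, whose exact solution is $e^{x}$:

```python
import math

def picard_iterate(f, x0, y0, x_end, n_grid=200, n_iter=25):
    """Approximate the Picard iteration y_{k+1}(x) = y0 + int_{x0}^{x} f(t, y_k(t)) dt
    on a uniform grid, evaluating the integral with the trapezoidal rule."""
    h = (x_end - x0) / n_grid
    xs = [x0 + i * h for i in range(n_grid + 1)]
    ys = [y0] * (n_grid + 1)          # start from the constant function y(x) = y0
    for _ in range(n_iter):
        new = [y0]
        acc = 0.0
        for i in range(1, n_grid + 1):
            acc += 0.5 * h * (f(xs[i - 1], ys[i - 1]) + f(xs[i], ys[i]))
            new.append(y0 + acc)
        ys = new                       # the next Picard iterate
    return xs, ys

# y' = y, y(0) = 1  ->  y(x) = exp(x)
xs, ys = picard_iterate(lambda x, y: y, 0.0, 1.0, 1.0)
print(ys[-1], math.exp(1.0))  # the iterates converge towards e
```

The iterates are exactly the partial sums of the exponential series (up to quadrature error), which illustrates why the fixed-point argument converges.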

### Reduction of higher order equations to first order systems

Ordinary differential equations of any order can always be reduced to a system of ordinary first-order differential equations. If an ordinary differential equation has order $n$, the interdependent functions $y_{1}, y_{2}, \dotsc, y_{n}$ are introduced:

$$\begin{aligned} y_{1} &:= y \\ y_{2} &:= y_{1}' \\ y_{3} &:= y_{2}' \\ &\;\;\vdots \\ y_{n} &:= y_{n-1}' \end{aligned}$$

The explicit differential equation of order $n$ for $y$ becomes

$$y_{n}' = f(x, y_{1}, \ldots, y_{n}),$$

so one obtains a system of $n$ ordinary first-order differential equations:

$$(y_{1}, \dotsc, y_{n-1}, y_{n})' = (y_{2}, \dotsc, y_{n}, f(x, y_{1}, \ldots, y_{n})).$$
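This reduction is also how higher-order equations are treated numerically: the order-$n$ equation is handed to a first-order solver as a vector-valued system. A minimal sketch in Python (a hand-written classical Runge-Kutta stepper; all names are illustrative), applied to $y'' = -y$ with $y(0) = 0$, $y'(0) = 1$, whose solution is $\sin(x)$:

```python
import math

def rk4_solve(f, y0, x0, x_end, n_steps=1000):
    """Integrate the first-order system y' = f(x, y) (y given as a list)
    with the classical fourth-order Runge-Kutta scheme."""
    h = (x_end - x0) / n_steps
    x, y = x0, list(y0)
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

# y'' = -y reduced to the system (y1, y2)' = (y2, -y1) with y1 = y, y2 = y':
def oscillator(x, y):
    y1, y2 = y
    return [y2, -y1]

y = rk4_solve(oscillator, [0.0, 1.0], 0.0, 1.0)
print(y[0], math.sin(1.0))  # y1(1) matches sin(1) to solver accuracy
```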

Conversely, some systems of differential equations can be combined into a single higher-order differential equation.

## Examples

• The law of radioactive decay,
$$\dot{N} \sim N,$$
states that for a quantity of unstable atoms, the number of decaying atoms depends proportionally on the total number $N$ of atoms present.
• Newton's equation of motion,
$$m \cdot \ddot{\vec{r}}(t) = \vec{F}\left(\vec{r}(t), t\right),$$
makes statements about the motion of a particle once the force $\vec{F}$, which depends on the time $t$ and the position $\vec{r}$ of the particle, is known.
• Besides simple relationships for the change of a single quantity, predictions can also be made about several quantities in one system, for instance with the Lotka-Volterra equations of ecology:
$$\dot{r} = Z_{r}rb - M_{r}r$$
$$\dot{b} = Z_{b}b - M_{b}rb$$
This system describes the temporal change of the predator population $r$ and the prey population $b$ with constant natural birth rates $Z$ and death rates $M$. Some important properties of this model can be summarized in the form of the so-called Lotka-Volterra rules. This and similar systems are widely used in theoretical biology to describe spreading processes and in epidemic models.
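As a sketch of how such a system is evaluated in practice, the Lotka-Volterra equations can be integrated numerically, here with Heun's method; the rate constants and initial populations below are made-up illustrative values:

```python
def lotka_volterra_heun(r0, b0, Zr, Mr, Zb, Mb, t_end=20.0, n=20000):
    """Integrate r' = Zr*r*b - Mr*r, b' = Zb*b - Mb*r*b with Heun's method
    (Euler predictor followed by a trapezoidal corrector)."""
    h = t_end / n
    r, b = r0, b0
    history = [(r, b)]
    for _ in range(n):
        dr1 = Zr * r * b - Mr * r
        db1 = Zb * b - Mb * r * b
        rp, bp = r + h * dr1, b + h * db1      # Euler predictor
        dr2 = Zr * rp * bp - Mr * rp
        db2 = Zb * bp - Mb * rp * bp
        r += h / 2 * (dr1 + dr2)               # trapezoidal corrector
        b += h / 2 * (db1 + db2)
        history.append((r, b))
    return history

# illustrative parameters: predator population r, prey population b
hist = lotka_volterra_heun(r0=1.0, b0=2.0, Zr=0.5, Mr=1.0, Zb=1.0, Mb=0.8)
print(hist[-1])  # both populations oscillate and remain positive
```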

## Special types of differential equations

The best-known type of ordinary differential equation is the linear differential equation of order $n$:

$$\sum_{i=0}^{n} a_{i}(x)\, y^{(i)}(x) = b(x)$$ for continuous $a_{i} \colon \mathbb{R} \to \mathbb{R}$.

Other important types of ordinary differential equations are the following:

• d'Alembert's differential equation: $y(x) = x\,g(y'(x)) + f(y'(x))$.
• Bernoulli's differential equation: $y'(x) = f(x)y(x) + g(x)y^{\alpha}(x)$ with $\alpha \neq 1$.
• Exact differential equation: $p\left(x, y(x)\right) + q(x, y(x))\,y'(x) = 0$, where the vector field $(p, q)$ has a potential function.
• Differential equation with fractional-linear argument: $y'(x) = f\left(\dfrac{ax + by(x) + c}{\alpha x + \beta y(x) + \gamma}\right)$.
• Linear differential equation system: $y'(x) = A(x)y(x) + b(x)$ for continuous $A \colon \mathbb{R} \to \mathbb{R}^{m \times m}$ and $b \colon \mathbb{R} \to \mathbb{R}^{m}$.
• Riccati differential equation: $y'(x) = f(x)y^{2}(x) + g(x)y(x) + h(x)$.
• Separable differential equation: $y'(x) = f\left(y(x)\right)g(x)$.

## Autonomous systems

A system of differential equations is called autonomous or time-invariant if the describing equation does not depend on the independent variable $x$, i.e. if the system has the form

$$f\left(y, y', y'', \dotsc, y^{(n)}\right) = 0.$$

A system of differential equations

$$y' = f(y), \quad y(0) = y_{0} \in \mathbb{R}^{m}$$

is called complete if for every initial value $y_{0}$ the global solution is defined on all of $\mathbb{R}$ and uniquely determined. This is the case, for example, when $f \colon \mathbb{R}^{m} \to \mathbb{R}^{m}$ is linearly bounded and Lipschitz continuous. Let $\varphi(\cdot, y_{0})$ denote this (uniquely determined global) solution. Then $\varphi \colon \mathbb{R} \times \mathbb{R}^{m} \to \mathbb{R}^{m}$ is called the flow of the differential equation $y' = f(y)$, and $(\mathbb{R}, \mathbb{R}^{m}, \varphi)$ forms a dynamical system.
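The flow of a complete system satisfies the group property $\varphi(s + t, y_{0}) = \varphi(t, \varphi(s, y_{0}))$. For $y' = -y$, which is linearly bounded and Lipschitz continuous, the flow is known in closed form, $\varphi(t, y_{0}) = y_{0}\,e^{-t}$, so the property can be checked directly (a toy illustration):

```python
import math

def flow(t, y0):
    """Flow of the complete autonomous system y' = -y: phi(t, y0) = y0 * exp(-t)."""
    return y0 * math.exp(-t)

# group property phi(s + t, y0) == phi(t, phi(s, y0))
s, t, y0 = 0.7, 1.9, 3.0
lhs = flow(s + t, y0)
rhs = flow(t, flow(s, y0))
print(lhs, rhs)  # equal up to floating-point rounding
```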

The case of planar autonomous systems ($n = 1$, $m = 2$) is particularly easy to analyze. With the help of the Poincaré-Bendixson theorem one can often prove the existence of periodic solutions. The Lotka-Volterra model is an important planar autonomous system.

Since the Poincaré-Bendixson theory relies centrally on the Jordan curve theorem, higher-dimensional analogues are false. In particular, it is very difficult to find periodic solutions of higher-dimensional autonomous systems.

## Solution methods for linear ordinary differential equations with constant coefficients

Many dynamic systems from technology, nature and society can be described by ordinary differential equations. Many physical problems that appear very different at first glance can be represented formally identically by a GDGL.

A dynamic system is a functional unit for processing and transmitting signals; the input variable $u(t)$ is defined as the cause and the output variable $y(t)$ as the result of the system's transmission behavior over time. If the input variable $u(t) = 0$, the GDGL is homogeneous, otherwise it is inhomogeneous.

A dynamic system behaves linearly if the effects of two linearly superimposed input signals are superimposed in the same way at the output of the system. A linear GDGL contains the sought function and its derivatives only in the first power; there must be no products of the sought function with its derivatives, and the sought function must not appear in the arguments of trigonometric functions, logarithms, etc.

A well-known example from mechanics is the second-order linear GDGL of a damped spring pendulum with spring stiffness $c$, mass $m$ and damping constant $d$. Here the input variable is the force $F$, and the output variable is the displacement $x$:

$$\ddot{x}(t) + \frac{d}{m}\,\dot{x}(t) + \frac{c}{m}\,x(t) = \frac{1}{m}\,F(t).$$
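As a sketch, this second-order GDGL can be reduced to a first-order system and integrated numerically; with $F = 0$ and positive damping, the deflection decays. The parameter values below are made up for illustration:

```python
import math

def simulate_spring(m, d, c, x0, v0, F=lambda t: 0.0, t_end=10.0, n=10000):
    """Integrate x'' + (d/m) x' + (c/m) x = F(t)/m with classical RK4,
    using the first-order system (x, v)' = (v, (F - d*v - c*x)/m)."""
    h = t_end / n
    t, x, v = 0.0, x0, v0

    def deriv(t, x, v):
        return v, (F(t) - d * v - c * x) / m

    for _ in range(n):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return x, v

# illustrative values: unforced, damped pendulum released from x = 1
x_end, v_end = simulate_spring(m=1.0, d=0.5, c=4.0, x0=1.0, v0=0.0)
print(x_end)  # the oscillation has decayed well below the initial deflection
```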

Linear time-invariant systems can be calculated using the following methods:

### Solution using the exponential approach

The solution of an inhomogeneous GDGL consists of the general solution of the homogeneous GDGL and a special solution (particular solution) of the inhomogeneous GDGL. Therefore the solution process of the inhomogeneous GDGL takes place in two stages, regardless of its order. The overall solution is the sum of the two solutions:

• The homogeneous solution of the GDGL describes the system behavior for given initial values of the system memory at time $t = 0$ and input signal $u(t) = 0$: the dynamic system is left to itself and produces only its free response as output signal. The homogeneous solution of the GDGL is zero if all initial conditions $y_{0} = 0$ of $y(t)$ and its derivatives are zero.
• The particular solution of the GDGL describes the transfer behavior of $y(t)$ for $u(t) \neq 0$ as forced motion. Depending on the system order, all initial conditions $y_{0}$ and their derivatives must be zero.
If the transfer function $G(s)$ is given as the Laplace-transformed GDGL, calculating the system output signal $Y(s)$ for a given input signal $U(s)$ and applying the inverse Laplace transform always yields a particular solution. The particular solution of the GDGL is usually of primary interest in control engineering.

With the help of the exponential ansatz and the resulting characteristic equation, GDGLs of higher order can also be solved. This exponential ansatz is considered a universal solution method for homogeneous GDGLs of any order with constant coefficients.

If a GDGL has order $n$, its solution contains $n$ constants of integration, for which $n$ initial conditions must be given.

The exponential ansatz is $y(t) = e^{\lambda t}$, which provides derivatives of the form $y^{(n)}(t) = \lambda^{n}\,e^{\lambda t}$.

If these relations are inserted into the homogeneous GDGL, the characteristic equation arises as a polynomial of degree $n$ in $\lambda$. For the case of distinct real zeros $\lambda_{i}$ of the characteristic polynomial, the homogeneous solution of an inhomogeneous differential equation is thus in general:

$$y_{H}(t) = C_{1}e^{\lambda_{1}t} + C_{2}e^{\lambda_{2}t} + \dotsb + C_{n}e^{\lambda_{n}t}$$

A GDGL is solved by integration, and each integration introduces integration constants $C_{i}$, whose number is determined by the order of the GDGL. The solution of a GDGL of order $n$ contains $n$ independent constants of integration. For a special (particular) solution of the GDGL, these are determined from the eigenvalues and the given initial conditions of the transmission system.

Determining the integration constants $C_{i}$ for systems of higher order (> 2) is very laborious; further information can be found in the specialist literature.

### Initial value problem and integration constants for a homogeneous 2nd order GDGL

A homogeneous GDGL of order $n$ has $n$ initial values. For the homogeneous second-order ODE with two prescribable initial values $y_{0}$ and $\dot{y}_{0}$, the coefficients $C_{1}$ and $C_{2}$ are calculated once the roots of the characteristic polynomial are known.

Each initial condition yields one equation: $y_{H}(0) = y_{0}$ and $\dot{y}_{H}(0) = \dot{y}_{0}$.

Example of a homogeneous GDGL with two real zeros $\lambda_{1} = -0.5$ and $\lambda_{2} = -1$ and initial values $y_{0} = 1$; $\dot{y}_{0} = 1$:

Solution $y_{H}(t)$ of the homogeneous second-order DGL:

$$y_{H}(t) = C_{1}e^{-0.5t} + C_{2}e^{-t}$$

Calculation of the coefficients:

$$y_{H}(0) = y_{0}: \quad 1 = C_{1} + C_{2}$$
$$\dot{y}_{H}(t) = -0.5\,C_{1}e^{-0.5t} - C_{2}e^{-t}$$
$$\dot{y}_{H}(0) = \dot{y}_{0}: \quad 1 = -0.5\,C_{1} - C_{2}$$

From the two equations, for $y_{0}$ at $t = 0$ and for $\dot{y}_{0}$ at $t = 0$, the coefficients $C_{1}$ and $C_{2}$ can be determined.

Note: the derivative is $\frac{d}{dt}\,e^{-a t} = -a\,e^{-a t}$.
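The two conditions form a small linear system for $C_{1}$ and $C_{2}$ that can be solved directly; a sketch (the function name is invented) that reproduces the values of the example above:

```python
def constants_2nd_order(lam1, lam2, y0, dy0):
    """Solve  C1 + C2 = y0  and  lam1*C1 + lam2*C2 = dy0
    for a 2nd-order homogeneous ODE with distinct real roots lam1, lam2."""
    c1 = (dy0 - y0 * lam2) / (lam1 - lam2)
    c2 = y0 - c1
    return c1, c2

# the example above: lam1 = -0.5, lam2 = -1, y0 = 1, dy0 = 1
c1, c2 = constants_2nd_order(-0.5, -1.0, 1.0, 1.0)
print(c1, c2)  # -> 4.0 -3.0, so y_H(t) = 4*exp(-0.5 t) - 3*exp(-t)
```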

Table: depending on the sign of the discriminant of the quadratic (characteristic) equation, three different cases arise for the eigenvalues $\lambda$ of the GDGL:

| Case | Solution $y_{H}(t)$ of the homogeneous linear 2nd-order differential equation with constant coefficients | Zeros | Initial value problem: determination of $C_{1}$, $C_{2}$ |
|---|---|---|---|
| Radicand > 0: 2 real zeros | $y_{H}(t) = C_{1}e^{\lambda_{1}t} + C_{2}e^{\lambda_{2}t}$ | $\lambda_{1;2} = -\frac{a_{1}}{2} \pm \sqrt{\frac{a_{1}^{2}}{4} - a_{0}}$ | $C_{1} = \frac{\dot{y}_{0} - y_{0}\lambda_{2}}{\lambda_{1} - \lambda_{2}}$, $\;C_{2} = y_{0} - C_{1}$ |
| Radicand = 0: 2 equal zeros | $y_{H}(t) = C_{1}e^{\lambda t} + C_{2}\,t\,e^{\lambda t}$ | $\lambda = \lambda_{1;2} = -\frac{a_{1}}{2}$ | $C_{1} = y_{0}$, $\;C_{2} = \dot{y}_{0} - y_{0}\lambda$ |
| Radicand < 0: conjugate complex zeros | $y_{H}(t) = e^{\alpha t}\left[C_{1}\cos(\beta t) + C_{2}\sin(\beta t)\right]$ | $\lambda_{1;2} = \alpha \pm i\beta$, $\;\alpha = -a_{1}/2$, $\;\beta = \sqrt{a_{0} - \frac{a_{1}^{2}}{4}}$ | $C_{1} = y_{0}$, $\;C_{2} = \frac{\dot{y}_{0} - \alpha y_{0}}{\beta}$ |

### Calculation example: solution of a 2nd-order GDGL with real zeros

* Homogeneous solution of the GDGL of a series connection of two PT1 elements with initial values.
* Particular solution of the GDGL for a step input.

Transfer function of a dynamic system consisting of two PT1 elements:

$$G(s) = \frac{Y(s)}{U(s)} = \frac{1}{(2s+1)(s+1)} = \frac{1}{2s^{2} + 3s + 1}$$

Associated system-describing GDGL:

$$2\ddot{y}(t) + 3\dot{y}(t) + y(t) = u(t)$$

Solved for the highest derivative:

$$\ddot{y}(t) + 1.5\,\dot{y}(t) + 0.5\,y(t) = 0.5\,u(t) \qquad \text{coefficients: } a_{1} = 1.5,\quad a_{0} = 0.5,\quad b_{0} = 0.5$$

Given: arbitrarily chosen initial values of the energy stores (integrators): $y_{0} = 1$; $\dot{y}_{0} = 1$.

Given: the input variable is a normalized step function, $u(t) = 1$ for $t > 0$.

Wanted: homogeneous solution $y_{H}(t)$ and particular solution $y_{p}(t)$.

For the homogeneous solution, $u(t) = 0$ is set.

Calculated according to the table above: there are two real zeros, $\lambda_{1} = -0.5$; $\lambda_{2} = -1$.

Calculated: the integration constants according to the table are $C_{1} = 4$; $C_{2} = -3$.

Analytical homogeneous solution according to the table for two real zeros:

$$y_{H}(t) = C_{1}e^{\lambda_{1}t} + C_{2}e^{\lambda_{2}t}$$

With the numerical values inserted, the analytical solution of the homogeneous GDGL is:

$$\underline{\underline{y_{H}(t) = 4e^{-0.5t} - 3e^{-t}}}$$

Particular solution $y_{p}(t)$: calculating the system response of the input-output behavior via the convolution integral is laborious; the solution is simpler using the Laplace transform.

### Solution using the transfer function

The general form of a differential equation with constant coefficients $a_{i}$ of the output variable $y(t)$ and coefficients $b_{i}$ of the input variable $u(t)$ is:

$$a_{n}y^{(n)}(t) + \dotsb + a_{2}\ddot{y}(t) + a_{1}\dot{y}(t) + a_{0}y(t) = b_{0}u(t) + b_{1}\dot{u}(t) + b_{2}\ddot{u}(t) + \dotsb + b_{m}u^{(m)}(t).$$

Applying the Laplace differentiation theorem to a GDGL yields algebraic equations with so-called numerator and denominator polynomials. Here $s = \delta + j\omega$ is the complex Laplace variable, which carries an exponent in place of the order of a derivative. The transfer function $G(s)$ is defined as the ratio of the output signal $Y(s)$ to the input signal $U(s)$, where the initial values of the system are zero.

$$G(s) = \frac{Y(s)}{U(s)} = \frac{b_{m}s^{m} + \dotsb + b_{2}s^{2} + b_{1}s + b_{0}}{a_{n}s^{n} + \dotsb + a_{2}s^{2} + a_{1}s + a_{0}}.$$

The time behavior of a transmission system is usually calculated from the transfer function $G(s)$ for normalized input signals $U(s)$. To calculate the step response with input signal $u(t) = 1(t)$, i.e. $U(s) := \frac{1}{s}$, the term $\frac{1}{s}$ is appended multiplicatively to the transfer function. If this is not done, the impulse response is obtained instead of the step response.

Transfer function in polynomial representation, pole zero representation and time constant representation:

The poles and zeros of the transfer function are the most important parameters of the system behavior. The poles (zeros of the denominator polynomial) are at the same time the solutions of the system and determine its time behavior; the zeros of the numerator polynomial only influence the amplitudes of the system response.

$$\begin{aligned} G(s) &= \frac{Y(s)}{U(s)} = \frac{b_{m}s^{m} + \dotsb + b_{1}s + b_{0}}{a_{n}s^{n} + a_{n-1}s^{n-1} + \dotsb + a_{1}s + a_{0}} \\ &= k \cdot \frac{(s - s_{n1})(s - s_{n2}) \dotsm (s - s_{nm})}{(s - s_{p1})(s - s_{p2}) \dotsm (s - s_{pn})} = K \cdot \frac{(T_{v1}s + 1)(T_{v2}s + 1) \dotsm}{(T_{1}s + 1)(T_{2}s + 1) \dotsm} \end{aligned}$$

The solution is obtained by partial fraction decomposition of the product representation into simple additive terms, which can easily be transformed into the time domain. Partial fraction decomposition of higher-order transfer functions is not always easy, especially when there are complex conjugate zeros.
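For distinct real poles, the partial fraction coefficients are the residues $A_{p} = \lim_{s \to p}(s - p)Y(s)$, and each term $A_{p}/(s - p)$ transforms back to $A_{p}e^{pt}$. A sketch of this residue method (names are invented), applied to the step response of the system $G(s) = \frac{1}{(2s+1)(s+1)}$ used in the example above:

```python
import math

def step_response_simple_poles(num, den_roots, den_lead):
    """Invert Y(s) = num / (den_lead * prod(s - p)) with distinct real poles p
    via the residue formula A_p = num / (den_lead * prod_{q != p} (p - q)).
    Returns y(t) as the sum of A_p * exp(p*t)."""
    residues = []
    for p in den_roots:
        prod = den_lead
        for q in den_roots:
            if q != p:
                prod *= (p - q)
        residues.append((num / prod, p))
    return lambda t: sum(a * math.exp(p * t) for a, p in residues)

# step response of G(s) = 1/((2s+1)(s+1)):  Y(s) = 1 / (2 s (s+0.5)(s+1))
y = step_response_simple_poles(1.0, [0.0, -0.5, -1.0], 2.0)
print(y(3.0), 1 - 2 * math.exp(-1.5) + math.exp(-3.0))  # both agree
```

The pole at $s = 0$ (from the step input $1/s$) contributes the constant final value; the two system poles contribute the decaying exponentials.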

Alternatively, Laplace transform tables can be used, which contain the most common corresponding equations in the time domain.

#### Particular solution of the 2nd-order GDGL with the help of the Laplace transform

The particular solution describes the transmission behavior of the system as a function of the input signal $u(t)$ and is usually of primary interest. The initial conditions $y_{0}$ and $\dot{y}_{0}$ have the value 0.

Solution of the given second-order GDGL:

$$\ddot{y}(t) + a_{1}\dot{y}(t) + a_{0}y(t) = b_{0}u(t).$$

The transfer function of a system arises, according to the differentiation theorem, by exchanging the time-dependent terms of the GDGL for their Laplace transforms. The prerequisite is that the initial conditions of the system are zero. Depending on the order of derivative of the function $y(t)$, the following Laplace transforms $Y(s)$ arise after the transformation:

$$s^{2}Y(s) + a_{1}sY(s) + a_{0}Y(s) = b_{0}U(s)$$

With the transformed terms, the transfer function $G(s)$ of the dynamic system can be set up:

$$G(s) = \frac{Y(s)}{U(s)} = \frac{b_{0}}{s^{2} + a_{1}s + a_{0}}$$

Polynomials of a transfer function are broken down into linear factors (basic polynomials: monomial, binomial and trinomial) by determining their zeros. If numerical values of the coefficients of a 2nd-order transfer function are available, the poles (zeros of the denominator of the transfer function) can be determined using the well-known quadratic formula.

Depending on the sign of the radicand of the quadratic equation, there are three different cases for the eigenvalues $s_{i}$ (the poles $s_{pi}$) of the transfer function. Below is a table of correspondences between the s-domain, $Y(s) = U(s) \cdot G(s)$, and the time domain $y(t)$ for a transformed step input $u(t) = 1(t)$, i.e. $U(s) := 1/s$.

The following basic polynomials (binomials, and trinomials for complex conjugate poles) arise depending on the zeros. The solutions of the transfer functions as step responses in the time domain are taken from a Laplace transformation table:

Laplace transformation tables may list the entries in either of two product representations, in which the different factors $a_{0}$ and $K$ must be taken into account. The conversion of the pole-zero representation into the time-constant representation, $T_{i} = 1/s_{pi}$ (with the sign convention of the factorization below), is simple; the two are algebraically identical.

Pole-zero representation (stable system) and time-constant representation:

$$G(s) = \frac{a_{0}}{(s + s_{p1})(s + s_{p2})} = \frac{K}{(T_{1}s + 1)(T_{2}s + 1)}$$

Table: calculation of the step responses $y(t)$ of a 2nd-order transmission system depending on the types of poles.

(Figure: step responses of a PT2 element for 1) two real poles and 2) two conjugate complex poles.)

| $f(s)$: 2nd-order transfer function with step input $u(t) = 1$ (multiplication by $1/s$) | $f(t)$: particular solution, step response in the time domain | Determination of the poles $s_{p1}$, $s_{p2}$ from the polynomial representation |
|---|---|---|
| 2 real poles: $Y(s) = \dfrac{1}{s(T_{1}s+1)(T_{2}s+1)}$ | $y(t) = 1 - \dfrac{1}{T_{1}-T_{2}}\left(T_{1}e^{-t/T_{1}} - T_{2}e^{-t/T_{2}}\right)$ | $s_{p1;2} = -\dfrac{a_{1}}{2} \pm \sqrt{\dfrac{a_{1}^{2}}{4} - a_{0}}$, $\;T_{1} = -\dfrac{1}{s_{p1}}$, $\;T_{2} = -\dfrac{1}{s_{p2}}$ |
| 2 identical poles: $Y(s) = \dfrac{1}{s(Ts+1)^{2}}$ | $y(t) = 1 - \dfrac{T+t}{T}\,e^{-t/T}$ | $s_{p1} = s_{p2} = -\dfrac{a_{1}}{2}$, $\;T = -\dfrac{1}{s_{p}}$ |
| Conjugate complex poles: $Y(s) = \dfrac{\omega_{0}^{2}}{s(s^{2} + 2D\omega_{0}s + \omega_{0}^{2})}$ or $Y(s) = \dfrac{1}{s(T^{2}s^{2} + 2DTs + 1)}$, with undamped angular frequency $\omega_{0}$ and $T = 1/\omega_{0}$ | $y(t) = 1 - \dfrac{1}{\sqrt{1-D^{2}}}\,e^{-D\omega_{0}t}\sin(\omega_{d}t + \phi)$, with $\omega_{d} = \omega_{0}\sqrt{1-D^{2}}$ and $\phi = \arccos(D)$ | $s_{p1;2} = -D\omega_{0} \pm j\omega_{0}\sqrt{1-D^{2}}$, damping $D = \dfrac{a_{1}}{2\sqrt{a_{0}}}$ with $-1 < D < 1$ |

If two real zeros with $T_{1} = T_{2}$ are inserted into the equation for $f(t)$, the term $1/(T_{1} - T_{2})$ becomes a division by zero, which is not permissible. Zeros already count as "distinct" if they differ in an arbitrarily late decimal place.

The overall solution of a GDGL results from the superposition of the system responses to the initial conditions and to the input signal:

$$y(t) = y_{H}(t) + y_{P}(t)$$

The particular solution of the GDGL refers to the case where the initial values $y_{0}, \dot{y}_{0}, \ddot{y}_{0}, \dotsc$ are zero and the input signal is $u(t) \neq 0$. It can be determined from the transfer function $G(s)$ by subjecting the differential equation to a Laplace transformation.

#### Calculation example: particular solution of a 2nd-order GDGL with the Laplace transformation table

Given: input signal: step function $U(s) = 1/s$.

Transfer function of the system:

$$G(s) = \frac{Y(s)}{U(s)} = \frac{1}{(2s+1)(s+1)} \qquad T_{1} = 2;\; T_{2} = 1$$

Wanted: particular solution $y_{P}(t)$ for the given transfer function.

Search term for the Laplace transformation table:

$$Y(s) = \frac{U(s)}{(2s+1)(s+1)} = \frac{1}{s(2s+1)(s+1)} \qquad T_{1} = 2;\; T_{2} = 1$$

Calculated: the analytical equation $f(t)$ for the particular solution found in the transformation table, with the coefficients inserted, is:

$$y_{p}(t) = 1 - \frac{1}{T_{1} - T_{2}}\left[T_{1}e^{-t/T_{1}} - T_{2}e^{-t/T_{2}}\right].$$

With the numerical values of the time constants inserted:

$$\underline{\underline{y_{p}(t) = 1 - \left[2e^{-t/2} - e^{-t}\right]}}.$$

For a graphical representation of the particular solution, see the penultimate figure.
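The result can be checked by substituting it back into the GDGL $2\ddot{y}(t) + 3\dot{y}(t) + y(t) = u(t)$ with $u = 1$; a short sketch using the analytic derivatives:

```python
import math

def y_p(t):
    """Particular step response from the example: y_p(t) = 1 - 2 e^{-t/2} + e^{-t}."""
    return 1.0 - 2.0 * math.exp(-t / 2) + math.exp(-t)

def residual(t):
    """Residual of 2*y'' + 3*y' + y - u with u = 1, using analytic derivatives."""
    dy = math.exp(-t / 2) - math.exp(-t)            # y_p'(t)
    ddy = -0.5 * math.exp(-t / 2) + math.exp(-t)    # y_p''(t)
    return 2 * ddy + 3 * dy + y_p(t) - 1.0

# zero initial conditions and a vanishing residual confirm the particular solution
print(y_p(0.0), residual(1.0))  # -> 0.0 and (numerically) 0.0
```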

Note: if the output variable of a transmission system contains oscillation components, the transformation tables yield complex trigonometric equations.

## Software

Some computer algebra systems (CAS) can solve differential equations, e.g.:

## Literature

• Herbert Amann: Ordinary differential equations , 2nd edition, Gruyter - de Gruyter textbooks, Berlin New York, 1995, ISBN 3-11-014582-0
• Bernd Aulbach: Ordinary differential equations , Elsevier Spectrum Akademischer Verlag, Heidelberg, 2004, ISBN 3-8274-1492-X
• Martin Hermann: Numerics of ordinary differential equations, initial and boundary value problems , Oldenbourg Verlag, Munich and Vienna, 2004, ISBN 3-486-27606-9
• Harro Heuser: Ordinary differential equations , Teubner, March 2004, ISBN 3-519-32227-7
• Edward Lindsay Ince: The Integration of Ordinary Differential Equations , Dover Publications, 1956, ISBN 0-486-60349-0
• Wolfgang Walter: Ordinary differential equations , Springer, 2000, ISBN 3-540-67642-2

## References

1. May-Britt Kallenrode, University of Osnabrück, Department of Physics: Lecture notes “Mathematics for Physicists”, Chapter: “Ordinary differential equations”, 611 pages, issued in 2007.
2. Oliver Nelles, University of Siegen: Lecture Concept Measurement and Control Engineering I, Chapter: "Laplace Transformation", 446 pages from October 8, 2009.
3. ^ Expression in Bar. Retrieved May 17, 2020 .
4. ^ Dsolve - Maple Programming Help. Retrieved May 17, 2020 .
5. Basic Algebra and Calculus - Sage Tutorial v9.0. Retrieved May 17, 2020 .
6. ^ Symbolic algebra and Mathematics with Xcas. Retrieved May 17, 2020 .