Linear system of equations

In linear algebra, a system of linear equations is a collection of linear equations in one or more unknowns that are all required to be satisfied simultaneously.

A corresponding system with three unknowns looks, for example, as follows:

\begin{aligned}
3x_1 + 2x_2 - x_3 &= 1 \\
2x_1 - 2x_2 + 4x_3 &= -2 \\
-x_1 + \tfrac{1}{2}x_2 - x_3 &= 0
\end{aligned}

For $x_1 = 1$, $x_2 = -2$, $x_3 = -2$ all three equations are fulfilled, so this triple is a solution of the system. In contrast to the solution of a single equation (a single number), a solution here consists of an $n$-tuple, in this case a triple of numbers. It is also known as the solution vector.

In general, a linear system of equations with $m$ equations and $n$ unknowns can always be written in the following form:

\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}

Systems of linear equations are called homogeneous if all $b_i$ are equal to 0, otherwise inhomogeneous. Homogeneous systems always have at least the so-called trivial solution, in which all variables equal 0. For inhomogeneous systems, on the other hand, it can happen that no solution exists at all.

Example

[Figure] The graphs of the two equations, which intersect at the point A(46; 16).

Linear systems of equations often arise as models of practical tasks. A typical example from school mathematics is as follows:

“A father and a son are together 62 years old. Six years ago the father was four times as old as the son was then. How old is each of them?”

It can be described by the following system of linear equations:

\begin{aligned}
x + y &= 62 \\
x - 6 &= 4(y - 6)
\end{aligned}

The variable $x$ here represents the age of the father and the variable $y$ that of the son. In a first step, the system of equations is usually brought into a standard form in which only terms with variables stand on the left and the pure numbers on the right. In this example, the second equation is multiplied out and rearranged:

\begin{aligned}
x + y &= 62 \\
x - 4y &= -18
\end{aligned}

To solve this system of equations, any of a variety of methods (see Solution methods) can be used. The addition method is used here as an example. To eliminate the variable $x$ first, the first equation is subtracted from the second:

$$(x - 4y) - (x + y) = -18 - 62 \quad\Longrightarrow\quad -5y = -80$$

The resulting equation is solved for the variable $y$ by dividing both sides by $-5$. That gives the age of the son, $y = 16$. This value for $y$ is substituted back into the first equation, giving $x + 16 = 62$.

Solving this equation for the variable $x$ yields the age of the father, $x = 46$.
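
The same standard-form system can also be handed to a numerical routine. A minimal sketch in Python using NumPy (the array layout mirrors the two equations above):

    import numpy as np

    # Coefficients of the standard form: x + y = 62 and x - 4y = -18
    A = np.array([[1.0,  1.0],
                  [1.0, -4.0]])
    b = np.array([62.0, -18.0])

    x, y = np.linalg.solve(A, b)
    print(x, y)  # 46.0 16.0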

Matrix form

For the treatment of linear systems of equations it is useful to combine all coefficients $a_{ij}$ into a matrix, the so-called coefficient matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

Furthermore, all unknowns and the right-hand side of the system of equations can be combined into single-column matrices (column vectors):

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$

A linear system of equations can thus be written compactly using matrix-vector multiplication as

$$A x = b$$

The coefficients $a_{ij}$, the unknowns $x_j$, and the right-hand sides $b_i$ all come from the same field $K$. In particular,

$$A \in K^{m \times n}, \qquad x \in K^n \qquad \text{and} \qquad b \in K^m.$$

It is not necessary to specify the unknowns in order to define a linear system of equations. It suffices to specify the augmented coefficient matrix that results when a column with the right-hand side $b$ is appended to the coefficient matrix $A$:

$$(A \mid b) = \left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right)$$
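
For illustration, the coefficient matrix, right-hand side, and augmented matrix of the introductory three-unknown example might be set up as follows; a sketch using NumPy:

    import numpy as np

    # Coefficient matrix A and right-hand side b of the introductory example
    A = np.array([[ 3.0,  2.0, -1.0],
                  [ 2.0, -2.0,  4.0],
                  [-1.0,  0.5, -1.0]])
    b = np.array([1.0, -2.0, 0.0])

    # Augmented coefficient matrix (A | b): append b as an extra column
    Ab = np.column_stack([A, b])

    print(np.linalg.solve(A, b))  # [ 1. -2. -2.]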

Solvability

A vector $x$ is a solution of the linear system of equations if $Ax = b$ holds. Whether and how many solutions a system has varies from case to case. For linear systems over an infinite field $K$, three cases can arise:

  • The system of linear equations has no solution; that is, the solution set is the empty set.
  • The system of linear equations has exactly one solution, i.e. the solution set contains exactly one element.
  • The system of linear equations has an infinite number of solutions. In this case, the solution set contains an infinite number of n-tuples that satisfy all equations of the system.

Over a finite field $K$, the number of solutions is a power of the cardinality of $K$.

Solvability criteria

A linear system of equations is solvable if and only if the rank of the coefficient matrix $A$ equals the rank of the augmented coefficient matrix $(A \mid b)$ (Kronecker–Capelli theorem). If the rank of the coefficient matrix equals the rank of the augmented coefficient matrix and also equals the number of unknowns, the system of equations has exactly one solution.

For a square system of equations, i.e. in the case $m = n$ (see below), the determinant provides information about solvability. The system is uniquely solvable if and only if the determinant of the coefficient matrix is nonzero. If the determinant is zero, however, solvability depends on the values of the secondary determinants: in each of these, one column of the coefficient matrix is replaced by the right-hand side column (the vector $b$). Only if all secondary determinants are zero can the system have infinitely many solutions; otherwise the system of equations is unsolvable.
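
The rank criterion can be checked mechanically. A sketch, assuming floating-point rank computation is reliable for the matrix at hand (the function name classify is illustrative):

    import numpy as np

    def classify(A, b):
        """Solvability via the Kronecker-Capelli criterion (rank comparison)."""
        n = A.shape[1]
        rank_A = np.linalg.matrix_rank(A)
        rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
        if rank_A < rank_Ab:
            return "no solution"
        if rank_A == n:
            return "exactly one solution"
        return "infinitely many solutions"

    A = np.array([[1.0, 1.0],
                  [1.0, -4.0]])
    b = np.array([62.0, -18.0])
    print(classify(A, b))  # exactly one solution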

In particular, systems of equations with more equations than unknowns, so-called overdetermined systems of equations, often have no solution. For example, the following system has no solution, since both equations cannot be satisfied at the same time:

\begin{aligned}
x_1 + x_2 &= 1 \\
x_1 + x_2 &= 2
\end{aligned}

Approximate solutions of overdetermined systems of equations are then usually defined and computed by means of adjustment calculus (least squares).
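
For a contradictory system like the one above, a least-squares solution can be computed instead of an exact one. A minimal sketch using NumPy:

    import numpy as np

    # The contradictory system: x1 + x2 = 1 and x1 + x2 = 2
    A = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
    b = np.array([1.0, 2.0])

    # lstsq minimizes ||Ax - b||; no exact solution exists here
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x)  # [0.75 0.75] -> x1 + x2 = 1.5 splits the difference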

A linear system of equations can have infinitely many solutions only if there are fewer linearly independent equations than unknowns and the underlying field $K$ contains infinitely many elements. For example, the following system (consisting of a single equation) has infinitely many solutions, namely all vectors with $x_2 = 1 - x_1$:

$$x_1 + x_2 = 1$$

Solution set

The solution set of a linear system of equations consists of all vectors $x$ for which $Ax = b$ is fulfilled:

$$L = \{ x \mid Ax = b \}$$

For a homogeneous linear system of equations, the solution set is a subspace of $K^n$. Thus the superposition property holds: any linear combination $\alpha x_1 + \beta x_2$ (with arbitrary $\alpha, \beta \in K$) of solutions $x_1, x_2$ is again a solution of the system. The solution set is therefore also called the solution space; it is identical to the kernel of the matrix $A$. If $r$ denotes the rank of $A$, then by the rank-nullity theorem the dimension of the solution space equals $n - r$, the defect of $A$.

If the solution set of an inhomogeneous linear system of equations is not empty, then it is an affine subspace of $K^n$. It then has the form $v + U$, where $U$ is the solution space of the associated homogeneous system and $v$ is an arbitrary solution of the inhomogeneous system. An inhomogeneous system is therefore uniquely solvable if and only if the zero vector is the only solution (the “trivial solution”) of the homogeneous system. In particular, either $L = \emptyset$ or $L = v + U$ with $v \in L$.
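
The decomposition “particular solution plus homogeneous solution space” can be made concrete numerically. A sketch using SciPy, whose null_space routine returns an orthonormal basis of the kernel:

    import numpy as np
    from scipy.linalg import null_space

    # A single equation in two unknowns: x1 + x2 = 1
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    v = np.linalg.lstsq(A, b, rcond=None)[0]  # one particular solution
    U = null_space(A)                         # basis of the homogeneous solution space

    # Every solution has the form v + U @ t for some parameter vector t
    t = np.array([2.0])
    x = v + U @ t
    print(A @ x)  # [1.] -- the combined vector still solves the system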

The solution set of a linear system of equations does not change if one of the three elementary row operations is performed:

  1. Swapping two rows
  2. Multiplying a row by a nonzero number
  3. Adding a row (or a multiple of a row) to another row

The solution set of a square linear system of equations likewise does not change if the system is multiplied by a regular (invertible) matrix, as the following sketch illustrates.
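
Each elementary row operation corresponds to left-multiplying the system by an invertible matrix, which is why the solution set is preserved. A small sketch:

    import numpy as np

    A = np.array([[1.0,  1.0],
                  [1.0, -4.0]])
    b = np.array([62.0, -18.0])

    # Elementary matrix E: subtract row 1 from row 2
    E = np.array([[ 1.0, 0.0],
                  [-1.0, 1.0]])
    A2, b2 = E @ A, E @ b

    print(np.linalg.solve(A, b))    # [46. 16.]
    print(np.linalg.solve(A2, b2))  # [46. 16.] -- same solution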

Determination via the augmented coefficient matrix

The form of the solution set can in principle be determined with the help of the augmented coefficient matrix, by bringing it into row echelon form using elementary row operations (see Gaussian elimination):

$$\left(\begin{array}{ccccc|c}
c_{11} & c_{12} & \cdots & \cdots & c_{1n} & d_1 \\
0 & c_{22} & \cdots & \cdots & c_{2n} & d_2 \\
\vdots & \ddots & \ddots & & \vdots & \vdots \\
0 & \cdots & c_{rr} & \cdots & c_{rn} & d_r \\
0 & \cdots & 0 & \cdots & 0 & d_{r+1} \\
\vdots & & \vdots & & \vdots & \vdots \\
0 & \cdots & 0 & \cdots & 0 & d_m
\end{array}\right)$$

To always obtain exactly this form, one sometimes has to swap columns. Swapping columns changes the order of the variables, which must be accounted for at the end. It is also assumed here that the pivot coefficients $c_{11}, \dots, c_{rr}$ are nonzero.

The number of solutions can then be read off from the $d_i$:

  • If at least one of $d_{r+1}, \dots, d_m$ is nonzero, there is no solution.
  • If all $d_{r+1}, \dots, d_m$ are zero (or $r = m$), then:
    • If $r = n$, the system of equations is uniquely solvable.
    • If $r < n$, there are infinitely many solutions. The solution space has dimension $n - r$.

With further elementary row operations (see Gauss–Jordan method) the matrix can be brought into the following form:

$$\left(\begin{array}{cccccc|c}
1 & & 0 & f_{1,r+1} & \cdots & f_{1,n} & g_1 \\
& \ddots & & \vdots & & \vdots & \vdots \\
0 & & 1 & f_{r,r+1} & \cdots & f_{r,n} & g_r \\
0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots & & \vdots & \vdots
\end{array}\right)$$

If there is a solution at all ($d_{r+1} = \cdots = d_m = 0$), the following applies to the solution set $L$:

$$L = \left\{ \begin{pmatrix} g - F t \\ t \end{pmatrix} \;\middle|\; t \in K^{n-r} \right\}$$

where $F$ denotes the matrix of the entries $f_{ij}$ and $g$ the vector of the $g_i$.

Here $t$ is the vector of the free variables.
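
The reduced form and the free variables can also be obtained mechanically. A sketch using SymPy's exact rref on an illustrative underdetermined system (pivot columns mark the bound variables; the remaining columns are free):

    from sympy import Matrix

    # Augmented matrix of the system x1 + x2 + x3 = 2, x2 - x3 = 3
    Ab = Matrix([[1, 1,  1, 2],
                 [0, 1, -1, 3]])

    R, pivots = Ab.rref()
    print(R)       # Matrix([[1, 0, 2, -1], [0, 1, -1, 3]])
    print(pivots)  # (0, 1): x1 and x2 are bound, x3 is free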

Forms of systems of equations

Systems of linear equations can occur in forms in which they are easy to solve. In many cases an arbitrary system is first brought into a suitable form by an algorithm, and a solution is then determined.

Square

We speak of a square system of equations if the number of unknowns equals the number of equations. A system of this form can be solved uniquely if the rows or columns are linearly independent (solution methods are discussed below).

Row echelon form

In row echelon form (also called echelon form, staircase form, or step form), the number of unknowns is reduced by at least one in each row, and those unknowns then no longer occur in the subsequent rows. Using Gaussian elimination, any system of equations can be brought into this form.

Example (the coefficients of omitted entries are $0$):

\begin{aligned}
x_1 + x_2 + x_3 &= 2 \\
x_2 - x_3 &= 3
\end{aligned}

Linear systems of equations in row echelon form can be solved by back substitution: starting with the last row, the unknown there is computed and the result is inserted into the row above to compute the next unknown.

Solution of the above example:

  1. Solve the second row for $x_2$: $x_2 = 3 + x_3$.
  2. Insert into the first row: $x_1 + (3 + x_3) + x_3 = 2$.
  3. Solve the first row for $x_1$: $x_1 = -1 - 2x_3$.
  4. With $x_3 = t$, all vectors of the form $(-1 - 2t,\; 3 + t,\; t)^T$ are solutions of the system.

Triangular form

The triangular form is a special case of row echelon form in which each row has exactly one unknown fewer than the row before it. This means that all coefficients on the main diagonal are different from $0$. The triangular form arises from Gaussian elimination when the system of equations has exactly one solution.

Example (the coefficients of omitted entries are $0$):

\begin{aligned}
x_1 + 2x_2 + 3x_3 &= 14 \\
x_2 + 2x_3 &= 8 \\
x_3 &= 3
\end{aligned}

Like systems in row echelon form, systems in triangular form can be solved by back substitution; a small implementation sketch follows.
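
A back-substitution routine for a square triangular system might look as follows; a sketch that assumes nonzero diagonal entries, applied to the example above:

    import numpy as np

    def back_substitution(U, b):
        """Solve U x = b for upper-triangular U with nonzero diagonal."""
        n = len(b)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            # Subtract the contributions of the already-known unknowns,
            # then divide by the diagonal (pivot) entry.
            x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    # The triangular example above
    U = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 2.0],
                  [0.0, 0.0, 1.0]])
    b = np.array([14.0, 8.0, 3.0])
    print(back_substitution(U, b))  # [1. 2. 3.]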

Reduced row echelon form

The reduced row echelon form (also called normalized row echelon form) is a special case of row echelon form in which the leading unknown of each row occurs in no other row and has the coefficient $1$. The reduced row echelon form of a linear system is unambiguous: for every linear system of equations there is exactly one reduced row echelon form. Using the Gauss–Jordan algorithm, any linear system of equations can be brought into this form.

Example (the coefficients of omitted entries are $0$):

\begin{aligned}
x_1 + 2x_3 &= -1 \\
x_2 - x_3 &= 3
\end{aligned}

The solution of the linear system can now be read off directly: if $x_3 = t$ is set and the system is solved recursively, all vectors of the form $(-1 - 2t,\; 3 + t,\; t)^T$ result as solutions.

Other forms

In practice, two special cases are relevant: sparse matrices (very large matrices with relatively few nonzero entries) and band matrices (likewise large matrices whose nonvanishing entries are concentrated around the main diagonal). Both can be treated with specially adapted solution methods (see below).

Solution methods

Methods for solving systems of linear equations are divided into iterative and direct methods. Examples of direct methods are the substitution method, the equating method, and the addition method for simple systems, as well as Gaussian elimination, which is based on the addition method and brings a system into row echelon form. A variant of the Gaussian method is the Cholesky decomposition, which works only for symmetric, positive definite matrices. The QR decomposition is more stable but requires about twice the effort of the Gaussian method. Cramer's rule uses determinants to produce formulas for the solution of a square linear system when it has a unique solution; however, it is unsuitable for numerical computation because of its high computational cost.

Iterative methods include, for example, the Gauss–Seidel and Jacobi methods, which belong to the class of splitting methods. These do not converge for every matrix and are very slow for many practical problems. More modern methods include preconditioned Krylov subspace methods, which are particularly fast for large sparse matrices, and multigrid methods for solving systems that come from the discretization of certain partial differential equations.
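
For illustration, a direct and an iterative solver applied to a small symmetric positive definite system; a sketch using SciPy (for a matrix this small the direct method is of course the obvious choice):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve
    from scipy.sparse.linalg import cg

    # A small symmetric positive definite system
    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    # Direct method: Cholesky factorization, then two triangular solves
    c, low = cho_factor(A)
    print(cho_solve((c, low), b))

    # Iterative method: conjugate gradients, a Krylov subspace method
    x, info = cg(A, b)
    print(x, info)  # info == 0 signals convergence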

In applications (e.g. in geodesy), measurements of different types are made and, to reduce the effect of measurement errors, more measurements are taken than there are unknowns to determine. Each measurement provides an equation for determining the unknowns. If these equations are not all linear, the system is linearized using known approximate values of the unknowns; then, instead of the unknowns themselves, their small deviations from the approximate values are determined. Usually, when there are more equations than unknowns, the equations contradict each other, so there is no exact solution. As a way out, a solution is then usually determined by adjustment using the method of least squares, which typically does not satisfy any equation exactly but provides an optimal approximation of the “true” measured quantities under reasonable assumptions about the measurement errors.

The currently best known asymptotic upper bound on the number of arithmetic operations needed to solve an arbitrary linear system is given by a practically inapplicable algorithm of Don Coppersmith and Shmuel Winograd from 1990, which solves a system in O(n^2.376) operations. It is clear that at least O(n^2) operations are necessary; whether this lower bound can actually be attained is not known.

Nearly singular linear systems of equations can be solved acceptably in a numerical way by means of the singular value decomposition.
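
A sketch of this approach using NumPy: the pseudoinverse is computed via the SVD, and small singular values below the rcond cutoff are discarded, yielding a stable approximate solution.

    import numpy as np

    # Nearly singular: the second row is almost a multiple of the first
    A = np.array([[1.0, 2.0],
                  [2.0, 4.000001]])
    b = np.array([3.0, 6.0])

    # Pseudoinverse via SVD; rcond discards singular values below the cutoff
    x = np.linalg.pinv(A, rcond=1e-5) @ b
    print(x)          # well-behaved approximate solution
    print(A @ x - b)  # small residual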

Literature

  • G. Frobenius: On the theory of linear equations. In: Journal for pure and applied mathematics (= Crelle's Journal), Vol. 129, 1905, ISSN 0075-4102, pp. 175–180 (digitized version).
  • Andreas Meister: Numerics of linear systems of equations. An introduction to modern procedures. 2nd, revised edition. Vieweg, Wiesbaden 2005, ISBN 3-528-13135-7 .
  • Falko Lorenz: Linear Algebra. Volume 1, 4th edition. Spektrum Akademischer Verlag, Heidelberg et al. 2003, ISBN 3-8274-1406-7 .
  • Gerd Fischer : Linear Algebra. 15th, improved edition. Vieweg, Wiesbaden 2005, ISBN 3-8348-0031-7 .

References

  1. ^ Gene H. Golub , Charles F. Van Loan: Matrix Computations. 3rd edition, reprint. Johns Hopkins University Press, Baltimore MD et al. 1996, ISBN 0-8018-5414-8 .