# Gaussian elimination method

The Gaussian elimination method, or simply Gauss method (after Carl Friedrich Gauß), is an algorithm from the mathematical subfields of linear algebra and numerical analysis. It is an important method for solving systems of linear equations and is based on the fact that elementary row operations change the system of equations but preserve its solution set. This allows every uniquely solvable system of equations to be brought to echelon form, in which the solution can easily be determined by successive elimination of the unknowns, or the solution set can be read off.

The number of operations required for an ${\displaystyle n\times n}$ matrix is of order ${\displaystyle n^{3}}$. In its basic form the algorithm is prone to rounding errors from a numerical point of view, but with small modifications (pivoting) it is the standard solution method for general systems of linear equations and is part of all major program libraries for numerical linear algebra such as NAG, IMSL and LAPACK.

## Explanation

A linear system of equations ${\displaystyle Ax=b}$ with three equations, three unknowns ${\displaystyle x=(x_{1},~x_{2},~x_{3})^{T}}$ and right-hand side ${\displaystyle b=(b_{1},~b_{2},~b_{3})^{T}}$ has the form:

${\displaystyle {\begin{array}{rcrcrcl}a_{11}x_{1}&+&a_{12}x_{2}&+&a_{13}x_{3}&=&b_{1},\\a_{21}x_{1}&+&a_{22}x_{2}&+&a_{23}x_{3}&=&b_{2},\\a_{31}x_{1}&+&a_{32}x_{2}&+&a_{33}x_{3}&=&b_{3}.\end{array}}}$

The algorithm for computing the variables ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$ and ${\displaystyle x_{3}}$ can be divided into two stages:

1. Forward elimination,
2. Back substitution.

In the first step, the system of equations is brought into echelon form. Echelon form means that each row contains at least one fewer variable than the one above, i.e. at least one variable is eliminated. In the above system of equations, ${\displaystyle a_{21}}$, ${\displaystyle a_{31}}$ and ${\displaystyle a_{32}}$ would be eliminated; in the third row only the variable ${\displaystyle x_{3}}$ remains:

${\displaystyle {\begin{array}{rcrcrcl}{\tilde {a}}_{11}x_{1}&+&{\tilde {a}}_{12}x_{2}&+&{\tilde {a}}_{13}x_{3}&=&{\tilde {b}}_{1},\\&&{\tilde {a}}_{22}x_{2}&+&{\tilde {a}}_{23}x_{3}&=&{\tilde {b}}_{2},\\&&&&{\tilde {a}}_{33}x_{3}&=&{\tilde {b}}_{3}.\end{array}}}$

Elementary row operations are used to achieve the echelon form; they transform the system of equations into a new one that has the same solution set. Two types of elementary row operation are sufficient:

1. Add a row, or a multiple of a row, to another row.
2. Swap two rows.

The procedure then starts in the first column with operations of the first type: by adding suitable multiples of the first row, all entries below the first are made zero. This is then continued in the second column of the matrix modified in this way, this time adding multiples of the second row to the rows below, and so on. This step only works if the diagonal element of the current column is not zero. In that case an operation of the second type is needed: by interchanging rows, a non-zero entry can be brought onto the diagonal. With these two types of operations, any linear system of equations can be brought into echelon form.

Another type of elementary operation is the swapping of columns. This is not required to carry out the algorithm, but is sometimes used in computer programs for reasons of stability; it changes the position of the variables in the equation system. When calculating by hand, it is sometimes useful to multiply a row by a number, for example to avoid complicated fractions. This causes additional computational effort and is therefore not done in computer programs; it also changes the determinant of the coefficient matrix, which has theoretical disadvantages.

In the second step of the procedure, back substitution, the variables are computed starting from the last row, in which only one variable appears, and inserted into the rows above.

An alternative to this is the Gauss-Jordan algorithm, in which not only the entries below the diagonal are eliminated but also those above, so that a diagonal form results and the second step mentioned above is omitted.
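The two stages can be sketched in plain Python (the document itself works with pseudocode; the function name `gauss_solve` and the zero-check strategy are illustrative assumptions, not a fixed library API):

```python
def gauss_solve(A, b):
    """Solve A x = b for a uniquely solvable system: forward elimination
    to echelon form, then back substitution. Swaps rows when a zero
    appears on the diagonal."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # stage 1: forward elimination
    for i in range(n - 1):
        if A[i][i] == 0:
            # operation of the second type: swap with a row below
            p = next(r for r in range(i + 1, n) if A[r][i] != 0)
            A[i], A[p] = A[p], A[i]
            b[i], b[p] = b[p], b[i]
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    # stage 2: back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# the worked example of this article: solution (5, -6, 3)
print(gauss_solve([[1, 2, 3], [1, 1, 1], [3, 3, 1]], [2, 2, 0]))
```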

### Example

1. ${\displaystyle x_{1}+2x_{2}+3x_{3}=2}$, here: ${\displaystyle a_{11}=1}$, ${\displaystyle a_{12}=2}$, ${\displaystyle a_{13}=3}$ and ${\displaystyle b_{1}=2}$
2. ${\displaystyle x_{1}+x_{2}+x_{3}=2}$
3. ${\displaystyle 3x_{1}+3x_{2}+x_{3}=0}$

For better clarity, the coefficients ${\displaystyle a_{ij}}$ of the linear equation system are written into the coefficient matrix, extended with ${\displaystyle b}$:

${\displaystyle \left({\begin{array}{ccc|c}1&2&3&2\\1&1&1&2\\3&3&1&0\end{array}}\right)}$

Now it is transformed so that ${\displaystyle a_{21}}$ and ${\displaystyle a_{31}}$ become zero by adding appropriate multiples of the first equation to the second and third equations. The corresponding multiplier is obtained by dividing the element to be eliminated (first ${\displaystyle a_{21}=1}$) by the pivot element ${\displaystyle a_{11}}$ (here: ${\displaystyle {\tfrac {1}{1}}=1}$ and ${\displaystyle {\tfrac {3}{1}}=3}$). Since the two elements ${\displaystyle a_{21}}$ and ${\displaystyle a_{31}}$ are to become zero, the two multipliers are each multiplied by ${\displaystyle (-1)}$.

The ${\displaystyle (-1)}$-fold of the first row is added to the second row, and the ${\displaystyle (-3)}$-fold of the first row is added to the third row:

${\displaystyle \left({\begin{array}{ccc|c}1&2&3&2\\0&-1&-2&0\\0&-3&-8&-6\end{array}}\right)}$

To make ${\displaystyle a_{32}}$ zero, a multiple of the second row is added to the third row, in this case the ${\displaystyle (-3)}$-fold:

${\displaystyle \left({\begin{array}{ccc|c}1&2&3&2\\0&-1&-2&0\\0&0&-2&-6\end{array}}\right)}$

If the number by which one divides to compute the multiplier (here the number ${\displaystyle 1}$ for the first two rows, the number ${\displaystyle -1}$ for the third) is zero, this row is swapped with one below it. The last row means

${\displaystyle -2x_{3}=-6}$.

This equation is easily solved and yields ${\displaystyle x_{3}=3}$. For the second row this results in

${\displaystyle -1x_{2}-2x_{3}=0}$, with ${\displaystyle x_{3}=3}$ so ${\displaystyle x_{2}=-6}$,

and further ${\displaystyle x_{1}=5}$. With that, all variables are calculated:

${\displaystyle x_{1}=5}$, ${\displaystyle x_{2}=-6}$ and ${\displaystyle x_{3}=3}$.

### Checking by row sums

The transformations can be checked by calculating the row sums.

Here, the sum of all elements of the respective row is written in an additional last column. For the first row the row sum is ${\displaystyle 1+2+3+2=8}$. Since no transformations are carried out on the first row, its row sum does not change. In the first transformation of this system of equations, the ${\displaystyle (-1)}$-fold of the first row is added to the second. Doing the same for the row sum gives ${\displaystyle 5+(-1)\cdot 8=-3}$. This result is the row sum of the transformed second row: ${\displaystyle -1-2+0=-3}$. To check the calculations, one can therefore carry out the transformations on the row sums as well. If all calculations are correct, the row sum of the transformed row must result.
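The check can be demonstrated in a few lines of plain Python (variable names are illustrative), using the rows of the example above:

```python
row1 = [1, 2, 3, 2]   # first row of the extended matrix [A | b]
row2 = [1, 1, 1, 2]   # second row

s1 = sum(row1)        # row sum of row 1: 8
s2 = sum(row2)        # row sum of row 2: 5

# transformation: add the (-1)-fold of row 1 to row 2
new_row2 = [r2 - r1 for r1, r2 in zip(row1, row2)]

# carrying out the same transformation on the row sums ...
predicted = s2 - s1   # 5 + (-1)*8 = -3

# ... must reproduce the row sum of the transformed row
assert sum(new_row2) == predicted == -3
```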

## Pivoting

In general, the Gaussian elimination method cannot be carried out without row interchanges. If in the above example ${\displaystyle a_{11}=1}$ is replaced by ${\displaystyle a_{11}=0}$, the algorithm cannot start without swapping rows. To remedy this, one chooses an element of the first column of the coefficient matrix, the so-called pivot element, which is not equal to 0.

| ${\displaystyle x_{1}}$ | ${\displaystyle x_{2}}$ | ${\displaystyle x_{3}}$ | ${\displaystyle b}$ |
|---|---|---|---|
| 0 | 2 | 3 | 4 |
| 1 | 1 | 1 | 2 |
| 3 | 3 | 1 | 0 |

Then the first row is swapped with the pivot row:

| ${\displaystyle x_{1}}$ | ${\displaystyle x_{2}}$ | ${\displaystyle x_{3}}$ | ${\displaystyle b}$ |
|---|---|---|---|
| 1 | 1 | 1 | 2 |
| 0 | 2 | 3 | 4 |
| 3 | 3 | 1 | 0 |

For calculation by hand, it is helpful to choose a 1 or a −1 as the pivot element so that no fractions occur in the further course of the process. For calculation with a computer, it makes sense to choose the element with the largest absolute value in order to obtain the most stable algorithm possible. If the pivot element is chosen in the current column, one speaks of column pivoting. Alternatively, the pivot can also be selected in the current row.

| ${\displaystyle x_{1}}$ | ${\displaystyle x_{2}}$ | ${\displaystyle x_{3}}$ | ${\displaystyle b}$ |
|---|---|---|---|
| 0 | 2 | 3 | 4 |
| 1 | 1 | 1 | 2 |
| 3 | 3 | 1 | 0 |

In this case the columns are swapped accordingly.

| ${\displaystyle x_{2}}$ | ${\displaystyle x_{1}}$ | ${\displaystyle x_{3}}$ | ${\displaystyle b}$ |
|---|---|---|---|
| 2 | 0 | 3 | 4 |
| 1 | 1 | 1 | 2 |
| 3 | 3 | 1 | 0 |

In back substitution it must then be noted that the variables have changed their position in the system of equations. If the element with the largest absolute value of the entire remaining matrix is chosen as the pivot, one speaks of complete or total pivoting. In general, both rows and columns must be swapped for this.

Pivoting can be carried out without significant additional effort if, instead of swapping the entries of the matrix and the right-hand side, the swaps are stored in an index vector.
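This idea can be sketched in plain Python (an illustrative sketch; the function name and the representation of the index vector `perm` are assumptions, not a fixed convention). The stored rows of the matrix are never moved; only the index vector records which stored row currently plays the role of logical row i:

```python
def eliminate_with_index_vector(A):
    """Forward elimination with column pivoting; row swaps are recorded
    in an index vector instead of moving matrix entries."""
    n = len(A)
    A = [row[:] for row in A]
    perm = list(range(n))          # perm[i] = stored index of logical row i
    for i in range(n - 1):
        # pivot search: largest absolute value in column i, logical rows i..n-1
        p = max(range(i, n), key=lambda r: abs(A[perm[r]][i]))
        perm[i], perm[p] = perm[p], perm[i]   # swap only the indices
        for k in range(i + 1, n):
            m = A[perm[k]][i] / A[perm[i]][i]
            for j in range(i, n):
                A[perm[k]][j] -= m * A[perm[i]][j]
    return A, perm

A, perm = eliminate_with_index_vector([[0.0, 2.0, 3.0],
                                       [1.0, 1.0, 1.0],
                                       [3.0, 3.0, 1.0]])
print(perm)  # → [2, 0, 1]: the rows of A were never moved, only perm changed
```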

## LR decomposition

### Introductory example

If one wants to implement the solution of a square, uniquely solvable system of equations ${\displaystyle Ax=b}$ as a computer program, it is advisable to interpret the Gaussian algorithm as an LR decomposition (also called LU decomposition or triangular decomposition). This is a decomposition of the regular matrix ${\displaystyle A}$ into the product of a lower left, normalized triangular matrix ${\displaystyle L}$ ("left", English "lower") and an upper right triangular matrix ${\displaystyle R}$ ("right", English "upper", then labeled ${\displaystyle U}$). The following example shows this:

${\displaystyle A={\begin{pmatrix}1&2&3\\1&1&1\\3&3&1\end{pmatrix}}={\begin{pmatrix}1&0&0\\1&1&0\\3&3&1\end{pmatrix}}\cdot {\begin{pmatrix}1&2&3\\0&-1&-2\\0&0&-2\end{pmatrix}}=L\cdot R}$

The matrix ${\displaystyle L}$ is used to store the required transformation steps, which correspond to multiplications with Frobenius matrices, and ${\displaystyle R}$ has the above-mentioned echelon form. This shows the existence of the decomposition. For uniqueness, the diagonal elements of the matrix ${\displaystyle L}$ are set to 1. Saving the transformation steps has the advantage that the system of equations can then be solved efficiently for different right-hand sides ${\displaystyle b}$ by forward and backward substitution.
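A quick check in plain Python that the decomposition shown above is indeed a factorization of A: multiplying L and R must reproduce A (no row swaps are needed here, so no permutation matrix appears):

```python
L = [[1, 0, 0],
     [1, 1, 0],
     [3, 3, 1]]
R = [[1, 2, 3],
     [0, -1, -2],
     [0, 0, -2]]

# matrix product L * R
LR = [[sum(L[i][k] * R[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]

assert LR == [[1, 2, 3], [1, 1, 1], [3, 3, 1]]  # equals A
```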

The generally required interchanges of rows can be described by a permutation matrix ${\displaystyle P}$:

${\displaystyle P\cdot A=L\cdot R.}$

### Existence theorem

For every regular matrix ${\displaystyle A\in \mathbb {R} ^{n\times n}}$ there exist a permutation matrix ${\displaystyle P\in \mathbb {R} ^{n\times n}}$, a lower, normalized triangular matrix ${\displaystyle L\in \mathbb {R} ^{n\times n}}$ and an upper triangular matrix ${\displaystyle R\in \mathbb {R} ^{n\times n}}$ such that:

${\displaystyle P\cdot A=L\cdot R}$.

A permutation matrix ${\displaystyle P}$ is a matrix that is created from the identity matrix by any number of row interchanges and thus still consists only of zeros and ones.

### Algorithm

The algorithm for computing the matrices ${\displaystyle P,L,R}$ for a given ${\displaystyle A\in \mathbb {R} ^{n\times n}}$ is as follows.

In total ${\displaystyle n}$ stages are carried out (matrix transformations ${\displaystyle k=0,\ldots ,n-1}$). The transformed matrices ${\displaystyle A^{(k)}}$ are introduced:

${\displaystyle A^{(k)}=\left(a_{ij}^{(k)}\right)={\begin{cases}A&,\quad k=0\\(I-L^{(k)})P^{(k)}A^{(k-1)}&,\quad k=1,\ldots ,n-1.\end{cases}}}$

Here the auxiliary matrices ${\displaystyle L^{(k)},P^{(k)}}$ are defined by:

{\displaystyle {\begin{aligned}\left(L^{(k)}\right)_{ij}&={\begin{cases}{\frac {a_{ik}^{(k-1)}}{a_{kk}^{(k-1)}}}&,\quad j=k\;\wedge \;i>k\\0&,\quad {\text{otherwise}}\end{cases}}\\P^{(k)}&=(e_{1},\ldots ,e_{k-1},e_{\hat {k}},e_{k+1},\ldots ,e_{{\hat {k}}-1},e_{k},e_{{\hat {k}}+1},\ldots ,e_{n})\\{\hat {k}}&\in \{k,\ldots ,n\}\quad {\text{so that}}\quad \left|a_{{\hat {k}}k}^{(k-1)}\right|=\max {\left\lbrace \left|a_{ik}^{(k-1)}\right|\colon i=k,\ldots ,n\right\rbrace }.\end{aligned}}}

Note:

• ${\displaystyle e_{i}\in \mathbb {R} ^{n}}$ denotes the ${\displaystyle i}$-th unit vector.
• In ${\displaystyle P^{(k)}}$ only one row of the identity matrix has been swapped with another.
• ${\displaystyle {\hat {k}}}$ must be chosen so that ${\displaystyle a_{{\hat {k}}k}^{(k-1)}}$ has the maximum absolute value among all elements of the ${\displaystyle k}$-th column part.

Further auxiliary matrices ${\displaystyle Q^{(k)}}$ are needed:

${\displaystyle Q^{(k)}={\begin{cases}I&,\quad k=n\\P^{(n-1)}\cdot \ldots \cdot P^{(k)}&,\quad k<n.\end{cases}}}$

The desired matrices can now be specified:

{\displaystyle {\begin{aligned}R&=A^{(n-1)}\\P&=Q^{(1)}\\L&=\left(I+\sum _{k=1}^{n-1}Q^{(k+1)}L^{(k)}\right).\end{aligned}}}

### Algorithm in pseudocode

#### Without pivoting, ${\displaystyle L,R}$ out-of-place

The following algorithm performs an LR decomposition of the matrix A without pivoting, generating L and R simultaneously outside of A (out-of-place):

   Input: matrix A

   // initialization
   R := A
   L := E_n

   // n-1 iteration steps
   for i := 1 to n-1
     // loop over the rows of the remaining matrix
     for k := i+1 to n
       // computation of L
       L(k,i) := R(k,i) / R(i,i) // caution: check for zero values first
       // loop over the columns of the remaining matrix
       for j := i to n
         // computation of R
         R(k,j) := R(k,j) - L(k,i) * R(i,j)

   Output: matrix L, matrix R


#### Without pivoting, ${\displaystyle L,R}$ in-place

Alternatively, in the interest of memory efficiency, L and R can be developed simultaneously directly in A (in-place), as described by the following algorithm:

   Input: matrix A

   // n-1 iteration steps
   for i := 1 to n-1
     // loop over the rows of the remaining matrix
     for k := i+1 to n
       // computation of L
       A(k,i) := A(k,i) / A(i,i) // caution: check for zero values first
       // loop over the columns of the remaining matrix
       for j := i+1 to n
         // computation of R
         A(k,j) := A(k,j) - A(k,i) * A(i,j)

   Output: matrix A (in modified form)


#### With pivoting

The following algorithm performs an LR decomposition of the matrix ${\displaystyle A}$ with pivoting. It differs from the algorithms without pivoting only in that rows may be swapped:

   Input: matrix A

   // n-1 iteration steps
   for k := 1 to n-1
     // pivoting
     // find the element with the largest absolute value in the k-th column
     k_hat := k
     for i := k+1 to n
       if |A(i,k)| > |A(k_hat,k)|
         k_hat := i
     // swap rows
     create P^(k)
     swap rows k and k_hat
     // rest analogous to the algorithms above
     ...
   P := P^(n-1) * ... * P^(1)

   Output: matrix L, matrix R, matrix P
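The pivoted pseudocode above can be turned into a runnable sketch in plain Python (illustrative names; the permutation is returned as an index list `perm` rather than an explicit matrix P, and the L entries are stored below the diagonal of the returned matrix, as in the in-place variant):

```python
def lr_decompose(A):
    """In-place LR decomposition with column pivoting.
    Returns the combined L\\R storage and the row permutation."""
    n = len(A)
    A = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n - 1):
        # pivoting: largest absolute value in column k, rows k..n-1
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            raise ValueError("matrix is singular")
        # swap rows k and p, and record the swap
        A[k], A[p] = A[p], A[k]
        perm[k], perm[p] = perm[p], perm[k]
        # elimination step; L entries stored below the diagonal
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A, perm

LR, perm = lr_decompose([[1.0, 2.0, 3.0],
                         [1.0, 1.0, 1.0],
                         [3.0, 3.0, 1.0]])
```

For the example matrix of this article, the pivot search first selects the entry 3 in the first column, so the resulting permutation differs from the unpivoted decomposition shown earlier.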


### Solving a linear system of equations

The original system of equations ${\displaystyle Ax=b}$ is now simplified as follows using the LR decomposition:

{\displaystyle {\begin{aligned}Ax=b\quad &{\text{and}}\quad PA=LR\\\Rightarrow PAx&=Pb\\\Leftrightarrow LRx&=Pb.\end{aligned}}}

Now define the following auxiliary variables

${\displaystyle y:=Rx\quad {\text{and}}\quad {\hat {b}}:=Pb.}$

As a result, the system ${\displaystyle Ax=b}$ has been transformed into a simplified structure:

${\displaystyle Ly={\hat {b}}\quad {\text{and}}\quad Rx=y.}$

These can easily be solved by forward and backward substitution.

#### Forward substitution

In forward substitution, one calculates a solution ${\displaystyle y}$ of the linear system ${\displaystyle Ly=b}$, or, when calculating with pivoting, of ${\displaystyle Ly=Pb={\hat {b}}}$. This is related to the solution ${\displaystyle x}$ of the original system of equations via the equation ${\displaystyle y=Rx}$.

Written out, the system of equations ${\displaystyle Ly=b}$ has the following form:

${\displaystyle {\begin{pmatrix}l_{11}&0&0&\ldots &0\\l_{21}&l_{22}&0&&\vdots \\l_{31}&l_{32}&l_{33}&\ddots &\vdots \\\vdots &\vdots &\vdots &\ddots &0\\l_{n1}&l_{n2}&l_{n3}&\ldots &l_{nn}\end{pmatrix}}\cdot {\begin{pmatrix}y_{1}\\y_{2}\\y_{3}\\\vdots \\y_{n}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}\\b_{3}\\\vdots \\b_{n}\end{pmatrix}}}$

The following formula then applies to the components ${\displaystyle y_{i}}$:

${\displaystyle y_{i}={\frac {1}{l_{ii}}}\left(b_{i}-\sum _{k=1}^{i-1}l_{ik}\cdot y_{k}\right).}$

Starting with ${\displaystyle y_{1}={\frac {b_{1}}{l_{11}}}}$, one can compute ${\displaystyle y_{1},y_{2},\ldots ,y_{n}}$ one after the other, each time using the ${\displaystyle y_{i}}$ already known.

#### Back substitution

In back substitution, one calculates the solution ${\displaystyle x}$ of the original system of equations by solving ${\displaystyle Rx=y}$ in a manner similar to forward substitution. The difference is that one starts at ${\displaystyle x_{n}={\frac {y_{n}}{r_{nn}}}}$ and then calculates the values ${\displaystyle x_{n-1},x_{n-2},\ldots ,x_{1}}$ one after the other. The corresponding formula is

${\displaystyle x_{i}={\frac {1}{r_{ii}}}\left(y_{i}-\sum _{k=i+1}^{n}r_{ik}\cdot x_{k}\right).}$
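The two substitution formulas can be sketched in plain Python (illustrative function names; `L` is lower triangular, `R` upper triangular, as in this section), applied here to the L and R of the introductory example:

```python
def forward_substitute(L, b):
    # y_i = (b_i - sum_{k<i} l_ik * y_k) / l_ii
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][k] * y[k] for k in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def back_substitute(R, y):
    # x_i = (y_i - sum_{k>i} r_ik * x_k) / r_ii
    n = len(R)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (y[i] - s) / R[i][i]
    return x

# L and R from the introductory example, right-hand side b = (2, 2, 0)
L = [[1, 0, 0], [1, 1, 0], [3, 3, 1]]
R = [[1, 2, 3], [0, -1, -2], [0, 0, -2]]
y = forward_substitute(L, [2, 2, 0])
x = back_substitute(R, y)
print(x)  # → [5.0, -6.0, 3.0]
```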

### Incomplete decompositions

The LR decomposition has the disadvantage that it is often densely populated even for sparse matrices. If, instead of all entries, only those in a given sparsity pattern are computed, one speaks of an incomplete LU decomposition. This provides a favorable approximation of the matrix ${\displaystyle A}$ and can therefore be used as a preconditioner in the iterative solution of linear systems of equations. In the case of symmetric positive definite matrices, one speaks of an incomplete Cholesky decomposition.

## Properties of the procedure

### Computing effort and storage space requirements

The number of arithmetic operations for the LR decomposition of an ${\displaystyle n\times n}$ matrix is approximately ${\displaystyle {\tfrac {2}{3}}n^{3}}$. The effort for forward and backward substitution is quadratic (${\displaystyle {\mathcal {O}}(n^{2})}$) and therefore negligible overall. The Gaussian elimination method is a fast direct method for solving linear systems of equations; a QR decomposition requires at least twice as many arithmetic operations. Nevertheless, the algorithm should only be used for systems of equations of small to medium dimension (up to about ${\displaystyle n=10000}$). For matrices of higher dimension, iterative methods are often better. These approach the solution step by step and require ${\displaystyle {\mathcal {O}}(n^{2})}$ arithmetic operations per step for a fully populated matrix. The speed of convergence of such methods depends strongly on the properties of the matrix, and the specific computing time required is difficult to predict.

The calculation can be carried out in the memory of the matrix ${\displaystyle A}$, so that apart from storing ${\displaystyle A}$ itself no additional memory is required. For a fully populated matrix of dimension ${\displaystyle n=1000}$, one would have to store a million coefficients; in the IEEE 754 double format this corresponds to about 8 megabytes. In the case of iterative methods that work with matrix-vector multiplications, however, explicit storage of ${\displaystyle A}$ itself may not be necessary, so that these methods may be preferable.

For special cases, effort and storage space can be significantly reduced by exploiting special properties of the matrix and its LR decomposition. The Cholesky decomposition for symmetric positive definite matrices requires only half the arithmetic operations and memory. Another example are band matrices with a fixed bandwidth ${\displaystyle m}$, since here the LR decomposition preserves the band structure and the effort is reduced to ${\displaystyle {\mathcal {O}}(nm^{2})}$. For a few special sparse matrices, it is possible to exploit the sparsity structure so that the LR decomposition also remains sparse. Both go hand in hand with a reduced memory requirement.

### Accuracy

#### Prerequisites for accuracy

In order for the computation of ${\displaystyle x}$ to be sufficiently accurate, the condition of the matrix must not be too bad and the machine precision used must not be too low. Moreover, one needs a solution procedure that is sufficiently stable. A good algorithm is therefore characterized by high stability.

In general, the procedure is unstable without pivoting. Therefore, column pivoting is mostly used for the solution. This means that the method can be carried out stably for most matrices, as became particularly clear through the work of James H. Wilkinson after the Second World War. However, matrices can be specified for which the stability constant grows exponentially with the dimension of the matrix. With complete pivoting, the stability can be improved, but the effort for the pivot search grows to ${\displaystyle {\mathcal {O}}(n^{3})}$, so it is rarely used. QR decompositions generally have better stability, but they are also more expensive to compute.

In the case of strictly diagonally dominant or positive definite matrices (see also Cholesky decomposition), the Gaussian method is stable and can be carried out without pivoting; no zeros occur on the diagonal.

#### Post-iteration

A practical approach to compensating for these computational inaccuracies is post-iteration by means of a splitting method, since the LR decomposition provides a good approximation of the matrix A that can be inverted easily. To do this, one starts with the computed solution ${\displaystyle x_{0}=x}$ and in each step computes the residual

${\displaystyle r_{k}=b-Ax_{k}.}$

Then the solution ${\displaystyle z_{k}}$ of the system of equations

${\displaystyle Az_{k}=r_{k}}$

and sets

${\displaystyle x_{k+1}=x_{k}+z_{k}.}$

Since mostly only small corrections are involved, a few iteration steps are often sufficient. In general, however, a higher precision is required to compute the residual ${\displaystyle r_{k}}$. If the post-iteration is not sufficient to achieve the desired accuracy, the only remaining options are to choose a different method or to transform the problem in order to obtain a more favorable matrix, for example one with a lower condition number.

The post-iteration is used, for example, in the LAPACK routine DSGESV. In this routine, the LR decomposition is computed in single precision, and double precision of the solution is achieved by post-iteration with a residual computed in double precision.
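The iteration can be sketched in plain Python. This is an illustrative toy, not the LAPACK routine: the inexact LR decomposition is simulated by solving with an intentionally perturbed copy `A_tilde` of A (an assumption made purely for the demonstration), and a 2×2 system is solved exactly by Cramer's rule:

```python
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]           # exact solution: x = (1/11, 7/11)

# perturbed copy of A, standing in for an inexact low-precision factorization
A_tilde = [[4.05, 1.0], [1.0, 2.95]]

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def solve2(M, rhs):
    # exact 2x2 solve by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

x = solve2(A_tilde, b)                    # inaccurate initial solution x_0
for _ in range(20):                       # post-iteration
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # residual r_k
    z = solve2(A_tilde, r)                # correction step: A z_k = r_k
    x = [xi + zi for xi, zi in zip(x, z)]

# converges to the exact solution despite the perturbed solver
assert abs(x[0] - 1/11) < 1e-12 and abs(x[1] - 7/11) < 1e-12
```

Each step only requires a matrix-vector product and one cheap solve with the stored factorization, which is why the correction loop costs far less than recomputing the decomposition.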

## The Gaussian method as a theoretical aid

In addition to its importance for the numerical treatment of uniquely solvable systems of linear equations, the Gauss method is also an important aid in theoretical linear algebra.

### Statements on the solvability of the linear system of equations

A system of linear equations can have one, several, or no solutions. Using complete pivoting, the Gaussian method brings every matrix into a triangular echelon form in which all entries below a certain row are zero and no zero entries appear on the diagonal. The rank of the matrix then equals the number of non-zero rows of the matrix. The solvability then follows from the interplay with the right-hand side: if non-zero entries of the right-hand side belong to the zero rows, the system of equations is unsolvable; otherwise it is solvable, and the dimension of the solution set equals the number of unknowns minus the rank.

Example:

${\displaystyle {\begin{array}{rcrcl}x&+&4y&=&8,\\3x&+&12y&=&24.\end{array}}}$

Since the second equation is a multiple of the first, the system of equations has infinitely many solutions. When x is eliminated from the second equation, it vanishes completely and only the first equation remains. Solving it for x, the solution set can be given as a function of y:

${\displaystyle {\begin{array}{rcl}x&=&8-4y\\L&=&\{(8-4y,y)^{T}\mid y\in \mathbb {R} \}\end{array}}}$
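A quick check in plain Python that every element of this solution set satisfies both equations of the example (the sample values for y are arbitrary):

```python
for y in [-2.0, 0.0, 1.5, 7.0]:
    x = 8 - 4 * y
    assert x + 4 * y == 8          # first equation
    assert 3 * x + 12 * y == 24    # second equation
```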

### Determinant

The Gaussian method also provides a way of calculating the determinant of a matrix. Since the elementary row operations have determinant 1, except for row swaps, whose determinant is −1 (this only changes the sign and can therefore easily be corrected), the resulting upper triangular matrix has the same determinant as the original matrix, but it can be computed much more simply: it is the product of the diagonal elements.
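A minimal sketch in plain Python (illustrative names): row additions leave the determinant unchanged, each row swap flips its sign, and the determinant of the resulting triangular matrix is the product of its diagonal:

```python
def determinant(A):
    n = len(A)
    A = [row[:] for row in A]
    sign = 1.0
    for i in range(n - 1):
        # partial pivoting (also avoids division by zero)
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if A[p][i] == 0:
            return 0.0              # singular matrix
        if p != i:
            A[i], A[p] = A[p], A[i]
            sign = -sign            # a row swap changes the sign
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
    prod = sign
    for i in range(n):
        prod *= A[i][i]             # product of the diagonal elements
    return prod

# the example matrix of this article has determinant 1*(-1)*(-2) = 2
print(determinant([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0], [3.0, 3.0, 1.0]]))
```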

### Calculating the inverse

Another use of the Gaussian method is to calculate the inverse of a matrix. For this purpose, the algorithm is applied to a scheme extended on the right by an identity matrix and, after the first phase, continued until an identity matrix is reached on the left. The inverse matrix then stands on the right. This method is not recommended numerically, and the explicit calculation of the inverse can usually be avoided.

## History

The Chinese mathematics book Jiu Zhang Suanshu (Eng. Nine Chapters on the Mathematical Art), written between 200 BC and 100 AD, already provides an exemplary and clear demonstration of the algorithm based on the solution of a system with three unknowns. In 263, Liu Hui published a comprehensive commentary on the book, which was then incorporated into the corpus. Jiu Zhang Suanshu remained an essential source of mathematical education in China and the surrounding countries until the 16th century.

In Europe, it was not until 1759 that Joseph-Louis Lagrange published a procedure containing the basic elements. As part of his development and application of the method of least squares, Carl Friedrich Gauß dealt with linear systems of equations, namely the normal equations that occur there. His first publication on the subject dates from 1810 (Disquisitio de elementis ellipticis Palladis), but as early as 1798 he cryptically noted in his diaries that he had solved the problem of elimination. What is certain is that he used the method for calculating the orbit of the asteroid Pallas between 1803 and 1809. In the 1820s he first described something like an LR decomposition. In the following years, the elimination method was mainly used in geodesy (see Gauß's achievements), and so the second namesake of the Gauß-Jordan method is not the mathematician Camille Jordan but the geodesist Wilhelm Jordan.

During and after the Second World War, the study of numerical methods gained in importance, and the Gaussian method was increasingly applied to problems independent of the least squares method. John von Neumann and Alan Turing defined the LR decomposition in the form commonly used today and examined the phenomenon of rounding errors. These questions were only satisfactorily resolved in the 1960s by James Hardy Wilkinson, who showed that the method with pivoting is backward stable.
