Gauss–Seidel method
In numerical mathematics, the Gauss–Seidel method or single-step method (after Carl Friedrich Gauss and Ludwig Seidel) is an algorithm for the approximate solution of linear systems of equations. Like the Jacobi method and the SOR method, it is a special splitting method. The method was first devised by Gauss, who did not publish it but only mentioned it in a letter to Gerling in 1823. It was not published until 1874, by Seidel, who was unaware of Gauss's earlier work.
The method was developed because Gaussian elimination, an exact solver, is very susceptible to calculation errors when carried out by hand. An iterative approach does not have this disadvantage.
Description of the procedure
Given is a linear system of equations $Ax = b$ in $n$ variables $x_1, \dots, x_n$ with the equations

$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n = b_i, \qquad i = 1, \dots, n.$
To solve this, an iteration process is carried out. An approximation vector $x^{(k)}$ to the exact solution is given. Now the $i$-th equation is solved for the $i$-th variable $x_i$, whereby, in contrast to the Jacobi method, which only uses the values of the last iteration step, the values already calculated in the current iteration step are used as well. That means for the $(k+1)$-th iteration step:

$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j<i} a_{ij} x_j^{(k+1)} - \sum_{j>i} a_{ij} x_j^{(k)} \right), \qquad i = 1, \dots, n.$
The result of the calculation is a new approximation vector $x^{(k+1)}$ for the desired solution vector $x$. If one repeats this process, one obtains a sequence of vectors that, in the case of convergence, approaches the solution vector more and more:

$\lim_{k \to \infty} x^{(k)} = x.$
The Gauss–Seidel method is inherently sequential: before an equation can be processed, the results of the previous equations of the same sweep must be available. It is therefore not well suited for use on parallel computers.
The algorithm sketch with termination condition reads:

input: start vector x, error bound ε
repeat
  Δ := 0
  for i := 1 to n
    z := b_i
    for j := 1 to n, j ≠ i
      z := z − a_ij · x_j
    next j
    z := z / a_ii
    Δ := max(Δ, |z − x_i|)
    x_i := z
  next i
until Δ < ε
output: approximate solution x
The initial assignment of the variable vector x, which can be chosen arbitrarily, and an error bound ε are the input variables of the algorithm; the approximate solution x is the vectorial return value. The error bound measures the size of the last change of the variable vector. As a prerequisite for the feasibility of the algorithm, the main diagonal elements a_ii must be different from zero.
In the case of sparse matrices, the effort of the method per iteration is significantly reduced.
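The component-wise iteration above can be sketched in Python as follows; the example system is an illustrative assumption (strictly diagonally dominant, so convergence is guaranteed), not taken from the article:

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=1e-10, max_iter=1000):
    """Solve A x = b with the Gauss-Seidel method.

    Components updated in the current sweep are reused immediately,
    in contrast to the Jacobi method. Requires nonzero diagonal entries.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        change = 0.0
        for i in range(n):
            # b_i minus all off-diagonal terms; x[:i] already holds new values
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            new_xi = s / A[i, i]
            change = max(change, abs(new_xi - x[i]))
            x[i] = new_xi
        if change < eps:  # termination: last sweep changed x by less than eps
            break
    return x

# Example: a strictly diagonally dominant system (illustrative assumption)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = gauss_seidel(A, b, x0=[0.0, 0.0, 0.0])
```

Note that each sweep only touches the nonzero entries of a row, which is why the per-iteration cost drops for sparse matrices.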
Description of the procedure in matrix notation
The matrix $A$ of the linear system of equations $Ax = b$ is decomposed into a diagonal matrix $D$, a strictly upper triangular matrix $U$ and a strictly lower triangular matrix $L$, so that:

$A = D + L + U.$

Then in every iteration step $D x^{(k+1)} = b - L x^{(k+1)} - U x^{(k)}$ holds. Rearranging formally yields

$(D + L)\, x^{(k+1)} = b - U x^{(k)}$

and from it

$x^{(k+1)} = (D + L)^{-1} \left( b - U x^{(k)} \right).$

One then chooses a start vector $x^{(0)}$ and inserts it into the iteration rule:

$x^{(k+1)} = (D + L)^{-1} \left( b - U x^{(k)} \right), \qquad k = 0, 1, 2, \dots$
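The matrix form of the iteration can be sketched directly with NumPy; the matrix and right-hand side below are the same illustrative assumption as before, not data from the article:

```python
import numpy as np

# One Gauss-Seidel step in matrix form: x_{k+1} = (D+L)^{-1} (b - U x_k).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, k=-1)      # strictly lower triangular part
U = np.triu(A, k=1)       # strictly upper triangular part
assert np.allclose(D + L + U, A)  # the splitting A = D + L + U

x = np.zeros(3)
for _ in range(100):
    # (D + L) x_new = b - U x; lower triangular, so this is a forward substitution
    x = np.linalg.solve(D + L, b - U @ x)
```

In practice one would never form the inverse of D + L; the triangular solve above is exactly the component-wise sweep of the previous section.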
Diagonal dominance and convergence
The method converges linearly if the spectral radius of the iteration matrix $-(D+L)^{-1}U$ is less than 1. In this case, Banach's fixed point theorem or the convergence theorem for the Neumann series (applied to a sufficiently large power of the iteration matrix) can be used, and the method converges. In the opposite case, the method diverges if the start error contains a component of an eigenvector belonging to an eigenvalue whose absolute value is greater than 1. The smaller the spectral radius, the faster the method converges.
The determination of the spectral radius is mostly impractical, which is why more convenient criteria are obtained via the sufficient condition that some matrix norm of the iteration matrix be less than 1. This matrix norm is at the same time the contraction constant in the sense of Banach's fixed point theorem.
In the event that both $D^{-1}L$ and $D^{-1}U$ are "small" matrices with respect to the chosen matrix norm, the following estimate of the matrix norm of $(D+L)^{-1}U$ holds (see Neumann series for the first factor):

$\| (D+L)^{-1} U \| = \| (I + D^{-1}L)^{-1} D^{-1} U \| \le \frac{\| D^{-1} U \|}{1 - \| D^{-1} L \|}.$

The last expression is less than 1 if $\|D^{-1}L\| + \|D^{-1}U\| < 1$. This is precisely the convergence condition of the Jacobi method; however, the estimate of the contraction constant of the Gauss–Seidel method obtained in this way is always less than or equal to the corresponding estimate of the contraction constant of the Jacobi method.
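The comparison of the two methods can be checked numerically by computing the spectral radii of their iteration matrices; the test matrix is an illustrative assumption:

```python
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

M_gs = -np.linalg.solve(D + L, U)    # Gauss-Seidel iteration matrix -(D+L)^{-1} U
M_jac = -np.linalg.solve(D, L + U)   # Jacobi iteration matrix -D^{-1} (L+U)

def rho(M):
    """Spectral radius: largest absolute value of the eigenvalues."""
    return max(abs(np.linalg.eigvals(M)))

# For this matrix both spectral radii are below 1, and the Gauss-Seidel
# radius is smaller, matching its faster convergence.
```

This is a single numerical example, not a proof; for general matrices the two methods need not even converge for the same problems.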
The simplest and most common sufficient convergence criterion, diagonal dominance, results for the supremum norm of the vectors and the row sum norm as the associated induced matrix norm. It requires the row sum criterion to be met, i.e. the inequality

$|a_{ii}| > \sum_{j \ne i} |a_{ij}| \qquad \text{for } i = 1, \dots, n.$
The greater the smallest difference between the two sides of the inequality, the faster the method converges. One can try to increase this difference by interchanging rows and columns, i.e. by renumbering the rows and columns, for example by bringing the elements of the matrix with the largest absolute value onto the diagonal. The row interchanges must of course also be carried out on the right-hand side of the system of equations.
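The row sum criterion is straightforward to verify programmatically; the two test matrices below are illustrative assumptions:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check the row sum criterion |a_ii| > sum_{j != i} |a_ij| for every row."""
    A = np.abs(np.asarray(A, dtype=float))
    off_diag = A.sum(axis=1) - np.diag(A)   # sum of off-diagonal absolute values
    return bool(np.all(np.diag(A) > off_diag))

A_good = [[4.0, 1.0, 1.0],
          [1.0, 5.0, 2.0],
          [0.0, 1.0, 3.0]]   # dominant: 4 > 2, 5 > 3, 3 > 1
A_bad = [[1.0, 2.0],
         [3.0, 1.0]]         # not dominant: 1 < 2 in the first row
```

Note that the criterion is sufficient, not necessary: the method may converge for matrices that fail it.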
Applications
The method is unsuitable as a stand-alone solver for modern applications such as large, sparse systems of equations that originate from the discretization of partial differential equations. However, it is used with success as a preconditioner in Krylov subspace methods or as a smoother in multigrid methods.
Extension to nonlinear systems of equations
The idea of the Gauss–Seidel method can be extended to nonlinear systems of equations $F(x) = 0$ with a multidimensional nonlinear function $F \colon \mathbb{R}^n \to \mathbb{R}^n$. As in the linear case, in the $i$-th step the $i$-th equation is solved with respect to the $i$-th variable, using the previous approximate values for the later variables and the previously calculated new values for the earlier variables:
For k = 0, 1, ... until convergence:
  For i = 1, ..., n:
    Solve $f_i(x_1^{(k+1)}, \dots, x_{i-1}^{(k+1)}, x_i^{(k+1)}, x_{i+1}^{(k)}, \dots, x_n^{(k)}) = 0$ for $x_i^{(k+1)}$.
Here, "solving" is usually to be understood as the application of a further iterative method for nonlinear equations, such as Newton's method. In order to distinguish this method from the Gauss–Seidel method for linear systems of equations, one often speaks of the nonlinear Gauss–Seidel method. The convergence of the method again follows from Banach's fixed point theorem and is again linear.
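A minimal sketch of the nonlinear Gauss–Seidel method, with a scalar Newton iteration as the inner solver; the two-equation system below is an illustrative assumption chosen to be contractive, not an example from the article:

```python
import numpy as np

def f(i, x):
    """Component equations f_1, f_2 of a small contractive system (assumed)."""
    if i == 0:
        return x[0] + 0.5 * np.sin(x[1]) - 1.0
    return x[1] + 0.5 * np.cos(x[0]) - 1.0

def df(i, x, h=1e-7):
    """Numerical derivative of f_i with respect to x_i, for the inner Newton step."""
    e = np.zeros_like(x)
    e[i] = h
    return (f(i, x + e) - f(i, x - e)) / (2 * h)

x = np.zeros(2)
for _ in range(50):          # outer Gauss-Seidel sweeps
    for i in range(2):       # solve the i-th equation for x_i ...
        for _ in range(10):  # ... by a few scalar Newton steps
            x[i] -= f(i, x) / df(i, x)
```

Each inner loop only adjusts the single variable $x_i$, with all other components frozen at their most recent values, which is exactly the structure of the linear method.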
Literature
 A. Meister: Numerics of Linear Systems of Equations, 2nd edition, Vieweg, 2005, ISBN 3-528-13135-7
 R. Barrett et al.: Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edition, SIAM, Philadelphia, 1994
 W. C. Rheinboldt: Methods for Solving Systems of Nonlinear Equations, 2nd edition, SIAM, 1998, ISBN 0-89871-415-X