The real relaxation parameter $\omega$ is chosen so that the method converges faster than the Gauss-Seidel method, which is recovered as the special case $\omega = 1$ of this formula.
Algorithm
A possible sketch of the algorithm, including a termination condition, is:

    choose an initial guess x^(0)
    repeat
        for i = 1 up to n
            x_i ← (1 − ω) x_i + (ω / a_ii) (b_i − Σ_{j≠i} a_ij x_j)
        next i
    until ‖x_new − x_old‖ < ε
An error tolerance $\varepsilon$ is assumed as an input variable for the algorithm; the approximate solution $x$ is the vector-valued return value. The error tolerance bounds the size of the last change in the iterate $x$.
For sparse matrices, the cost of the method per iteration is significantly reduced.
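The sketch above can be implemented directly; the following is a minimal version in plain Python, where the matrix, right-hand side, and relaxation parameter at the bottom are hypothetical example values.

```python
def sor(A, b, omega, eps=1e-10, max_iter=10_000):
    """Solve A x = b with the SOR method (A given as a list of rows)."""
    n = len(b)
    x = [0.0] * n  # initial guess x^(0) = 0
    for _ in range(max_iter):
        change = 0.0
        for i in range(n):
            # entries with j < i are already updated (Gauss-Seidel sweep)
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            change = max(change, abs(x_new - x[i]))
            x[i] = x_new
        if change < eps:  # last change in x is below the tolerance
            break
    return x

# Hypothetical example: a small symmetric positive definite system.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = sor(A, b, omega=1.2)
```

For this particular system the exact solution is $(1/11,\ 7/11)$, which the iterate approaches up to the chosen tolerance.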
Derivation
The SOR method results from relaxation of the Gauss-Seidel method:

$$x_i^{(k+1)} = (1-\omega)\,x_i^{(k)} + \frac{\omega}{a_{ii}}\Bigl(b_i - \sum_{j<i} a_{ij}\,x_j^{(k+1)} - \sum_{j>i} a_{ij}\,x_j^{(k)}\Bigr);$$
for $\omega = 1$ one recovers the Gauss-Seidel method. To analyze the method, the formulation as a splitting method is suitable. The matrix $A$ is written as the sum of a diagonal matrix $D$ and two strictly triangular matrices $L$ and $U$:

$$A = D + L + U$$
with $D = \operatorname{diag}(a_{11}, \dots, a_{nn})$, $L$ the strictly lower and $U$ the strictly upper triangular part of $A$.
The system of linear equations $Ax = b$ is then equivalent to the fixed-point equation

$$(D + \omega L)\,x = \omega b - \bigl(\omega U + (\omega - 1) D\bigr)\,x,$$
so the method is consistent, and it converges if and only if the spectral radius of the iteration matrix $M_{\mathrm{SOR}} = (D + \omega L)^{-1}\bigl((1-\omega)D - \omega U\bigr)$ is less than $1$.
The above formula for the component-wise calculation of the $x_i^{(k+1)}$ can be obtained from this matrix representation by exploiting the triangular shape of the matrix $D + \omega L$: this matrix can be inverted directly by forward substitution.
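The matrix form of the iteration can be checked numerically. The sketch below (assuming NumPy, with a hypothetical example matrix) iterates the fixed-point equation; the solve with $D + \omega L$ is exactly the forward substitution mentioned above.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # hypothetical example matrix
b = np.array([1.0, 2.0])
omega = 1.2

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, k=-1)      # strictly lower triangular part
U = np.triu(A, k=1)       # strictly upper triangular part

x = np.zeros_like(b)
for _ in range(100):
    # (D + omega L) x_new = omega b - (omega U + (omega - 1) D) x
    rhs = omega * b - (omega * U + (omega - 1) * D) @ x
    x = np.linalg.solve(D + omega * L, rhs)  # triangular system
```

After enough iterations, `x` agrees with the solution of $Ax = b$ to machine precision.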
Convergence
It can be shown that for a symmetric positive definite matrix $A$ the method converges for every $\omega \in (0, 2)$. To accelerate convergence compared with the Gauss-Seidel method, heuristic values between $1$ and $2$ are used; the optimal choice depends on the coefficient matrix $A$. Values $\omega < 1$ (underrelaxation) can be chosen to stabilize an iteration that otherwise diverges slightly.
Kahan's theorem (1958) shows that the method no longer converges for any $\omega$ outside the interval $(0, 2)$.
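Kahan's bound, $\rho(M_{\mathrm{SOR}}) \geq |\omega - 1|$, can be observed numerically. The following sketch (assuming NumPy and a hypothetical test matrix) builds the iteration matrix from the splitting above and records its spectral radius for several values of $\omega$.

```python
import numpy as np

def sor_iteration_matrix(A, omega):
    """M_SOR = (D + omega L)^(-1) ((1 - omega) D - omega U)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    return np.linalg.solve(D + omega * L, (1 - omega) * D - omega * U)

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # hypothetical example matrix
radii = {}
for omega in (0.5, 1.0, 1.5, 2.5):
    M = sor_iteration_matrix(A, omega)
    radii[omega] = max(abs(np.linalg.eigvals(M)))
```

For $\omega = 2.5$ the spectral radius is at least $1.5$, so the iteration diverges, while the values inside $(0, 2)$ give a spectral radius below $1$ for this matrix.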
For suitable matrices (so-called consistently ordered matrices), it can be shown that the optimal relaxation parameter is given by

$$\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho(M_J)^2}},$$

where $\rho(M_J)$ is the spectral radius of the iteration matrix $M_J = I - D^{-1}A$ of the Jacobi method.
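For a concrete matrix, this formula can be evaluated as follows (a sketch assuming NumPy; the small matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # hypothetical example matrix
D_inv = np.diag(1.0 / np.diag(A))
M_J = np.eye(len(A)) - D_inv @ A           # Jacobi iteration matrix
rho_J = max(abs(np.linalg.eigvals(M_J)))   # spectral radius rho(M_J)
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_J**2))
```

Since $\rho(M_J)$ is small here, $\omega_{\mathrm{opt}}$ lies only slightly above $1$; for matrices with $\rho(M_J)$ close to $1$, the optimal parameter approaches $2$.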
Literature
Andreas Meister: Numerics of Linear Systems of Equations. 3rd edition. Vieweg, 2007, ISBN 3-8348-0431-2