List of numerical methods
This list of numerical methods organizes the methods of numerical mathematics by area of application.
Systems of linear equations
 Gaussian elimination (or LU decomposition): A classic direct method, but too costly for very large matrices.
 Cholesky decomposition : For symmetric positive definite matrices, a symmetric decomposition similar to the LU decomposition can be computed with half the effort.
 QR decomposition : Also a direct method, with at least twice the running time of Gaussian elimination but with better stability properties. Implemented using Householder transformations, it is particularly suitable for linear least-squares problems.

 Splitting methods : Classic iterative methods.
 Gauss-Seidel method : Also known as the single-step method.
 Jacobi method : Also known as the total-step method.
 Richardson method
 Chebyshev iteration : a splitting method with additional acceleration
 SOR method
 SSOR method
 Iterative refinement : Iterative improvement of the solution of a direct method; related to the basic idea of Krylov subspace methods.
 Krylov subspace methods : Modern iterative methods intended for large, sparse systems of equations. An important special case for symmetric positive definite problems is the conjugate gradient method.
 Multigrid methods : Modern methods with linear complexity, especially for systems of equations arising from partial differential equations.
 Preconditioning : A technique for improving the condition of a matrix in Krylov subspace methods.
 ILU decomposition : an important preconditioning technique.
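To illustrate the splitting methods above, here is a minimal Python sketch of the Jacobi iteration; the small test system is chosen purely for illustration and assumes a diagonally dominant matrix so that the iteration converges.

```python
def jacobi(A, b, x0, iterations=50):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        # Each component is updated using only values from the previous iterate
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant system: 4x + y = 9, x + 3y = 5, with solution (2, 1)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 5.0]
print(jacobi(A, b, [0.0, 0.0]))  # converges toward [2.0, 1.0]
```

Replacing the previous iterate by already-updated components inside the inner loop would turn this into the Gauss-Seidel method.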
Nonlinear systems of equations
 Bisection : A very simple root-finding method based on repeatedly halving an interval. Converges linearly; the error is roughly halved in each iteration step.
 Bisection-exclusion : A special bisection method for polynomials that brackets all zeros within a starting region as precisely as required.
 Regula falsi , secant method : Simple iterative methods for finding zeros of one-dimensional functions.
 Fixed-point iteration : A class of linearly convergent methods for finding fixed points of functions, also applicable in several dimensions.
 Newton's method : A quadratically convergent method for finding zeros of differentiable functions. Also applicable in several dimensions, where a linear system of equations must be solved in each iteration step.
 Quasi-Newton methods : Modifications of Newton's method in which only an approximation of the derivative is used.
 Halley's method , Euler-Chebyshev method : cubically convergent methods for finding zeros of twice differentiable functions. Also applicable in several dimensions, where two linear systems of equations must then be solved in each step.
 Gradient method : A slowly converging method for solving a minimization problem.
 Gauss-Newton method : A locally quadratically convergent method for solving nonlinear least-squares problems.
 Levenberg-Marquardt algorithm : a combination of the Gauss-Newton method with a trust-region strategy.
 Homotopy methods : Methods in which a freely chosen problem with a simple solution is connected continuously to the given problem. In many cases the solution of the simple problem can be continued into a solution of the actual problem.
 Bairstow's method : A special iterative method for determining complex zeros of polynomials using only real arithmetic.
 Weierstrass (Dochev-Durand-Kerner-Presic) method , Aberth-Ehrlich method , splitting circle method : Special methods derived from Newton's method for determining all complex zeros of a polynomial simultaneously.
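A minimal Python sketch of Newton's method in one dimension; the test function and starting value are chosen for illustration (computing the square root of 2 as a zero of x² − 2).

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return x

# Positive zero of f(x) = x^2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ≈ 1.4142135623730951
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles with each step, so a handful of iterations suffices here.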
Numerical integration
 Newton-Cotes formulas : Simple quadrature formulas based on polynomial interpolation.
 Gaussian quadrature : quadrature formulas with maximal degree of exactness.
 Romberg integration : a technique for improving Newton-Cotes formulas by extrapolation.
 Midpoint rule
 Trapezoidal rule
 Simpson rule / Kepler's barrel rule
 Lie integration : Integration method that is based on a generalized differential operator
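As a concrete example of the simplest of these quadrature formulas, here is a minimal sketch of the composite trapezoidal rule; the integrand and interval are chosen for illustration.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h = (b - a) / n."""
    h = (b - a) / n
    # Endpoints enter with weight 1/2, interior nodes with weight 1
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Integral of x^2 over [0, 1] is exactly 1/3
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
print(approx)
```

The error of the composite rule decreases quadratically in h, which is the starting point for the Romberg extrapolation mentioned above.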
Approximation and interpolation
 Polynomial interpolation : Interpolation using polynomials.
 Spline interpolation : Interpolation using piecewise polynomials.
 Trigonometric interpolation : Interpolation using trigonometric polynomials.
 Remez algorithm : Finds the best approximation with respect to the supremum norm.
 De Casteljau algorithm : An algorithm for evaluating Bézier curves.
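The De Casteljau algorithm can be sketched in a few lines of Python: a Bézier curve point is obtained by repeated linear interpolation between control points. The control polygon below is an illustrative example.

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Interpolate between each pair of neighboring points
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Quadratic Bézier curve defined by three control points
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(de_casteljau(ctrl, 0.5))  # → (1.0, 1.0)
```

At t = 0 and t = 1 the scheme reproduces the first and last control points exactly, reflecting the endpoint interpolation property of Bézier curves.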
Optimization
 Hill climbing : a general class of optimization methods
 BFGS method : Method for solving nonlinear optimization problems.
 Branch and bound : an enumeration method for solving integer optimization problems.
 Cutting-plane method , branch and cut : methods for solving integer linear optimization problems.
 Pivot methods : the basic exchange methods of linear optimization.
 Simplex method : a family of pivot methods for linear optimization.
 Downhill simplex method : a derivative-free method of nonlinear optimization.
 Interior point methods : iterative methods, also for nonlinear optimization problems.
 Logarithmic barrier method : also an iterative method for nonlinear optimization problems.
 Simulated annealing : a heuristic method for optimization problems of high complexity.
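The gradient method mentioned above is the simplest of these schemes and fits in a few lines; the quadratic objective, step size, and iteration count below are illustrative choices.

```python
def gradient_descent(grad, x0, step=0.1, iterations=200):
    """Gradient method: repeatedly step against the gradient direction."""
    x = list(x0)
    for _ in range(iterations):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Minimize f(x, y) = (x - 1)^2 + 2 (y + 2)^2, whose minimum is at (1, -2)
grad = lambda v: [2.0 * (v[0] - 1.0), 4.0 * (v[1] + 2.0)]
minimum = gradient_descent(grad, [0.0, 0.0])
print(minimum)  # converges toward [1.0, -2.0]
```

The linear convergence rate depends on the conditioning of the problem, which is why the list above calls the method slow; methods such as BFGS accelerate this considerably.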
Numerics of ordinary differential equations
 General linear methods : This class of methods provides a uniform representation of most of the methods in this list.
 Euler's polygon method : The simplest solution method, a one-step method of order 1.
 One-step methods : methods that use only information from the current time step to compute the next approximation.
 Multistep methods : methods that use information from several previous time steps. Depending on the number of steps, the required starting values must first be determined, e.g. with a one-step method.
 BDF methods : a special family of multistep methods for stiff initial value problems
 Adams-Bashforth methods : a family of explicit multistep methods.
 Adams-Moulton methods : a family of implicit multistep methods.
 Predictor-corrector method : The combination of an explicit and an implicit multistep method of the same error order. The explicit method produces an approximation (the predictor); the implicit method then improves it (the corrector).
 Runge-Kutta methods : A family of one-step methods, including the classical Runge-Kutta method.
 Rosenbrock-Wanner methods : a family of linearly implicit one-step methods for stiff initial value problems
 Newton-Störmer-Verlet leapfrog method : A popular symplectic integration method for problems of classical dynamics, from planetary motion to molecular dynamics, with improved conservation of dynamical invariants.
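Euler's polygon method, the simplest scheme in this section, can be sketched directly; the test equation y' = −y with y(0) = 1 is a standard illustrative choice whose exact solution is e^(−t).

```python
def euler(f, t0, y0, h, steps):
    """Explicit Euler polygon method: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = -y, y(0) = 1; the exact value y(1) = e^{-1} ≈ 0.36788
approx = euler(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)
print(approx)
```

As a method of order 1, halving the step size h roughly halves the error; the higher-order Runge-Kutta methods listed above achieve far more accuracy per step.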
Numerics of partial differential equations
 Finite element method : A modern, flexible method for solving mainly elliptic partial differential equations.
 Finite volume method : A modern method for solving conservation laws.
 Finite difference method : A classic method for arbitrary partial differential equations.
 Boundary element method : A method for solving elliptic PDEs in which only the boundary of the domain, not the domain itself (as in FEM), must be discretized.
 Spectral methods : Methods that use polynomials of very high order for discretization.
 Level-set method : A modern method for tracking moving interfaces.
 Finite point method : a newer method that computes with points only, without elements.
 Finite strip method : a special, simplified form of FEM with strips as elements
 Orthogonal collocation : a method for arbitrary partial differential equations, often combined with the finite difference method.
 Material point method : a method for various (especially strongly deforming) materials, which are approximated by material points together with a dynamic background grid
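The finite difference method is the easiest of these to sketch. Below is a minimal, illustrative explicit time step for the one-dimensional heat equation on a uniform grid with fixed (Dirichlet) boundary values; the grid size and initial condition are chosen for the example.

```python
def heat_step(u, alpha):
    """One explicit finite-difference step for the 1D heat equation.

    alpha = dt / dx^2 must not exceed 0.5 for stability; the boundary
    values u[0] and u[-1] are held fixed (Dirichlet conditions).
    """
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# A heat spike in the middle of the domain spreads out over time
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    u = heat_step(u, 0.25)
print(u)
```

The bracketed expression is the standard second-order central difference approximating the second spatial derivative.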
Calculation of eigenvalues
 QR algorithm : Computes all eigenvalues, but at high cost.
 LR algorithm : Also called staircase iteration; a forerunner of the QR algorithm, but less reliable.
 Power method : Computes the eigenvalue of largest magnitude.
 Subspace iteration : A multidimensional extension of the power method that computes several of the largest eigenvalues simultaneously.
 Inverse iteration : Rapidly computes eigenvalues close to a shift.
 Rayleigh quotient iteration : A special, very rapidly converging variant of inverse iteration with shift.
 Lanczos method : Computes a few eigenvalues of large sparse matrices.
 Arnoldi method : Computes a few eigenvalues of large sparse matrices.
 Jacobi eigenvalue method : Computes all eigenvalues and eigenvectors of small symmetric matrices.
 Jacobi-Davidson method : Computes a few eigenvalues of large sparse matrices.
 Folded spectrum method : Computes an eigenvalue and the associated eigenvector near a shift (in the interior of the spectrum).
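A minimal sketch of the power method described above; the 2×2 symmetric test matrix is chosen for illustration (its eigenvalues are 1 and 3).

```python
def power_iteration(A, iterations=100):
    """Power method: repeated multiplication by A aligns the vector
    with the eigenvector of the eigenvalue of largest magnitude."""
    n = len(A)
    x = [1.0] + [0.0] * (n - 1)  # arbitrary nonzero starting vector
    for _ in range(iterations):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]  # normalize to avoid overflow
    # Rayleigh quotient x^T A x / x^T x as the eigenvalue estimate
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(a * b for a, b in zip(Ax, x)) / sum(b * b for b in x)

# Eigenvalues of [[2, 1], [1, 2]] are 1 and 3
dominant = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(dominant)  # → 3.0 (approximately)
```

Applying the same iteration to (A − σI)^(−1) instead of A yields the inverse iteration with shift σ listed above.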
Others
 Fast Fourier Transform (FFT)
 Wavelet transform
 Regularization methods for solving ill-posed problems, especially the classical Tikhonov-Phillips regularization
 Multipole method
 Gram-Schmidt orthogonalization
 Heron's method for computing square roots
 Horner's scheme for evaluating polynomials
 Bresenham's algorithm for rasterizing lines and circles in computer graphics
 Extrapolation
 Summation methods and sequence transformations for divergent sequences and series
 Convergence acceleration for convergent sequences and series of real or complex numbers, vectors, and matrices
 CORDIC : an efficient iterative algorithm with which many transcendental functions can be implemented on microcomputers and in digital circuits.
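Horner's scheme from the list above fits in a few lines; the polynomial below is an illustrative example. The scheme evaluates a degree-n polynomial with only n multiplications and n additions.

```python
def horner(coeffs, x):
    """Horner's scheme: evaluate a polynomial whose coefficients are
    given from the highest degree down to the constant term."""
    result = 0.0
    for c in coeffs:
        result = result * x + c  # p(x) = (...((c_n x + c_{n-1}) x + ...) x + c_0
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1, evaluated at x = 3
value = horner([2.0, -6.0, 2.0, -1.0], 3.0)
print(value)  # → 5.0
```

Besides saving multiplications compared with naive evaluation, the nested form is also numerically more stable.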