List of numerical methods
This list of numerical methods groups the methods of numerical mathematics by application area.
Systems of linear equations
- Gaussian elimination (or LU decomposition): A classic direct method, but too expensive for large matrices.
- Cholesky decomposition: For symmetric positive definite matrices, a symmetric decomposition analogous to the LU decomposition can be computed with half the effort.
- QR decomposition: Also a direct method, with at least twice the running time of Gaussian elimination but with better stability properties. Implemented with Householder transformations, it is particularly suitable for linear least-squares problems.
- Splitting methods: Classic iterative methods.
- Gauss-Seidel method: Also known as the single-step method.
- Jacobi method: Also known as the total-step method.
- Richardson iteration
- Chebyshev iteration: A splitting method with additional acceleration.
- SOR method (successive over-relaxation)
- SSOR method (symmetric SOR)
- Iterative refinement: Iterative improvement of the solution of a direct method; related to the basic idea of Krylov subspace methods.
- Krylov subspace methods: Modern iterative methods intended for large, sparse systems of equations. An important special case for symmetric positive definite problems is the conjugate gradient method.
- Multigrid methods: Modern methods with linear complexity, especially for systems of equations arising from partial differential equations.
- Preconditioning: A technique for improving the condition of a matrix in Krylov subspace methods.
- ILU decomposition: An important preconditioning technique.
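As an illustration of the splitting methods above, here is a minimal sketch of the Jacobi iteration; the example matrix, right-hand side, and tolerance are illustrative choices, not part of any particular reference implementation:

```python
# Minimal Jacobi iteration for Ax = b (a splitting method).
# Converges for strictly diagonally dominant matrices such as the example below.

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Iterate x_new[i] = (b[i] - sum_{j != i} A[i][j] * x[j]) / A[i][i]."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant 2x2 example: solution is x = (0.3, -0.2).
A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 0.0]
x = jacobi(A, b, [0.0, 0.0])
```

In practice the Gauss-Seidel variant uses the already-updated components within each sweep and typically converges faster on the same matrices.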
Nonlinear systems of equations
- Bisection: A very simple method for finding zeros, based on repeatedly halving an interval. Converges linearly; the error is roughly halved in each iteration step.
- Bisection-exclusion: A special bisection method for polynomials that encloses all zeros within a starting region as precisely as desired.
- Regula falsi, secant method: Simple iterative methods for finding zeros of one-dimensional functions.
- Fixed-point iteration: A class of linearly convergent methods for finding fixed points of functions, also in the multi-dimensional case.
- Newton's method: A quadratically convergent method for finding zeros of differentiable functions. Also applicable in the multi-dimensional case, where a linear system of equations must be solved in each iteration step.
- Quasi-Newton methods: Modifications of Newton's method in which only an approximation of the derivative is used.
- Halley's method, Euler-Chebyshev method: Cubically convergent methods for finding zeros of twice differentiable functions. Also applicable in the multi-dimensional case, where two linear systems of equations must then be solved in each step.
- Gradient method: A slow method for solving a minimization problem.
- Gauss-Newton method: A locally quadratically convergent method for solving nonlinear least-squares problems.
- Levenberg-Marquardt algorithm: A combination of the Gauss-Newton method with a trust-region strategy.
- Homotopy method: A method in which a freely chosen problem with a simple solution is continuously connected to the given problem. In many cases the solution of the simple problem can be traced to a solution of the real problem.
- Bairstow's method: A special iteration method for determining complex zeros of polynomials using only real arithmetic.
- Weierstrass (Durand-Kerner) method, Aberth-Ehrlich method, splitting circle method: Special methods derived from Newton's method for the simultaneous determination of all complex zeros of a polynomial.
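The one-dimensional Newton iteration described above can be sketched in a few lines; the target function and starting value here are illustrative:

```python
# Newton's method for a zero of a differentiable function f:
# x_{k+1} = x_k - f(x_k) / f'(x_k), quadratically convergent near a simple zero.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:      # step size doubles as an error estimate
            break
    return x

# Example: zero of f(x) = x^2 - 2, i.e. the square root of 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles per step, so a handful of iterations suffices.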
Numerical integration
- Newton-Cotes formulas: Simple quadrature formulas based on polynomial interpolation.
- Gaussian quadrature: Quadrature formulas with optimal order of convergence.
- Romberg integration: A technique for improving Newton-Cotes formulas.
- Midpoint rule
- Trapezoidal rule
- Simpson's rule / Kepler's barrel rule
- Lie integration: An integration method based on a generalized differential operator.
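The trapezoidal rule from the list above, in its composite form, is a minimal sketch of a Newton-Cotes quadrature; the integrand and subdivision count are illustrative:

```python
# Composite trapezoidal rule: approximate the integral of f over [a, b]
# with n equal subintervals; the error decreases like O(h^2), h = (b - a)/n.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

Romberg integration builds on exactly this rule: it combines trapezoidal sums for successively halved step sizes to extrapolate the error away.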
Approximation and interpolation
- Polynomial interpolation: Interpolation using polynomials.
- Spline interpolation: Interpolation using piecewise polynomials.
- Trigonometric interpolation: Interpolation using trigonometric polynomials.
- Remez algorithm: Finds the best approximation with respect to the supremum norm.
- De Casteljau algorithm: An algorithm for evaluating Bézier curves.
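The De Casteljau algorithm just mentioned reduces to repeated linear interpolation between control points; the control polygon below is an illustrative choice:

```python
# De Casteljau's algorithm: evaluate a Bézier curve at parameter t
# by repeatedly interpolating between adjacent control points.

def de_casteljau(points, t):
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bézier curve with illustrative control points; at t = 0.5
# the curve passes through (1.0, 1.0) for this control polygon.
p = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
```

Compared with evaluating the Bernstein form directly, this scheme is numerically stable, which is why it is the standard evaluation method in computer-aided geometric design.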
Optimization
- Hill climbing: A general class of optimization methods.
- BFGS method: A method for solving nonlinear optimization problems.
- Branch and bound: An enumeration method for solving integer optimization problems.
- Cutting-plane method, branch and cut: Methods for solving integer linear optimization problems.
- Pivot methods: Basic exchange methods of linear optimization.
- Simplex method: A family of pivot methods for linear optimization.
- Downhill simplex method: A derivative-free method of nonlinear optimization.
- Interior point methods: Iterative methods, also for nonlinear optimization problems.
- Logarithmic barrier method: Also an iterative method for nonlinear optimization problems.
- Simulated annealing: A heuristic method for optimization problems of high complexity.
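Simulated annealing, the last entry above, can be sketched in a few lines; the objective function, cooling schedule, step size, and iteration count below are all illustrative choices rather than part of any canonical formulation:

```python
import math
import random

# Simulated annealing: accept worsening moves with probability exp(-delta/T)
# so the search can escape local minima while the "temperature" T decreases.

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9    # simple linear cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)    # random local move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best

# Example: the minimum of (x - 3)^2 is at x = 3.
xmin = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0)
```

The method gives no optimality guarantee, which is why it is listed as a heuristic; in practice the cooling schedule and move proposal dominate the quality of the result.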
Numerics of ordinary differential equations
- General linear methods: This class of methods allows a uniform representation of most of the methods in this list.
- Euler's polygon method: The simplest solution method, an explicit one-step method.
- One-step methods: Methods that use only information from the current time step to compute the next approximation.
- Multistep methods: Methods that use information from several previous time steps. Depending on the number of steps, the required starting values must be determined, e.g., with a one-step method.
- BDF methods: A special family of multistep methods for stiff initial value problems.
- Adams-Bashforth methods: A family of explicit multistep methods.
- Adams-Moulton methods: A family of implicit multistep methods.
- Predictor-corrector method: The combination of an explicit and an implicit multistep method of the same error order. The explicit method produces an approximation (the predictor), and the implicit method improves it (the corrector).
- Runge-Kutta methods: A family of one-step methods, including the classical Runge-Kutta method.
- Rosenbrock-Wanner methods: A family of linearly implicit one-step methods for stiff initial value problems.
- Newton-Störmer-Verlet leapfrog method: A popular symplectic integration method for problems of classical dynamics, from planetary motion to molecular dynamics, with improved conservation of dynamical invariants.
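Euler's polygon method, the simplest one-step method above, can be sketched as follows; the test equation and step size are illustrative:

```python
# Explicit Euler method for the initial value problem y' = f(t, y), y(t0) = y0:
# one step is y_{k+1} = y_k + h * f(t_k, y_k); global error is O(h).

def explicit_euler(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

# Example: y' = y with y(0) = 1; the exact solution at t = 1 is e ≈ 2.71828.
approx = explicit_euler(lambda t, y: y, 0.0, 1.0, 1.0 / 1000, 1000)
```

The first-order accuracy is the reason higher-order one-step methods such as the classical Runge-Kutta method are preferred in practice; they reuse the same loop structure with a more elaborate increment per step.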
Numerics of partial differential equations
- Finite element method: A modern, flexible method for solving mainly elliptic partial differential equations.
- Finite volume method: A modern method for solving conservation laws.
- Finite difference method: A classic method for arbitrary partial differential equations.
- Boundary element method: A method for solving elliptic PDEs in which only the boundary of the domain, and not the domain itself (as in FEM), has to be discretized.
- Spectral methods: Methods that use polynomials of very high order for discretization.
- Level-set method: A modern method for tracking moving interfaces.
- Finite point method: A newer computational method using only points but no elements.
- Finite strip method: A special, simplified form of FEM with strips as elements.
- Orthogonal collocation: A method for arbitrary partial differential equations, often combined with the finite difference method.
- Material point method: A method for various (especially strongly deforming) materials, which are approximated by points and a dynamic grid.
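As a minimal illustration of the finite difference method from the list above, here is an explicit (FTCS) scheme for the one-dimensional heat equation; the grid sizes and initial data are illustrative, and the scheme is stable only when the time step satisfies dt ≤ dx²/2:

```python
# Explicit finite differences (FTCS) for the 1D heat equation u_t = u_xx
# on [0, 1] with homogeneous Dirichlet boundary values u(0) = u(1) = 0.

def heat_step(u, dt, dx):
    """One time step: u_i += dt/dx^2 * (u_{i-1} - 2*u_i + u_{i+1})."""
    r = dt / dx ** 2
    return [0.0] + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                    for i in range(1, len(u) - 1)] + [0.0]

nx, dx = 21, 1.0 / 20
dt = 0.4 * dx ** 2               # satisfies the stability bound r <= 1/2
u = [1.0 if 0 < i < nx - 1 else 0.0 for i in range(nx)]   # initial heat bump
for _ in range(200):
    u = heat_step(u, dt, dx)
```

With the stability bound respected, each updated value is a convex combination of its neighbors, so the discrete solution obeys a maximum principle and decays monotonically toward zero, mirroring the continuous problem.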
Calculation of eigenvalues
- QR algorithm: Computes all eigenvalues, but at high cost.
- LR algorithm: Also called staircase iteration; a forerunner of the QR algorithm, but less reliable.
- Power iteration: Allows the computation of the eigenvalue of largest magnitude.
- Subspace iteration: A multi-dimensional extension of power iteration that allows the simultaneous computation of several of the largest eigenvalues.
- Inverse iteration: Allows the rapid computation of eigenvalues close to a shift.
- Rayleigh quotient iteration: A special, very fast converging variant of inverse iteration with shifts.
- Lanczos method: Computes some eigenvalues of large sparse matrices.
- Arnoldi method: Computes some eigenvalues of large sparse matrices.
- Jacobi eigenvalue method: Computes all eigenvalues and eigenvectors of small symmetric matrices.
- Jacobi-Davidson method: Computes some eigenvalues of large sparse matrices.
- Folded spectrum method: Computes an eigenvalue and the associated eigenvector near a shift (from the interior of the spectrum).
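Power iteration, the simplest entry above, fits in a few lines; the example matrix and iteration count are illustrative, and the maximum-norm normalization used here returns the magnitude of the dominant eigenvalue:

```python
# Power iteration: repeated multiplication x <- A x, with normalization,
# converges to the dominant eigenvector when one eigenvalue dominates in magnitude.

def power_iteration(A, x0, iters=200):
    x = list(x0)
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        lam = max(abs(v) for v in y)     # magnitude estimate of dominant eigenvalue
        x = [v / lam for v in y]
    return lam, x

# Example: the symmetric matrix [[2, 1], [1, 2]] has eigenvalues 3 and 1;
# power iteration converges to the dominant eigenvalue 3.
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The convergence rate is governed by the ratio of the two largest eigenvalue magnitudes, which motivates the shifted and inverse variants listed above.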
Others
- Fast Fourier Transform (FFT)
- Wavelet transform
- Regularization methods for solving ill-posed problems, especially the classic Tikhonov-Phillips regularization
- Multipole method
- Gram-Schmidt's orthogonalization method
- Heron's method for computing square roots
- Horner's scheme for evaluating polynomials
- Bresenham's algorithm for rasterizing lines and circles in computer graphics
- Extrapolation
- Summation methods and sequence transformations for divergent sequences and series
- Convergence acceleration for convergent sequences and series of real or complex numbers, vectors, and matrices
- CORDIC, an efficient iterative algorithm with which many transcendental functions can be implemented in microcomputers and digital circuits
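Horner's scheme from the list above can be sketched in a few lines; the example polynomial is an illustrative choice:

```python
# Horner's scheme: evaluate a polynomial with coefficients in descending order,
# a_n x^n + ... + a_0 = (...((a_n * x + a_{n-1}) * x + a_{n-2}) * x + ...) + a_0,
# using only n multiplications and n additions for a degree-n polynomial.

def horner(coeffs, x):
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Example: 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3 gives 5.
value = horner([2.0, -6.0, 2.0, -1.0], 3.0)
```

Besides saving multiplications compared with naive evaluation, the scheme is also the standard building block for synthetic division of a polynomial by a linear factor.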