List of numerical analysis topics
This is a list of numerical analysis topics, by Wikipedia page.
General
- Iterative method
 - Series acceleration — methods to accelerate the speed of convergence of a series
- Aitken's delta-squared process — most useful for linearly converging sequences (a minimal Python sketch follows this list)
 - Minimum polynomial extrapolation — for vector sequences
 - Richardson extrapolation
 - Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
 - Van Wijngaarden transformation — for accelerating the convergence of an alternating series
 
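As an illustration of series acceleration, here is a minimal Python sketch of Aitken's delta-squared process applied to the slowly converging Leibniz series for π/4 (illustrative only; the function name and the number of terms are choices made for this example, not part of the list):

    def aitken(seq):
        """Aitken's delta-squared transform of a convergent sequence."""
        return [s2 - (s2 - s1) ** 2 / ((s2 - s1) - (s1 - s0))
                for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]

    # Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... -> pi/4
    partial, total = [], 0.0
    for k in range(20):
        total += (-1) ** k / (2 * k + 1)
        partial.append(total)

    print(4 * partial[-1])          # plain partial sum: about 2 correct digits
    print(4 * aitken(partial)[-1])  # accelerated: several more digits
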
 - Level set method
- Level set (data structures) — data structures for representing level sets
 
 - Abramowitz and Stegun — book containing formulas and tables of many special functions
- Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
 
 - Curse of dimensionality
 - Local convergence and global convergence — whether you need a good initial guess to get convergence
 - Superconvergence
 - Discretization
- Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
 
 - Difference quotient
 - Complexity:
- Computational complexity of mathematical operations
 - Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
 
 - Symbolic-numeric computation — combination of symbolic and numeric methods
 - ABS methods
 - Cultural aspects:
- International Workshops on Lattice QCD and Numerical Analysis
 - Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
 
 
Error
- Approximation
 - Approximation error
 - Arithmetic precision
 - Condition number
 - Discretization error
 - Floating point number
- Guard digit — extra precision introduced during a computation to reduce round-off error
 - Arbitrary-precision arithmetic
 - Truncation — rounding a floating-point number by discarding all digits after a certain digit
 
 - Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
 - Loss of significance
 - Numerical error
 - Numerical stability
 - Error propagation:
 - Relative difference — the relative difference between x and y is |x − y| / max(|x|, |y|)
 - Round-off error
 - Significant figures
- False precision — giving more significant figures than appropriate
 
 - Truncation error — error committed by doing only a finite number of steps
 - Well-posed problem
 - Affine arithmetic
 
Elementary and special functions
- Summation:
- Kahan summation algorithm — reduces the round-off error incurred when summing a sequence of floating-point numbers
 - Pairwise summation — recursively splits the sum in half to reduce the growth of round-off error
 
 - Multiplication:
- Multiplication algorithm — general discussion, simple methods
 - Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication
 - Toom–Cook multiplication — generalization of Karatsuba multiplication
 - Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
 - Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
 
 - Exponentiation:
- Exponentiation by squaring — computes powers using O(log n) multiplications by repeated squaring
 - Addition-chain exponentiation — uses addition chains to minimize the number of multiplications
 
 - Polynomials:
- Horner scheme (a minimal Python sketch follows this list)
 - Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
 - Clenshaw algorithm
 - De Casteljau's algorithm
 
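A minimal Python sketch of the Horner scheme (illustrative only; the coefficient ordering, from highest degree down to the constant term, is a convention chosen for this example):

    def horner(coeffs, x):
        """Evaluate a polynomial using one multiply and one add per coefficient."""
        result = 0.0
        for a in coeffs:
            result = result * x + a
        return result

    # p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
    print(horner([2.0, -6.0, 2.0, -1.0], 3.0))  # 5.0
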
 - Square roots and other roots:
- Integer square root
 - Methods of computing square roots (the divide-and-average iteration is sketched after this list)
 - nth root algorithm
 - Shifting nth root algorithm — similar to long division
 - Alpha max plus beta min algorithm — approximates √(x² + y²)
 - Fast inverse square root — calculates 1 / √x using details of the IEEE floating-point system
 
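The divide-and-average iteration treated under Methods of computing square roots is Newton's method applied to x² − a = 0; a minimal Python sketch (illustrative only; assumes a > 0, and the tolerance is an arbitrary choice):

    def heron_sqrt(a, tol=1e-12):
        """Heron's (Babylonian) iteration; converges quadratically for a > 0."""
        x = max(a, 1.0)  # any positive starting guess will do
        while abs(x * x - a) > tol * a:
            x = 0.5 * (x + a / x)
        return x

    print(heron_sqrt(2.0))  # 1.4142135623...
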
 - Elementary functions (exponential, logarithm, trigonometric functions):
- Generating trigonometric tables
 - CORDIC — shift-and-add algorithm using a table of arc tangents
 - BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
 
 - Gamma function:
- Lanczos approximation
 - Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
 
 - Spigot algorithm — algorithms that can compute individual digits of a real number
 - Computation of π:
- Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
 - Gauss–Legendre algorithm — iteration which converges quadratically to π, based on the arithmetic-geometric mean (a minimal Python sketch follows this list)
 - Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
 - Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
 - Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
 
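A minimal Python sketch of the Gauss–Legendre iteration for π; the number of correct digits roughly doubles per pass, so three passes already exhaust double precision (illustrative only; for more digits one would use arbitrary-precision arithmetic):

    from math import sqrt

    a, b, t, p = 1.0, 1.0 / sqrt(2.0), 0.25, 1.0
    for _ in range(3):
        a_next = (a + b) / 2
        b = sqrt(a * b)
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
    print((a + b) ** 2 / (4 * t))  # 3.14159265358979...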
 
Numerical linear algebra
Numerical linear algebra — study of numerical algorithms for linear algebra problems
Basic concepts
- Types of matrices appearing in numerical analysis:
- Sparse matrix
 - Circulant matrix
 - Triangular matrix
 - Diagonally dominant matrix
 - Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
 - Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
 - Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
 
 - Algorithms for matrix multiplication:
- Strassen algorithm
 - Coppersmith–Winograd algorithm
 - Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid
 - Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication
 
 - Matrix decompositions:
- LU decomposition — lower triangular times upper triangular
 - QR decomposition — orthogonal matrix times triangular matrix
 - Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
 - Decompositions by similarity:
- Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
 - Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
 - Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
 - Schur decomposition — similarity transform bringing the matrix to a triangular matrix
 
 
 
Solving systems of linear equations
- Gaussian elimination
- Row echelon form — matrix in which each row's leading nonzero entry lies to the right of the one in the row above, with zeros below it
 - Gauss–Jordan elimination — variant in which the entries below the pivot are also zeroed
 - Montante's method — variant which ensures that all entries remain integers if the initial matrix has integer entries
 - Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices
 
 - LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
- Crout matrix decomposition
 - LU reduction — a special parallelized version of a LU decomposition algorithm
 
 - Block LU decomposition
 - Cholesky decomposition — for solving a system with a positive definite matrix
 - Frontal solver — for sparse matrices; used in finite element methods
 - Levinson recursion — for Toeplitz matrices
 - SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
 - Iterative methods:
- Jacobi method (a minimal Python sketch follows this list)
 - Gauss–Seidel method
- Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
 
 - Modified Richardson iteration
 - Conjugate gradient method (CG) — assumes that the matrix is positive definite
- Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
 
 - Biconjugate gradient method (BiCG)
 - Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
 - Stone's method (SIP, Strongly Implicit Procedure) — uses an incomplete LU decomposition
 - Kaczmarz method
 - Preconditioner
- Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
 - Incomplete LU factorization — sparse approximation to the LU factorization
 
 
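A minimal Python sketch of the Jacobi method (illustrative only; function name and sweep count are choices for this example, and the matrix is assumed diagonally dominant so the iteration converges):

    def jacobi(A, b, sweeps=50):
        """Each sweep solves equation i for x[i], using the previous
        sweep's values for all the other unknowns."""
        n = len(b)
        x = [0.0] * n
        for _ in range(sweeps):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

    A = [[4.0, 1.0], [2.0, 5.0]]  # diagonally dominant
    b = [9.0, 13.0]
    print(jacobi(A, b))  # ~[1.778, 1.889]
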
 - Underdetermined and overdetermined systems (systems that have no or more than one solution):
- Numerical computation of null space — find all solutions of an underdetermined system
 - Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual
 - Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
 
 
Eigenvalue algorithms
Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
- Power iteration (a minimal Python sketch follows this list)
 - Inverse iteration
 - Rayleigh quotient iteration
 - Arnoldi iteration — based on Krylov subspaces
 - Lanczos algorithm — Arnoldi, specialized for symmetric (Hermitian) matrices
 - QR algorithm
 - Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
- Jacobi rotation — the building block, almost a Givens rotation
 
 - Divide-and-conquer eigenvalue algorithm
 - Folded spectrum method
 
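A minimal Python sketch of power iteration (illustrative only; assumes the dominant eigenvalue is simple and well separated, and the step count is arbitrary):

    def power_iteration(A, steps=100):
        """Repeatedly apply A and renormalize; the iterate aligns with the
        eigenvector of the eigenvalue of largest magnitude."""
        n = len(A)
        v = [1.0] * n
        for _ in range(steps):
            w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
            scale = max(abs(c) for c in w)
            v = [c / scale for c in w]
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Rayleigh quotient as the eigenvalue estimate
        return sum(a * c for a, c in zip(Av, v)) / sum(c * c for c in v), v

    lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
    print(lam)  # ~3.0, the dominant eigenvalue
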
Other concepts and algorithms
- Orthogonalization algorithms:
- Gram–Schmidt process
 - Householder transformation — orthogonalization using reflections
 - Givens rotation — orthogonalization using plane rotations
 
 - Krylov subspace
 - Block matrix pseudoinverse
 - Bidiagonalization
 - In-place matrix transposition — computing the transpose of a matrix without using much additional storage
 - Pivot element — entry in a matrix on which the algorithm concentrates
 - Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
 
Interpolation
- Nearest-neighbor interpolation — takes the value of the nearest neighbor
 
Polynomial interpolation
Polynomial interpolation — interpolation by polynomials
- Linear interpolation
 - Runge's phenomenon
 - Vandermonde matrix
 - Chebyshev polynomials
 - Chebyshev nodes
 - Lebesgue constant (interpolation)
 - Different forms for the interpolant:
- Newton polynomial
- Divided differences
 - Neville's algorithm — for evaluating the interpolant; based on the Newton form (a minimal Python sketch follows this list)
 
 - Lagrange polynomial
 - Bernstein polynomial — especially useful for approximation
 
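A minimal Python sketch of Neville's algorithm (illustrative only; it evaluates the interpolant at one point without constructing its coefficients, and the function name is invented for this example):

    def neville(xs, ys, x):
        """Combine interpolants on overlapping subsets of the points until
        only the value of the full interpolant at x remains."""
        p = list(ys)
        n = len(xs)
        for k in range(1, n):
            for i in range(n - k):
                p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                       / (xs[i] - xs[i + k])
        return p[0]

    # The quadratic through (0,1), (1,2), (2,5) is x^2 + 1
    print(neville([0.0, 1.0, 2.0], [1.0, 2.0, 5.0], 3.0))  # 10.0
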
 - Extensions to multiple dimensions:
- Bilinear interpolation
 - Trilinear interpolation
 - Bicubic interpolation
 - Tricubic interpolation
 - Padua points — set of points in R² with unique polynomial interpolant and minimal growth of the Lebesgue constant
 
 - Hermite interpolation
 - Birkhoff interpolation
 
Spline interpolation
Spline interpolation — interpolation by piecewise polynomials
- Spline (mathematics) — the piecewise polynomials used as interpolants
 - Perfect spline — polynomial spline of degree m whose mth derivative is ±1
 - Cubic Hermite spline
 - Monotone cubic interpolation
 - Hermite spline
 - Cardinal spline
 - Bézier spline
 - Smoothing spline
- Generalizations to more dimensions:
- Bézier triangle — maps a triangle to R³
 - Bézier surface — maps a square to R³
 
 
 - B-spline
- Truncated power function
 - De Boor's algorithm — generalizes De Casteljau's algorithm
 
 - Nonuniform rational B-spline (NURBS)
 - Kochanek–Bartels spline
 - Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline
 - See also: List of numerical computational geometry topics
 
Trigonometric interpolation
Trigonometric interpolation — interpolation by trigonometric polynomials
- Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points
 - Fast Fourier transform — a fast method for computing the discrete Fourier transform
- Bluestein's FFT algorithm
 - Bruun's FFT algorithm
 - Cooley–Tukey FFT algorithm (a minimal Python sketch follows this list)
 - Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4
 - Goertzel algorithm
 - Prime-factor FFT algorithm
 - Rader's FFT algorithm
 - Butterfly diagram
 - Twiddle factor — the trigonometric constant coefficients that are multiplied by the data
 - Methods for computing discrete convolutions with finite impulse response filters using the FFT:
- Overlap–add method
 - Overlap–save method
 
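A minimal recursive radix-2 Cooley–Tukey FFT in Python (illustrative only; the input length must be a power of two, and production code would use an in-place iterative form):

    import cmath

    def fft(a):
        """Split into even- and odd-indexed halves, transform each
        recursively, then combine with twiddle factors (butterflies)."""
        n = len(a)
        if n == 1:
            return list(a)
        even, odd = fft(a[0::2]), fft(a[1::2])
        tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return ([even[k] + tw[k] for k in range(n // 2)] +
                [even[k] - tw[k] for k in range(n // 2)])

    print([round(abs(c), 3) for c in fft([1, 1, 1, 1, 0, 0, 0, 0])])
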
 - Sigma approximation
 - Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant
 - Gibbs phenomenon
 
Other interpolants
- Simple rational approximation
- Polynomial and rational function modeling — comparison of polynomial and rational interpolation
 
 - Wavelet
 - Inverse distance weighting
- Cascade algorithm — iterative algorithm to compute wavelets
 
 - Radial basis function (RBF) — a function of the form f(x) = φ(|x − x₀|)
- Polyharmonic spline — a commonly used radial basis function
 - Thin plate spline — a specific polyharmonic spline: r² log r
 - Radial basis function network — neural network using radial basis functions as activation functions
 - Hierarchical RBF
 
 - Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant
 - Slerp
 - Irrational base discrete weighted transform
 - Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound
- Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
 
 - Pareto interpolation
 - Multivariate interpolation — the function being interpolated depends on more than one variable
- Barnes interpolation — method for two-dimensional functions using Gaussians; common in meteorology
 - Lanczos resampling — based on convolution with a sinc function
 - Natural neighbor interpolation
 - PDE surface
 - Methods based on polynomials are listed under Polynomial interpolation
 
 
Approximation theory
- Orders of approximation
 - Lebesgue's lemma
 - Curve fitting
 - Modulus of continuity — measures smoothness of a function
 - Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)
 - Approximation by polynomials:
- Linear approximation
 - Bernstein polynomial — basis of polynomials useful for approximating a function
 - Remez algorithm — for constructing the best polynomial approximation in the L∞-norm
 - Bramble-Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions
 - Bernstein's constant — error when approximating |x| by a polynomial
 
 - Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
 - Jackson's inequality — upper bound for best approximation by a trigonometric polynomial
 - Different approximations:
- Moving least squares
 - Padé approximant
- Padé table — table of Padé approximants
 
 - Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval
 - Szász–Mirakyan–Kantorovich operator
 - Baskakov operator — generalizes Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
 - Favard operator — approximation by sums of Gaussians
 
 
Miscellaneous
- Extrapolation
- Linear predictive analysis — linear extrapolation
 
 - Unisolvent functions — functions for which the interpolation problem has a unique solution
 - Regression analysis
 - Curve-fitting compaction
 - Interpolation (computer programming) — interpolation in the context of computer graphics
 
Finding roots of nonlinear equations
- See #Numerical linear algebra for linear equations
 
Root-finding algorithm — algorithms for solving the equation f(x) = 0
- General methods:
- Bisection method — simple and robust; linear convergence
- Lehmer-Schur algorithm — variant for complex functions
 
 - Fixed point iteration
 - Newton's method — based on linear approximation around the current iterate; quadratic convergence (a minimal Python sketch follows this list)
- Newton fractal
 - Quasi-Newton method — uses an approximation of the Jacobian:
- Broyden's method — uses a rank-one update for the Jacobian
 - SR1 formula — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
 - Davidon-Fletcher-Powell formula — update of the Jacobian in which the matrix remains positive definite
 - BFGS method — rank-two update of the Jacobian in which the matrix remains positive definite
 - L-BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems
 
 - Steffensen's method — uses divided differences instead of the derivative
 
 - Secant method — based on linear interpolation at last two iterates
 - False position method — secant method with ideas from the bisection method
 - Müller's method — based on quadratic interpolation at last three iterates
 - Inverse quadratic interpolation — similar to Müller's method, but interpolates the inverse
 - Brent's method — combines bisection method, secant method and inverse quadratic interpolation
 - Ridders' method — fits a linear function times an exponential to last two iterates and their midpoint
 - Halley's method — uses f, f' and f''; achieves cubic convergence
 - Householder's method — uses first d derivatives to achieve order d + 1; generalizes Newton's and Halley's method
 
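A minimal Python sketch of Newton's method (illustrative only; the stopping rule and iteration cap are arbitrary choices, and a simple root with a good initial guess is assumed):

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Solve f(x) = 0 by repeatedly linearizing f at the current iterate;
        quadratic convergence near a simple root."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("no convergence; try a better starting point")

    # Newton's own example: x^3 - 2x - 5 = 0, root near 2
    print(newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))  # 2.09455148...
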
 - Methods for polynomials:
- Aberth method
 - Bairstow's method
 - Durand-Kerner method
 - Graeffe's method
 - Jenkins-Traub algorithm — fast, reliable, and widely used
 - Laguerre's method
 - Splitting circle method
 
 - Analysis:
 - Numerical continuation — tracking a root as one parameter in the equation changes
 
Optimization
Optimization (mathematics) — algorithms for finding maxima or minima of a given function
Basic concepts
- Active set
 - Candidate solution
 - Constraint (mathematics)
 - Corner solution
 - Fitness function — (esp. in genetic algorithms) an approximation to the objective function that is easier to evaluate
 - Global optimum and Local optimum
 - Maxima and minima
 - Slack variable
 - Surplus variable
 - Continuous optimization
 - Discrete optimization
 
Linear programming
Linear programming (also treats integer programming) — objective function and constraints are linear
- Algorithms for linear programming:
- Simplex algorithm
- Bland's rule — rule to avoid cycling in the simplex method
 
 - Interior point method
 - Delayed column generation
 - k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)
 
 - Linear complementarity problem
 - Decompositions:
- Benders' decomposition
 - Dantzig–Wolfe decomposition
 
 - Fourier–Motzkin elimination
 - Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone
 
Nonlinear programming
Nonlinear programming — the most general optimization problem in the usual framework
- Special cases of nonlinear programming:
- Quadratic programming
- Linear least squares
 - Frank–Wolfe algorithm
 - Sequential Minimal Optimization — breaks up large QP problems into a series of smallest possible QP problems
 - Bilinear program
 
 - Convex optimization
- Linear matrix inequality
 - Conic optimization
- Semidefinite programming
 - Second-order cone programming
 - Quadratic programming (see above)
 
 - Subgradient method — extension of steepest descent for problems with a nondifferentiable objective function
 
 - Geometric programming — problems involving signomials or posynomials
- Signomial — similar to polynomials, but exponents need not be integers
 - Posynomial — a signomial with positive coefficients
 
 - Quadratically constrained quadratic program
 - Least squares — the objective function is a sum of squares
- Non-linear least squares
 - Gauss–Newton algorithm
- Generalized Gauss–Newton method — for constrained nonlinear least-squares problems
 
 - Levenberg–Marquardt algorithm
 
 - Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities
 - Univariate optimization:
- Golden section search (a minimal Python sketch follows this list)
 - Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
 
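A minimal Python sketch of golden section search (illustrative only; for clarity it re-evaluates f at both interior points each step, where a tuned version would carry one of the two function values over):

    def golden_section(f, a, b, tol=1e-8):
        """Shrink [a, b] around the minimum of a unimodal f; golden-ratio
        spacing keeps the interior points consistent between steps."""
        inv_phi = (5 ** 0.5 - 1) / 2  # 1/phi = 0.618...
        c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - inv_phi * (b - a)
            else:
                a, c = c, d
                d = a + inv_phi * (b - a)
        return (a + b) / 2

    print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0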
 
 - General algorithms:
- Concepts:
- Descent direction
 - Guess value — the initial guess for a solution with which an algorithm starts
 - Line search
 
 - Gradient descent
 - Successive linear programming (SLP) — replace problem by a linear programming problem, solve that, and repeat
 - Newton's method in optimization
- See also under Newton algorithm in the section Finding roots of nonlinear equations
 
 - Nonlinear conjugate gradient method
 - Nelder-Mead method
 - Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence
 - Ternary search
 - Tabu search
 - Guided Local Search — modification of search algorithms which builds up penalties during a search
 - Reactive search optimization (RSO) — the algorithm adapts its parameters automatically
 - Least absolute deviations
 - Nearest neighbor search
 
 
Uncertainty and randomness
- Approaches to deal with uncertainty:
 - Random optimization algorithms:
- Simulated annealing (a minimal Python sketch follows this list)
- Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation
 - Great Deluge algorithm
 
 - Evolutionary algorithm, Evolution strategy
 - Memetic algorithm
 - Particle swarm optimization
 - Cooperative optimization
 - Stochastic tunneling
 - Harmony search — mimics the improvisation process of musicians
 - See also the section Monte Carlo method
 
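A minimal Python sketch of simulated annealing on a one-dimensional multimodal function (illustrative only; the cooling schedule, proposal width, and step count are arbitrary choices for this example):

    import math, random

    def anneal(f, x0, steps=20000):
        """Accept any improvement; accept a worsening of size d with
        probability exp(-d / T), with temperature T decreasing over time."""
        x, fx = x0, f(x0)
        best, fbest = x, fx
        for k in range(1, steps + 1):
            temp = 1.0 / k
            y = x + random.gauss(0.0, 0.5)
            fy = f(y)
            if fy < fx or random.random() < math.exp((fx - fy) / temp):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        return best

    # Global minimum at x = 0, surrounded by many local minima
    print(anneal(lambda x: x * x + 10.0 * (1.0 - math.cos(x)), 8.0))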
 
Theoretical aspects
- Convex analysis
 - Duality:
- Dual problem, Shadow price
 - Dual cone and polar cone
 - Fenchel's duality theorem — relates minimization problems with maximization problems of convex conjugates
 
 - Farkas' lemma
 - Karush–Kuhn–Tucker conditions
 - Lagrange multipliers
 - Complementarity theory — study of problems with constraints of the form 〈u, v〉 = 0
 - Danskin's theorem — used in the analysis of minimax problems
 - No free lunch in search and optimization
 - Relaxation technique (mathematics)
- Lagrangian relaxation
 - Linear programming relaxation — ignoring the integrality constraints in a linear programming problem
 
 - Self-concordant function
 - Reduced cost — cost for increasing a variable by a small amount
 - Hardness of approximation — computational complexity of getting an approximate solution
 
Applications
- In geometry:
- Geometric median — the point minimizing the sum of distances to a given set of points
 - Chebyshev center — the centre of the smallest ball containing a given set of points
 
 - Automatic label placement
 - Compressed sensing — reconstruct a signal from knowledge that it is sparse or compressible
 - Cutting stock problem
 - Demand optimization
 - Destination dispatch — an optimization technique for dispatching elevators
 - Energy minimization
 - Entropy maximization
 - Inventory control problem
 - Job-shop problem
 - Linear programming decoding
 - Multidisciplinary design optimization
 - Paper bag problem
 - Process optimization
 - Stigler diet
 - Stress majorization
 - Trajectory optimization
 - Wing shape optimization
 
Miscellaneous
- Combinatorial optimization
 - Dynamic programming
- Bellman equation
 - Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation
 - Backward induction — solving dynamic programming problems by reasoning backwards in time
 - Optimal stopping — choosing the optimal time to take a particular action
 
 - Global optimization:
 - Bilevel program — problem in which one problem is embedded in another
 - Multilevel programming — problem in which a series of problems is embedded in each other
 - Infinite-dimensional optimization
- Optimal control
- Pontryagin's minimum principle — infinite-dimensional version of Lagrange multipliers
 - DNSS point — initial state for certain optimal control problems with multiple optimal solutions
 
 - Shape optimization, Topology optimization — optimization over a set of regions
- Topological derivative — derivative with respect to changes in the shape
 
 
 - Optimal substructure
 - Algorithmic concepts:
 - Famous test functions for optimization:
- Rosenbrock function — two-dimensional function with a banana-shaped valley
 - Himmelblau's function — two-dimensional with four local minima, defined by f(x, y) = (x² + y − 11)² + (x + y² − 7)²
 - Shekel function — multimodal and multidimensional
 
 - Mathematical Programming Society
 
Numerical quadrature (integration)
Numerical integration — the numerical evaluation of an integral
- Rectangle method — first-order method, based on (piecewise) constant approximation
 - Trapezoidal rule — second-order method, based on (piecewise) linear approximation (sketched in Python after this list, together with Simpson's rule)
 - Simpson's rule — fourth-order method, based on (piecewise) quadratic approximation
 - Newton–Cotes formulas
 - Romberg's method — Richardson extrapolation applied to the trapezoidal rule
 - Gaussian quadrature — highest possible degree with given number of points
- Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − x²)^(±1/2) on [−1, 1]
 - Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−x²) on (−∞, ∞)
 - Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − x)^α (1 + x)^β on [−1, 1]
 - Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−x) on [0, ∞)
 - Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature
 - Gauss-Kronrod rules
 
 - Tanh-sinh quadrature — based on the tanh-sinh (double exponential) change of variables; works well with singularities at the end points
 - Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials
 - Adaptive quadrature — adapting the subintervals in which the integration interval is divided depending on the integrand
 - Monte Carlo integration — takes random samples of the integrand
- See also #Monte Carlo method
 
 - T-integration — a non-standard method
 - Lebedev quadrature — uses a grid on a sphere with octahedral symmetry
 - Sparse grid
 - Numerical differentiation
 - Euler–Maclaurin formula
 
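Minimal Python sketches of the composite trapezoidal and Simpson rules (illustrative only; n is the number of equal subintervals, and n must be even for Simpson's rule):

    def trapezoid(f, a, b, n):
        """Piecewise linear approximation; error is O(h^2)."""
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    def simpson(f, a, b, n):
        """Piecewise quadratic approximation; error is O(h^4); n must be even."""
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
        return s * h / 3

    from math import sin, pi
    print(trapezoid(sin, 0.0, pi, 64))  # 1.9996..., second order
    print(simpson(sin, 0.0, pi, 64))    # 2.0000000..., fourth order
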
Numerical ordinary differential equations
Numerical ordinary differential equations — the numerical solution of ordinary differential equations (ODEs)
- Euler method — the most basic method for solving an ODE
 - Explicit and implicit methods — implicit methods need to solve an equation at every step
 - Runge–Kutta methods — one of the two main classes of methods for initial-value problems (the classical fourth-order method is sketched after this list)
- Midpoint method — a second-order method with two stages
 - Heun's method — either a second-order method with two stages, or a third-order method with three stages
 - Bogacki–Shampine method — a third-order method with four stages (FSAL) and an embedded fourth-order method
 - Cash–Karp method — a fifth-order method with six stages and an embedded fourth-order method
 - Dormand–Prince method — a fifth-order method with seven stages (FSAL) and an embedded fourth-order method
 - Runge–Kutta–Fehlberg method — a fifth-order method with six stages and an embedded fourth-order method
 - List of Runge–Kutta methods
 
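A minimal Python sketch of the classical fourth-order Runge–Kutta method (illustrative only; fixed step size, no error control, and the function name is invented for this example):

    def rk4(f, t0, y0, t_end, n):
        """Advance y' = f(t, y) from t0 to t_end in n steps of size h,
        combining four slope evaluations per step."""
        h = (t_end - t0) / n
        t, y = t0, y0
        for _ in range(n):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return y

    # y' = y, y(0) = 1: the exact answer at t = 1 is e
    print(rk4(lambda t, y: y, 0.0, 1.0, 1.0, 100))  # 2.718281828...
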
 - Linear multistep method — the other main class of methods for initial-value problems
- Backward differentiation formula — implicit methods of order 2 to 6; especially suitable for stiff equations
 - Numerov's method — fourth-order method for equations of the form y'' = f(t,y)
 
 - Bulirsch–Stoer algorithm — combines the midpoint method with Richardson extrapolation to attain arbitrary order
 - Methods designed for the solution of ODEs from classical physics:
- Newmark-beta method — based on the extended mean-value theorem
 - Verlet integration — a popular second-order method
 - Leapfrog integration — another name for Verlet integration
 - Beeman's algorithm — a two-step method extending the Verlet method
 - Dynamic relaxation
 
 - Geometric integrator — a method that preserves some geometric structure of the equation
- Symplectic integrator — a method for the solution of Hamilton's equations that preserves the symplectic structure
- Variational integrator — symplectic integrators derived using the underlying variational principle
 - Semi-implicit Euler method — variant of Euler method which is symplectic when applied to separable Hamiltonians
 
 
 - Adaptive stepsize — automatically changing the step size when that seems advantageous
 - Stiff equation — roughly, an ODE for which unstable methods need a very short step size, but stable methods do not
 - Methods for solving two-point boundary value problems (BVPs):
- Shooting method
 - Direct multiple shooting method — divides the interval into several subintervals and applies the shooting method to each
 
 - Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints:
- Constraint algorithm — for solving Newton's equations with constraints
 
 - Methods for solving stochastic differential equations (SDEs):
- Euler-Maruyama method — generalization of the Euler method for SDEs
 - Milstein method — a method with strong order one
 - Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs
 
 - Methods for solving integral equations:
- Nyström method — replaces the integral with a quadrature rule
 
 - Bi-directional delay line
 - Partial Element Equivalent Circuit (PEEC)
 - History of numerical solution of differential equations using computers
 
Numerical partial differential equations
Numerical partial differential equations — the numerical solution of partial differential equations (PDEs)
Finite difference methods
Finite difference method — based on approximating differential operators with difference operators
- Finite difference — the discrete analogue of a differential operator
- Difference operator — the numerator of a finite difference
 - Discrete Laplace operator — finite-difference approximation of the Laplace operator
 - Discrete Poisson equation — discrete analogue of the Poisson equation using the discrete Laplace operator
 
 - Stencil (numerical analysis) — the geometric arrangements of grid points affected by a basic step of the algorithm
- Compact stencil — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours
 - Non-compact stencil — any stencil that is not compact
 - Five-point stencil — two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid
 - Stencil codes — computer codes that update array elements according to some fixed pattern, called stencils
 
 - Finite difference methods for heat equation and related PDEs:
- FTCS scheme (forward-time central-space) — first-order explicit (a minimal Python sketch follows this list)
 - Crank–Nicolson method — second-order implicit
 
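A minimal FTCS sketch in Python for the heat equation u_t = u_xx on [0, 1] with homogeneous Dirichlet boundaries (illustrative only; grid size and step count are arbitrary, and the explicit scheme is stable only for dt ≤ dx²/2):

    import math

    nx = 51
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx * dx  # within the stability bound dt <= dx^2 / 2
    u = [math.sin(math.pi * i * dx) for i in range(nx)]  # initial data

    for _ in range(200):
        u = ([0.0] +
             [u[i] + dt / dx**2 * (u[i-1] - 2 * u[i] + u[i+1])
              for i in range(1, nx - 1)] +
             [0.0])

    # Exact solution: exp(-pi^2 t) sin(pi x); compare at the midpoint
    t = 200 * dt
    print(u[nx // 2], math.exp(-math.pi ** 2 * t))
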
 - Finite difference methods for hyperbolic PDEs like the wave equation:
- Lax–Friedrichs method — first-order explicit
 - Lax–Wendroff method — second-order explicit
 - MacCormack method — second-order explicit
 - Upwind scheme
 
 - Alternating direction implicit method — update using the flow in x-direction and then using flow in y-direction
 - Finite-difference time-domain method — a finite-difference method for electrodynamics
 
Finite element methods
Finite element method — based on a discretization of the space of solutions
- Finite element method in structural mechanics — a physical approach to finite element methods
 - Engineering treatment of the finite element method — an explanation of finite elements geared towards engineers
 - Galerkin method — a finite element method in which the residual is orthogonal to the finite element space
- Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous
 
 - Rayleigh-Ritz method — a finite element method based on variational principles
 - Spectral element method — high-order finite element methods
 - hp-FEM — variant in which both the size and the order of the elements are automatically adapted
 - Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis
 - Trefftz method
 - Finite element updating
 - Extended finite element method — puts functions tailored to the problem in the approximation space
 - Functionally graded elements — elements for describing functionally graded materials
 - Interval finite element method — combination of finite elements with interval arithmetic
 - Discrete exterior calculus — discrete form of the exterior calculus of differential geometry
 - Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations
 - Céa's lemma — the solution in the finite element space is an almost-best approximation of the true solution in that space
 - Patch test (finite elements) — simple test for the quality of a finite element
 - Multiple-point constraint (MPC)
 - List of finite element software packages
 - NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis
 - Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture
 
Other methods
- Spectral method — based on the Fourier transformation
 - Method of lines — reduces the PDE to a large system of ordinary differential equations
 - Boundary element method — based on transforming the PDE to an integral equation on the boundary of the domain
- Interval boundary element method — a version using interval arithmetic
 
 - Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically
 - Finite volume method — based on dividing the domain into many small subdomains; popular in computational fluid dynamics
- Godunov's scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation
 - MUSCL scheme — second-order variant of Godunov's scheme
 - AUSM — advection upstream splitting method
 - Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations
 - Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data)
 
 - Discrete element method — a method in which the elements can move freely relative to each other
 - Meshfree methods — do not use a mesh, but use a particle view of the field
 - Methods designed for problems from electromagnetics:
- Finite-difference time-domain method — a finite-difference method
 - Transmission line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines
 - Uniform theory of diffraction — specifically designed for scattering problems
 
 - Particle-in-cell — used especially in fluid dynamics
 - High-resolution scheme
 - Shock capturing methods
 - Split-step method
 - Fast marching method
 - Lattice Boltzmann methods — for the solution of the Navier–Stokes equations
 - Roe solver — for the solution of the Euler equations
 - Relaxation method — a method for solving elliptic PDEs by converting them to evolution equations
 - Broad classes of methods:
- Mimetic methods — methods that respect in some sense the structure of the original problem
 - Multiphysics — models consisting of various submodels with different physics
 - Immersed boundary method — for simulating elastic structures immersed within fluids
 
 
Techniques for improving these methods
- Multigrid method — uses a hierarchy of nested meshes to speed up the methods
 - Domain decomposition methods — divides the domain into a few subdomains and solves the PDE on these subdomains
- Additive Schwarz method
 - Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information
 - Balancing domain decomposition (BDD) — preconditioner for symmetric positive definite matrices
 - Balancing domain decomposition by constraints (BDDC) — further development of BDD
 - Finite element tearing and interconnect (FETI)
 - FETI-DP — further development of FETI
 - Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape
 - Mortar methods — meshes on the subdomains do not need to match
 - Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain
 - Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains
 - Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current
 - Schur complement method — early and basic method on subdomains that do not overlap
 - Schwarz alternating method — early and basic method on subdomains that overlap
 
 - Coarse space — variant of the problem which uses a discretization with fewer degrees of freedom
 - Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary
 - Fast multipole method — hierarchical method for evaluating particle-particle interactions
 
Miscellaneous
- Analysis:
- Lax equivalence theorem — a consistent method is convergent if and only if it is stable
 - Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs
 - Von Neumann stability analysis — all Fourier components of the error should be stable
 - Numerical diffusion — diffusion introduced by the numerical method, beyond that which is naturally present
 - Numerical resistivity — the same, with resistivity instead of diffusion
 - Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods
 - Total variation diminishing — property of schemes that do not introduce spurious oscillations
 - Godunov's theorem — linear monotone schemes can only be of first order
 
 - Grids and meshes:
- Mesh generation
 - Geodesic grid — isotropic grid on a sphere
 - Spatial twist continuum — dual representation of a mesh consisting of hexahedra
 
 - Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions
 
Monte Carlo method
- Variants of the Monte Carlo method:
- Direct simulation Monte Carlo
 - Quasi-Monte Carlo method
 - Markov chain Monte Carlo
- Metropolis–Hastings algorithm (a minimal Python sketch follows this list)
- Multiple-try Metropolis — modification which allows larger step sizes
 - Wang and Landau algorithm — extension of Metropolis Monte Carlo
 - Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm
 
 - Gibbs sampling
 - Coupling from the past
 
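A minimal random-walk Metropolis sketch in Python (illustrative only; with a symmetric Gaussian proposal the Hastings correction factor is 1, and the target density may be unnormalized):

    import math, random

    def metropolis(log_density, x0, steps, scale=1.0):
        """Propose y ~ N(x, scale); accept with probability
        min(1, p(y) / p(x)), computed in log space."""
        x, log_p = x0, log_density(x0)
        samples = []
        for _ in range(steps):
            y = x + random.gauss(0.0, scale)
            log_q = log_density(y)
            if log_q >= log_p or random.random() < math.exp(log_q - log_p):
                x, log_p = y, log_q
            samples.append(x)
        return samples

    # Standard normal target, normalizing constant omitted on purpose
    s = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
    print(sum(s) / len(s))                 # ~0 (mean)
    print(sum(v * v for v in s) / len(s))  # ~1 (variance)
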
 - Dynamic Monte Carlo method
 - Particle filter
 - Reverse Monte Carlo
 - Demon algorithm
 
 - Sampling methods:
- Inverse transform sampling — general and straightforward method but computationally expensive (sketched in Python after this list)
 - Rejection sampling — sample from a simpler distribution but reject some of the samples
- Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments
 
 - For sampling from a normal distribution:
- Box–Muller transform — converts two uniform samples into two independent normal samples
 - Marsaglia polar method — variant of the Box–Muller transform that avoids trigonometric functions
 
 - Convolution random number generator — generates a random variable as a sum of other random variables
 
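Minimal Python sketches of inverse transform sampling (for an exponential distribution) and the Box–Muller transform (illustrative only; the function names are invented for this example):

    import math, random

    def sample_exponential(rate):
        """Inverse transform sampling: invert the CDF F(x) = 1 - exp(-rate*x)."""
        return -math.log(1.0 - random.random()) / rate

    def box_muller():
        """Map two independent uniforms to two independent standard normals."""
        u1, u2 = random.random(), random.random()
        r = math.sqrt(-2.0 * math.log(1.0 - u1))
        return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

    xs = [sample_exponential(2.0) for _ in range(100000)]
    print(sum(xs) / len(xs))  # ~0.5 = 1/rate
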
 - Low-discrepancy sequence
 - Event generator
 - Parallel tempering
 - Umbrella sampling — improves sampling in physical systems with significant energy barriers
 - Variance reduction techniques:
- Antithetic variates
 - Control variates
 - Importance sampling
 - Stratified sampling
 
 - Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables
 - Applications:
- Bond fluctuation model — for simulating the conformation and dynamics of polymer systems
 - Metropolis light transport
 - Monte Carlo method for photon transport
 - Monte Carlo methods in finance
 - Monte Carlo molecular modeling
 - Monte Carlo project — research project modelling human exposure to food chemicals and nutrients
 - Quantum Monte Carlo
- Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation
 - Gaussian quantum Monte Carlo
 - Path integral Monte Carlo
 - Reptation Monte Carlo
 - Variational Monte Carlo
 
 - Methods for simulating the Ising model:
- Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters
 - Wolff algorithm — improvement of the Swendsen–Wang algorithm
 
 - Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems
 - Cross-entropy method — for multi-extremal optimization and importance sampling
 
 - Also see the list of statistics topics
 
Applications
- Computational physics
- Computational electromagnetics
 - Computational fluid dynamics (CFD)
- Large eddy simulation
 - Smoothed particle hydrodynamics
 - Acoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types
 
 - Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids
 - Climate model
 - Numerical weather prediction
 - Celestial mechanics
 
 - Computational chemistry
 - Computational statistics
 
Software
For software, see the list of numerical analysis software.

