Cramer's rule

Cramer's rule is a theorem in linear algebra that gives the solution of a system of linear equations in terms of determinants, provided the system has a unique solution. It is named after Gabriel Cramer (1704–1752).

Computationally, it is inefficient for large matrices and thus not used in practical applications that may involve many equations. However, since no pivoting is needed, it is more efficient than Gaussian elimination for small matrices, particularly when SIMD operations are used.

Cramer's rule is of theoretical importance in that it gives an explicit expression for the solution of the system.

Elementary formulation

The system of n linear equations in n unknowns is represented in matrix multiplication form as

A \mathbf{x} = \mathbf{b},

where the square matrix A is invertible and \mathbf{x} = (x_1, \ldots, x_n)^{\mathsf{T}} is the column vector of the variables.

The theorem then states that

x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \ldots, n, \qquad (1)

where A_i is the matrix formed by replacing the i-th column of A by the column vector \mathbf{b}. For brevity, \det(A) is sometimes denoted by a single symbol such as D, and \det(A_i) by D_{x_i}, so that Equation (1) can be compactly written as x_i = D_{x_i}/D.
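
As a quick illustration of Equation (1), here is a minimal Python sketch (assuming NumPy is available; the function name cramer_solve and the sample 2×2 system are illustrative choices) that replaces each column of A by b and takes the ratio of determinants:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b via Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) is zero; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                      # replace the i-th column by b
        x[i] = np.linalg.det(A_i) / det_A  # Equation (1)
    return x

# Illustrative system: 2x + y = 5, x - y = 1, whose solution is x = 2, y = 1.
print(cramer_solve([[2, 1], [1, -1]], [5, 1]))  # -> [2. 1.]
```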

Abstract formulation

Let R be a commutative ring and A an n×n matrix with coefficients in R. Then

\mathrm{Adj}(A)\, A = A\, \mathrm{Adj}(A) = \det(A)\, I,

where Adj(A) denotes the adjugate of A, det(A) is the determinant, and I is the identity matrix.
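
This identity is easy to check symbolically; the following sketch (assuming SymPy is available, with a generic 2×2 symbolic matrix chosen purely for illustration) verifies it over the polynomial ring in the entries:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# Adj(A) * A should equal det(A) * I; here the coefficients live in the
# polynomial ring Z[a, b, c, d], a commutative ring.
lhs = A.adjugate() * A
rhs = A.det() * sp.eye(2)
print(sp.simplify(lhs - rhs))  # -> Matrix([[0, 0], [0, 0]])
```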

Example

A good way to use Cramer's rule on a 2×2 system is with the following formulas. Given

a x + b y = e

and

c x + d y = f,

which in matrix format is

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} e \\ f \end{pmatrix},

x and y can be found with Cramer's rule as

x = \frac{\begin{vmatrix} e & b \\ f & d \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{ed - bf}{ad - bc}

and

y = \frac{\begin{vmatrix} a & e \\ c & f \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{af - ec}{ad - bc}.
The rules for 3×3 systems are similar. Given

a x + b y + c z = j,

d x + e y + f z = k,

and

g x + h y + i z = l,

which in matrix format is

\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} j \\ k \\ l \end{pmatrix},

x, y and z can be found like so:

x = \frac{\begin{vmatrix} j & b & c \\ k & e & f \\ l & h & i \end{vmatrix}}{\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}}, \qquad
y = \frac{\begin{vmatrix} a & j & c \\ d & k & f \\ g & l & i \end{vmatrix}}{\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}}, \qquad \text{and} \qquad
z = \frac{\begin{vmatrix} a & b & j \\ d & e & k \\ g & h & l \end{vmatrix}}{\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}}.
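
To make the 3×3 formulas concrete, here is a short Python sketch that evaluates them directly on an assumed numeric system (the coefficient values are illustrative only):

```python
# Coefficients of an assumed 3x3 system:
#   2x + 1y - 1z =  8
#  -3x - 1y + 2z = -11
#  -2x + 1y + 2z = -3
a, b, c, j = 2, 1, -1, 8
d, e, f, k = -3, -1, 2, -11
g, h, i, l = -2, 1, 2, -3

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows (cofactor expansion)."""
    (p, q, r), (s, t, u), (v, w, x) = m
    return p * (t * x - u * w) - q * (s * x - u * v) + r * (s * w - t * v)

D = det3([[a, b, c], [d, e, f], [g, h, i]])       # common denominator det
x = det3([[j, b, c], [k, e, f], [l, h, i]]) / D   # column 1 replaced by RHS
y = det3([[a, j, c], [d, k, f], [g, l, i]]) / D   # column 2 replaced by RHS
z = det3([[a, b, j], [d, e, k], [g, h, l]]) / D   # column 3 replaced by RHS
print(x, y, z)  # -> 2.0 3.0 -1.0
```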

Applications to differential geometry

Cramer's rule is also extremely useful for solving problems in differential geometry. Consider the two equations F(x, y, u, v) = 0 and G(x, y, u, v) = 0. When u and v are independent variables, we can define x = X(u, v) and y = Y(u, v).

Finding an equation for ∂x/∂u is a trivial application of Cramer's rule.

First, calculate the first derivatives of F, G, x and y:

dF = \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy + \frac{\partial F}{\partial u} du + \frac{\partial F}{\partial v} dv = 0,

dG = \frac{\partial G}{\partial x} dx + \frac{\partial G}{\partial y} dy + \frac{\partial G}{\partial u} du + \frac{\partial G}{\partial v} dv = 0,

dx = \frac{\partial x}{\partial u} du + \frac{\partial x}{\partial v} dv,

dy = \frac{\partial y}{\partial u} du + \frac{\partial y}{\partial v} dv.

Substituting dx, dy into dF and dG, we have:

dF = \left(\frac{\partial F}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial F}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial F}{\partial u}\right) du + \left(\frac{\partial F}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial F}{\partial y}\frac{\partial y}{\partial v} + \frac{\partial F}{\partial v}\right) dv = 0,

dG = \left(\frac{\partial G}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial G}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial G}{\partial u}\right) du + \left(\frac{\partial G}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial G}{\partial y}\frac{\partial y}{\partial v} + \frac{\partial G}{\partial v}\right) dv = 0.

Since u and v are both independent, the coefficients of du and dv must be zero. So we can write out equations for the coefficients:

\frac{\partial F}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial F}{\partial y}\frac{\partial y}{\partial u} = -\frac{\partial F}{\partial u},

\frac{\partial G}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial G}{\partial y}\frac{\partial y}{\partial u} = -\frac{\partial G}{\partial u},

\frac{\partial F}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial F}{\partial y}\frac{\partial y}{\partial v} = -\frac{\partial F}{\partial v},

\frac{\partial G}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial G}{\partial y}\frac{\partial y}{\partial v} = -\frac{\partial G}{\partial v}.

Now, by Cramer's rule, we see that:

\frac{\partial x}{\partial u} = \frac{\begin{vmatrix} -\frac{\partial F}{\partial u} & \frac{\partial F}{\partial y} \\ -\frac{\partial G}{\partial u} & \frac{\partial G}{\partial y} \end{vmatrix}}{\begin{vmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial y} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial y} \end{vmatrix}}.

This is now a formula in terms of two Jacobians:

\frac{\partial x}{\partial u} = -\frac{\dfrac{\partial (F, G)}{\partial (u, y)}}{\dfrac{\partial (F, G)}{\partial (x, y)}}.

Similar formulae can be derived for ∂x/∂v, ∂y/∂u, and ∂y/∂v.
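
The Jacobian form of ∂x/∂u can be checked symbolically. The sketch below (assuming SymPy is available) treats the partial derivatives of F and G as free symbols, solves the coefficient equations for ∂x/∂u with a generic linear solver, and confirms the result matches the ratio-of-Jacobians expression:

```python
import sympy as sp

# Treat the partial derivatives as independent symbols and solve the
# coefficient equations for x_u = ∂x/∂u and y_u = ∂y/∂u.
Fx, Fy, Fu, Gx, Gy, Gu = sp.symbols('F_x F_y F_u G_x G_y G_u')
xu, yu = sp.symbols('x_u y_u')

sol = sp.solve([Fx * xu + Fy * yu + Fu,
                Gx * xu + Gy * yu + Gu], [xu, yu], dict=True)[0]

# Cramer's-rule / Jacobian form of the same quantity:
jac_form = -sp.Matrix([[Fu, Fy], [Gu, Gy]]).det() / sp.Matrix([[Fx, Fy], [Gx, Gy]]).det()

print(sp.simplify(sol[xu] - jac_form))  # -> 0
```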

Applications to algebra

Cramer's rule can be used to prove the Cayley-Hamilton theorem of linear algebra, as well as Nakayama's lemma, which is fundamental in commutative ring theory.

Applications to integer programming

Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side is integral has integer basic solutions: each basic solution has the form B^{-1} b for a nonsingular square submatrix B with det(B) = ±1, so by Cramer's rule every component is an integer determinant divided by ±1, hence an integer. This makes the integer program substantially easier to solve.
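
A small numeric sketch of this argument follows (assuming NumPy; the particular basis B, an interval matrix with consecutive ones, is just one standard example of a totally unimodular structure):

```python
import numpy as np

# A nonsingular square submatrix B of a totally unimodular matrix has det(B) = ±1,
# so by Cramer's rule B^{-1} b has integer entries whenever b is integral.
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
b = np.array([3, 5, 2], dtype=float)

det_B = np.linalg.det(B)                 # ±1 for such a basis
x = np.empty(3)
for i in range(3):
    B_i = B.copy()
    B_i[:, i] = b                        # replace i-th column by the right-hand side
    x[i] = np.linalg.det(B_i) / det_B    # integer determinant / (±1) -> integer
print(round(det_B), x)                   # -> 1 [0. 3. 2.]
```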
