Linear Algebra


Linear algebra (also: vector algebra) is the branch of mathematics that deals with vector spaces and the linear mappings between them. This includes in particular the study of systems of linear equations and of matrices.

Vector spaces and their linear mappings are an important tool in many areas of mathematics. Outside of pure mathematics, there are applications in the natural sciences , computer science and economics (for example in optimization ).

Linear algebra arose from two concrete problems: on the one hand, the solving of systems of linear equations, and on the other, the mathematical description of geometric objects, the so-called analytic geometry (which is why some authors also refer to linear algebra as linear geometry).

History

The beginnings of algebra, and with them the term itself, go back largely to the Persian-Khwarezmian mathematician, astronomer, geographer and polymath al-Khwarizmi, who, as a result of the Islamization of Iran, had to translate his works into Arabic and so arrived at the name "al-jabr", from which the term algebra is derived.

While the development of algebra began as early as ancient Egypt, the development of linear algebra as an independent subfield did not begin until the 17th century with the theory of the determinant. This theory was developed independently by Gottfried Wilhelm Leibniz and Seki Takakazu. In 1750 Gabriel Cramer published the rule named after him, Cramer's rule. With it, a solution formula for many systems of linear equations was available for the first time.

The history of modern linear algebra goes back to the years 1843 and 1844. In 1843 William Rowan Hamilton (from whom the term vector originates) devised an extension of the complex numbers, the quaternions. In 1844 Hermann Graßmann published his book Die lineale Ausdehnungslehre (the theory of linear extension). Arthur Cayley then introduced one of the most fundamental algebraic ideas in 1857 with the matrices.

The concept of a vector space only became generally established from the 20th century onwards. The mathematicians August Ferdinand Möbius, Constantin Carathéodory and Hermann Weyl in particular did the preparatory work. It was found, for example, that linear mappings between finite-dimensional vector spaces can be described by matrices. Building on this insight, Stefan Banach was the first to give an axiomatic definition of real vector spaces.

Systems of linear equations

A system of linear equations is a combination of equations of the type

$a_{1}x_{1} + a_{2}x_{2} + \dots + a_{n}x_{n} = b.$

Such systems of equations can be obtained from many everyday questions, for example:

In what ratio must a 30% solution (say, an amount $x_1$) and a 60% solution (an amount $x_2$) be mixed in order to obtain a 40% solution?

The essential abstraction step of linear algebra is to understand the left-hand side as a function $A$ of the unknowns (here combined into a single vector $x$). Then the solution of the system of equations becomes the task: find an $x$ such that

$A(x) = b$

holds. Writing the unknowns one above the other as a column is merely a formalism for being able to deal with more than one number at the same time.

Instead of $A$, one simply writes down the relevant coefficients in the form of a rectangle and calls this object a matrix.

One sees that the function $A$ has special properties: it is a linear mapping. If $x$ is a solution of the system of equations $A(x) = b$ and $y$ is a solution of the system of equations $A(y) = c$, then $x + y$ is a solution of

$A(x + y) = b + c.$

One can also write this in the form $A(x + y) = A(x) + A(y)$. If, furthermore, $\lambda$ is an arbitrary real number, then $\lambda x$ is a solution of $A(\lambda x) = \lambda b$; here

$A(\lambda x) = \lambda A(x).$
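
A minimal NumPy sketch of this abstraction, with the mixing question above modelled by one possible (illustrative) system $0.3x_1 + 0.6x_2 = 0.4$, $x_1 + x_2 = 1$; the modelling and all values are assumptions for illustration only:

```python
import numpy as np

# Illustrative coefficient matrix and right-hand side for the mixing question.
A = np.array([[0.3, 0.6],
              [1.0, 1.0]], dtype=float)
b = np.array([0.4, 1.0])

# The left-hand side as a function of the vector of unknowns.
def f(x):
    return A @ x

# Solve A x = b.
x = np.linalg.solve(A, b)
print("mixing ratio x =", x)        # [2/3, 1/3]

# Linearity: f(x + y) = f(x) + f(y) and f(lam * x) = lam * f(x).
y = np.array([1.0, -2.0])
lam = 2.5
print(np.allclose(f(x + y), f(x) + f(y)))   # True
print(np.allclose(f(lam * x), lam * f(x)))  # True
```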

Analytical geometry

The other origin of linear algebra lies in the arithmetical description of 2- and 3-dimensional (Euclidean) space, also called "visual space". With the help of a coordinate system, points in space can be described by triples of numbers. The mapping type of translation (displacement) leads to the concept of a vector, which indicates the direction and magnitude of the displacement. Many physical quantities, for example forces, always have this directional aspect.

Since vectors, too, can be described by triples of numbers, the separation between vectors and points becomes blurred: a point corresponds to its position vector, which points from the coordinate origin to the point.

Many of the mapping types considered in classical geometry, for example rotations about axes through the origin or reflections in planes through the origin, belong to the class of linear mappings already mentioned above.
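
As an illustrative sketch (angle and points chosen arbitrarily), a rotation of the plane about the origin is a linear mapping represented by a 2×2 matrix:

```python
import numpy as np

# Rotation about the origin by an angle phi, as a matrix with respect to the standard basis.
phi = np.pi / 4
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

p = np.array([1.0, 0.0])   # a point, identified with its position vector
q = np.array([0.0, 2.0])

# Linearity: rotating a sum equals summing the rotated vectors.
print(np.allclose(R @ (p + q), R @ p + R @ q))              # True
# Rotations preserve lengths (they are orthogonal mappings).
print(np.isclose(np.linalg.norm(R @ p), np.linalg.norm(p))) # True
```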

Vector spaces and linear algebra

The concept of a vector space arises as an abstraction of the above examples: a vector space is a set whose elements are called vectors, together with an addition of vectors and a scalar multiplication (the multiplication of a vector by an element of a fixed field, for example a real number).

This addition and the scalar multiplication have to fulfill some simple properties that also apply to the vectors in the visual space.

One could say that vector spaces are precisely defined in such a way that one can speak of linear mappings between them.

In a certain sense, the concept of a vector space is already too general for linear algebra. A dimension can be assigned to each vector space; for example, the plane has dimension 2 and the visual space has dimension 3. However, there are vector spaces whose dimension is not finite, whereby many of the familiar properties are lost. It has, however, proven very successful to equip infinite-dimensional vector spaces with an additional topological structure; the study of topological vector spaces is the subject of functional analysis.

(The rest of this article deals with the case of finite dimensions.)

Important theorems and results

Every vector space has at least one basis. Any two bases of a vector space have the same number of elements; only for this reason is it meaningful to speak of the dimension of a vector space. For sums and intersections of subspaces the dimension formula $\dim(U + W) + \dim(U \cap W) = \dim U + \dim W$ holds, and for the dimensions of factor spaces the formula $\dim(V/U) = \dim V - \dim U$.

Every linear mapping $f\colon V \to W$ is uniquely determined by specifying the images of a basis of $V$. The homomorphism theorem and the rank–nullity theorem apply to linear mappings. Linear mappings can be represented by matrices with respect to fixed bases. The composition of linear mappings corresponds to the multiplication of their representing matrices.
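
A minimal sketch of this correspondence, with two illustrative matrices chosen arbitrarily:

```python
import numpy as np

# Two linear maps given by matrices with respect to fixed bases (illustrative values).
F = np.array([[1.0, 2.0],
              [0.0, 1.0]])
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

x = np.array([3.0, 4.0])

# Applying G first and then F to x ...
composed = F @ (G @ x)
# ... is the same as applying the single map represented by the matrix product F @ G.
print(np.allclose(composed, (F @ G) @ x))   # True
```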

A linear system of equations $Ax = b$ with $A \in K^{m \times n}$, $x \in K^n$ and $b \in K^m$ is solvable if and only if the rank of the coefficient matrix $A$ is equal to the rank of the augmented coefficient matrix $(A \mid b)$. In this case the solution set of the system is an affine subspace of dimension $n - \operatorname{rank}(A)$. For systems of equations that are not too large, the computation of the rank and of the solution space can be carried out with the Gaussian elimination method.
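
A numerical sketch of this solvability criterion, using an illustrative system with made-up values:

```python
import numpy as np

# Illustrative system A x = b.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 0.0]])
b = np.array([3.0, 2.0])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

if rank_A == rank_Ab:
    # Solvable; the solution set is an affine subspace of dimension n - rank(A).
    n = A.shape[1]
    print("solvable, solution space has dimension", n - rank_A)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular solution
    print("particular solution:", x, "check:", np.allclose(A @ x, b))
else:
    print("no solution: rank(A) != rank(A|b)")
```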

A linear mapping of a finite-dimensional vector space into itself (i.e. an endomorphism) is already invertible if it is injective or surjective. Again, this is the case precisely when its determinant is not equal to zero. It follows from this that the eigenvalues of an endomorphism are exactly the zeros of its characteristic polynomial. Another important statement about the characteristic polynomial is the Cayley–Hamilton theorem.
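
A short numerical sketch of these statements for an arbitrary illustrative 2×2 matrix (the Cayley–Hamilton check uses the explicit 2×2 characteristic polynomial):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Invertible precisely when det(A) != 0.
print("det(A) =", np.linalg.det(A))

# The eigenvalues are exactly the zeros of the characteristic polynomial.
coeffs = np.poly(A)                 # coefficients of det(lambda*I - A)
print(np.allclose(np.sort(np.roots(coeffs)), np.sort(np.linalg.eigvals(A))))

# Cayley-Hamilton: A satisfies its own characteristic polynomial,
# for a 2x2 matrix: A^2 - trace(A)*A + det(A)*I = 0.
I = np.eye(2)
CH = A @ A - np.trace(A) * A + np.linalg.det(A) * I
print(np.allclose(CH, np.zeros((2, 2))))    # True
```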

An endomorphism (or a square matrix) can be diagonalized if the characteristic polynomial splits into linear factors and, for each eigenvalue, its algebraic multiplicity is equal to its geometric multiplicity, i.e. the order of the eigenvalue as a zero of the characteristic polynomial is equal to the dimension of the associated eigenspace. Equivalent to this is the existence of a basis of the vector space consisting of eigenvectors of the linear mapping. Endomorphisms whose characteristic polynomial splits into linear factors can still be triangularized, i.e. represented by a triangular matrix. A somewhat deeper result is that the representing matrix can even be brought into Jordan normal form.
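
A rough numerical sketch of the multiplicity criterion; the helper function and the test matrices are illustrative assumptions, and the check is purely numerical (only reliable for well-separated eigenvalues):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-10):
    """Compare algebraic and geometric multiplicity for each eigenvalue (sketch)."""
    eigvals = np.linalg.eigvals(A)
    n = A.shape[0]
    for lam in np.unique(np.round(eigvals, 8)):
        alg = int(np.sum(np.isclose(eigvals, lam)))
        # geometric multiplicity = dim ker(A - lam*I) = n - rank(A - lam*I)
        geo = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if alg != geo:
            return False
    return True

print(is_diagonalizable(np.array([[2.0, 0.0], [0.0, 3.0]])))  # True
print(is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]])))  # False (a Jordan block)
```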

In vector spaces on which a scalar product is additionally given, a norm is defined by $\lVert v \rVert = \sqrt{\langle v, v \rangle}$. In these scalar product spaces there always exist orthonormal bases, which can be constructed, for example, by the Gram–Schmidt orthonormalization process. According to the projection theorem, the best approximation from a subspace can be determined in these spaces by orthogonal projection.
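
A minimal sketch of the Gram–Schmidt process for the standard scalar product, assuming the (illustrative) input vectors are linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt orthonormalization (sketch, standard scalar product)."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, u) * u for u in basis)  # remove components along previous vectors
        basis.append(w / np.linalg.norm(w))           # normalize
    return np.array(basis)

# Illustrative starting vectors.
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
Q = gram_schmidt(vs)
print(np.allclose(Q @ Q.T, np.eye(3)))   # the rows form an orthonormal basis
```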

Concerning the diagonalizability of endomorphisms of scalar product spaces, the question arises whether an orthonormal basis of eigenvectors exists. The central result on this is the spectral theorem. In the real case, in particular, the following holds: for every symmetric matrix $A$ there is an orthogonal matrix $S$ such that $S^{T}AS$ is a diagonal matrix. Applying this result to quadratic forms, one obtains the principal axis theorem.
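
A small sketch of the real spectral theorem for an illustrative symmetric matrix, using NumPy's eigensolver for symmetric matrices:

```python
import numpy as np

# Illustrative real symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns the eigenvalues and an orthogonal matrix S whose columns are eigenvectors.
eigenvalues, S = np.linalg.eigh(A)

D = S.T @ A @ S                               # S^T A S is diagonal (spectral theorem)
print(np.allclose(D, np.diag(eigenvalues)))   # True
print(np.allclose(S.T @ S, np.eye(2)))        # S is orthogonal
```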

Bilinear forms and sesquilinear forms can also be represented by matrices with respect to fixed chosen bases. A bilinear form is symmetric and positive definite, i.e. a scalar product, if and only if its representing matrix is symmetric and positive definite. A symmetric matrix is positive definite if and only if all of its eigenvalues are positive. In general, Sylvester's law of inertia holds for symmetric bilinear forms and Hermitian sesquilinear forms; it states that the number of positive and negative eigenvalues of the representing matrices does not depend on the choice of basis.
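
A sketch of the eigenvalue criterion for positive definiteness; the helper function and the two test matrices are illustrative assumptions:

```python
import numpy as np

def is_positive_definite(A):
    """A symmetric matrix is positive definite iff all its eigenvalues are positive."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True  (eigenvalues 1, 3)
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False (eigenvalues -1, 3)
```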

Vectors and matrices

Vectors of finite-dimensional spaces can be described by their components, which (depending on the application) are written as a column vector

$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$

or as a row vector

$x^{T} = (x_1, \dots, x_n).$

Row vectors are often marked with a superscript T for "transposed", as in $x^{T}$.

In the literature, vectors are distinguished from other quantities in different ways: lower case letters, lower case letters in bold, lower case letters underlined, lower case letters with an arrow above or small Fraktur letters are used. This article uses lowercase letters.

A matrix is ​​indicated by a “grid” of numbers. Here is a matrix with four rows and three columns:

Matrices are mostly designated with capital letters.

In the case of column vectors, individual elements of a vector are usually indicated by a subscript: the second element of the vector given above would then be $x_2$. For row vectors a superscript index is sometimes used, in which case one has to take care whether a vector index or an exponent is meant. Matrix elements are indicated by two indices and are denoted by lowercase letters: $a_{23}$ is the element in the second row of the third column (instead of "in the third column of the second row", because this way $2,3$ is easier to read).

The generalized term for these structures is tensor: scalars are zeroth-order tensors, vectors are first-order tensors, and matrices are second-order tensors. A $k$-th order tensor can be represented by a $k$-dimensional array of numbers.
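
A tiny illustration of these orders as array dimensions (the concrete values are arbitrary):

```python
import numpy as np

scalar = np.array(3.0)                       # 0th-order tensor: a single number
vector = np.array([1.0, 2.0, 3.0])           # 1st-order tensor
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])  # 2nd-order tensor
cube   = np.zeros((2, 2, 2))                 # 3rd-order tensor: a "number cube"

for t in (scalar, vector, matrix, cube):
    print(t.ndim, t.shape)   # the order is the number of indices (ndim)
```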

It is often necessary to bring matrices into a special form by means of elementary row transformations or changes of basis. The triangular form, the diagonal form and the Jordan normal form are particularly important.

Endomorphisms and Square Matrices

In the representation of a linear mapping by a matrix there is the special case of a linear mapping of a finite-dimensional vector space onto itself (a so-called endomorphism). The same basis can then be used for the domain and image coordinates, and one obtains a square matrix $M$, so that applying the linear mapping corresponds to left multiplication by $M$. To express the dependence on $M$ and $v$, notations such as $Mv$ or $M \cdot v$ are used. Applying this mapping twice in succession then corresponds to multiplication by $M^2$, and so on, and all polynomial expressions in $M$ (sums of multiples of powers of $M$) can be understood as linear mappings of the vector space.
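
A brief sketch of this, with an illustrative matrix $M$, vector $v$ and polynomial chosen arbitrarily:

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([1.0, 1.0])

# Applying the mapping is left multiplication; applying it twice is multiplication by M^2.
print(np.allclose(M @ (M @ v), (M @ M) @ v))            # True

# A polynomial expression in M, e.g. p(M) = M^2 + M + 2*I, is again a linear mapping.
I = np.eye(2)
p_of_M = M @ M + M + 2 * I
print(np.allclose(p_of_M @ (2 * v), 2 * (p_of_M @ v)))  # linearity of p(M)
```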

Invertibility

Analogous to the convention for numbers, the zeroth power of a square matrix is the diagonal matrix (identity matrix $E$) with ones on the diagonal and zeros in all remaining entries; it corresponds to the identity mapping of each vector to itself. Negative powers of a square matrix $M$ can only be calculated if the linear mapping given by $M$ is invertible, that is, if no two different vectors $v$ and $w$ are mapped to the same vector $Mv = Mw$. In other words, for an invertible matrix $M$ it always follows from $Mv = Mw$ that $v = w$; the linear system of equations $Mx = b$ can then have at most one solution. For an invertible matrix $M$ there is an inverse matrix $M^{-1}$ with $M M^{-1} = M^{-1} M = E$.
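
A short sketch with an illustrative invertible matrix:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # invertible, since det(M) = -2 != 0

M_inv = np.linalg.inv(M)
I = np.eye(2)

print(np.allclose(M @ M_inv, I) and np.allclose(M_inv @ M, I))    # True

# Negative powers are powers of the inverse, e.g. M^(-2):
print(np.allclose(np.linalg.matrix_power(M, -2), M_inv @ M_inv))  # True
```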

Determinants

A determinant is a special function that assigns a number to a square matrix. This number gives information about some properties of the matrix. For example, it can be used to identify whether a matrix can be inverted. Another important application is the calculation of the characteristic polynomial and thus the eigenvalues ​​of the matrix.

There are closed formulas for calculating determinants, such as Laplace's expansion theorem or the Leibniz formula. However, these formulas are mainly of theoretical value, since their computational effort grows rapidly for larger matrices. In practice, the easiest way to calculate a determinant is to use the Gaussian algorithm to convert the matrix into upper or lower triangular form; the determinant is then simply the product of the main diagonal elements (taking into account a sign change for each row swap).
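
A minimal sketch of this approach (Gaussian elimination with partial pivoting, tracking the sign of the row swaps), compared against NumPy's built-in determinant; the test matrix is illustrative:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via elimination to upper triangular form (sketch)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # pivot row
        if np.isclose(U[p, k], 0.0):
            return 0.0                        # singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]             # row swap flips the sign
            sign = -sign
        U[k+1:] -= np.outer(U[k+1:, k] / U[k, k], U[k])
    return sign * np.prod(np.diag(U))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_by_elimination(A), np.linalg.det(A))   # both approximately 8.0
```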

Example

The above concepts are illustrated below by an example motivated by the Fibonacci sequence.

Calculation of powers using diagonalization

The Fibonacci sequence is defined recursively by the equations $f_0 = 0$, $f_1 = 1$ and $f_{n+2} = f_{n+1} + f_n$ for $n \geq 0$, which is equivalent to

$\begin{pmatrix} f_{n+2} \\ f_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} f_{n+1} \\ f_n \end{pmatrix}$

and

$\begin{pmatrix} f_1 \\ f_0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$

from which, by iteration, the non-recursive formula

$\begin{pmatrix} f_{n+1} \\ f_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} 1 \\ 0 \end{pmatrix}$

follows, in which the $n$-th power of the matrix $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$ occurs.

The behavior of such a matrix under exponentiation is not easy to see; on the other hand, the $n$-th power of a diagonal matrix is calculated simply by raising each individual diagonal entry to the $n$-th power. If there is an invertible matrix $T$ such that $T^{-1}AT$ has diagonal form, the exponentiation of $A$ can be reduced to the exponentiation of a diagonal matrix via the equation $(T^{-1}AT)^{n} = T^{-1}A^{n}T$ (the left side of this equation is then the $n$-th power of a diagonal matrix). More generally, the behavior of a matrix (under exponentiation, but also under other operations) can be recognized more easily by diagonalizing it.

If one understands $A$ as the matrix of a linear mapping, then the transformation matrix $T$ is the change-of-basis matrix to another basis, i.e. the matrix of the identity mapping (which maps each vector to itself) with respect to the new and the old basis. Then $T^{-1}AT$ is the matrix of the same linear mapping with respect to the new basis.

In the above example a transformation matrix $T$ can be found such that

$T^{-1} \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} T = \begin{pmatrix} \Phi & 0 \\ 0 & 1 - \Phi \end{pmatrix}$

is a diagonal matrix in which the golden ratio $\Phi = \tfrac{1 + \sqrt{5}}{2}$ occurs. From this one finally obtains Binet's formula:

$f_n = \frac{\Phi^{n} - (1 - \Phi)^{n}}{\sqrt{5}}.$
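
A numerical sketch of this calculation: computing Fibonacci numbers from matrix powers, checking the diagonalization $A = T D T^{-1}$, and comparing with Binet's formula (the value $n = 10$ is arbitrary):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]], dtype=float)

# Fibonacci numbers from powers of A: A^n @ (1, 0)^T = (f_{n+1}, f_n)^T.
n = 10
f = np.linalg.matrix_power(A, n) @ np.array([1.0, 0.0])
print(f)               # [89. 55.] -> f_11 = 89, f_10 = 55

# Diagonalization: the columns of T are eigenvectors, D contains the eigenvalues.
eigenvalues, T = np.linalg.eig(A)
D = np.diag(eigenvalues)
print(np.allclose(A, T @ D @ np.linalg.inv(T)))   # True

# Binet's formula from the eigenvalues Phi and 1 - Phi.
phi = (1 + np.sqrt(5)) / 2
binet = (phi**n - (1 - phi)**n) / np.sqrt(5)
print(round(binet))    # 55 = f_10
```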

Eigenvalues

How does one get from the matrix $A$ to the number $\Phi$? From the diagonal matrix one recognizes immediately that

$\begin{pmatrix} \Phi & 0 \\ 0 & 1 - \Phi \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \Phi \begin{pmatrix} 1 \\ 0 \end{pmatrix},$

i.e. there is a vector different from zero that is multiplied component-wise by $\Phi$ (more precisely: scaled by the factor $\Phi$) when the diagonal matrix is applied to it. Because of this property, the number $\Phi$ is called an eigenvalue of the matrix (with eigenvector $e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$). In the case of diagonal matrices, the eigenvalues are equal to the diagonal entries.

$\Phi$ is also an eigenvalue of the original matrix $A$ (with eigenvector $Te_1$, because $ATe_1 = TDe_1 = \Phi\, Te_1$, where $D = T^{-1}AT$); the eigenvalues thus remain unchanged when the matrix is transformed. The diagonal form of the matrix results from its eigenvalues, and in order to find the eigenvalues of $A$ one has to investigate for which numbers $\lambda$ the linear system of equations $(A - \lambda E)x = 0$ has a solution different from zero (or, in other words, for which $\lambda$ the matrix $A - \lambda E$ is not invertible).

The numbers $\lambda$ we are looking for are precisely those that make the determinant of the matrix $A - \lambda E$ zero. This determinant is a polynomial expression in $\lambda$ (the so-called characteristic polynomial of $A$); in the case of the 2 × 2 matrix $\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$ mentioned above, this yields the quadratic equation $\lambda^2 - \lambda - 1 = 0$ with the two solutions $\Phi = \tfrac{1+\sqrt{5}}{2}$ and $1 - \Phi = \tfrac{1-\sqrt{5}}{2}$. The associated eigenvectors are solutions of the linear systems of equations $(A - \Phi E)x = 0$ and $(A - (1-\Phi)E)x = 0$, respectively; they then form the columns of the transformation matrix $T$.
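
A brief numerical check of these values for the Fibonacci matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Characteristic polynomial det(lambda*I - A) = lambda^2 - lambda - 1.
print(np.poly(A))                       # [ 1. -1. -1.]

# Its zeros are the eigenvalues: the golden ratio Phi and 1 - Phi.
phi = (1 + np.sqrt(5)) / 2
print(np.sort(np.linalg.eigvals(A)))    # approximately [1 - Phi, Phi]
print(1 - phi, phi)
```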

Diagonalisability

Whether a matrix can be diagonalized depends on the number domain used. $A$, for example, is not diagonalizable over the rational numbers, because its eigenvalues $\Phi$ and $1 - \Phi$ are irrational numbers. Diagonalization can also fail regardless of the number domain if there are not "enough" eigenvalues; for instance, a Jordan block matrix of size 2 has only a single eigenvalue (as a double solution of the quadratic characteristic equation) and is not diagonalizable. With a sufficiently large number domain (for example over the complex numbers), however, every matrix can be diagonalized or transformed into Jordan normal form.
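
As a sketch, the 2 × 2 Jordan block with eigenvalue 1 (a hypothetical stand-in, not a matrix taken from the original article) can be examined symbolically with SymPy:

```python
from sympy import Matrix

# Hypothetical 2x2 Jordan block with eigenvalue 1.
M = Matrix([[1, 1],
            [0, 1]])

print(M.eigenvals())           # {1: 2} -> one eigenvalue of algebraic multiplicity 2
print(M.is_diagonalizable())   # False: the eigenspace is only one-dimensional

# The matrix can nevertheless be brought into Jordan normal form.
P, J = M.jordan_form()
print(J)                       # Matrix([[1, 1], [0, 1]]) -- already in Jordan form
```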

Since the transformation of a matrix corresponds to a change of basis of a linear mapping, this last statement means that, over a sufficiently large number domain, one can always choose for a linear mapping a basis that is mapped "in a simple way": in the case of diagonalizability, each basis vector is mapped to a multiple of itself (and is therefore an eigenvector); in the case of the Jordan form, to a multiple of itself plus possibly the preceding basis vector. This theory of linear mappings can be generalized to fields that are not "large enough"; in these, other normal forms must be considered in addition to the Jordan form (for example the Frobenius normal form).

Literature

Wikibooks: Linear Algebra  - Learning and Teaching Materials

Web links

Commons : Linear Algebra  - collection of images, videos and audio files
