Linear independence

[Figure: Linearly independent vectors in ℝ³]
[Figure: Linearly dependent vectors in a plane in ℝ³]

In linear algebra, a family of vectors in a vector space is called linearly independent if the zero vector can only be generated by a linear combination of the vectors in which all coefficients of the combination are set to zero. Equivalent to this (provided the family does not consist only of the zero vector) is that none of the vectors can be represented as a linear combination of the other vectors in the family.

Otherwise, they are called linearly dependent. In this case, at least one of the vectors (but not necessarily every one) can be represented as a linear combination of the others.

For example, in three-dimensional Euclidean space $\mathbb{R}^3$, the vectors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$ are linearly independent. The vectors $(2,-1,1)$, $(1,0,1)$ and $(3,-1,2)$ are linearly dependent, because the third vector is the sum of the first two, i.e. the difference between the sum of the first two and the third is the zero vector. The vectors $(1,2,3)$, $(2,4,6)$ and $(0,0,1)$ are also linearly dependent because of $2 \cdot (1,2,3) - (2,4,6) = (0,0,0)$; however, the third vector cannot be represented here as a linear combination of the other two.
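As a quick sanity check (a minimal sketch added here, not part of the original article, assuming numpy is available), the two stated dependencies can be verified numerically:

```python
import numpy as np

# Second example: the third vector is the sum of the first two,
# so v1 + v2 - v3 is the zero vector (a non-trivial combination).
v1, v2, v3 = np.array([2, -1, 1]), np.array([1, 0, 1]), np.array([3, -1, 2])
print(np.array_equal(v1 + v2, v3))  # True

# Third example: twice the first vector equals the second, so a
# non-trivial combination of the first two already yields zero.
w1, w2 = np.array([1, 2, 3]), np.array([2, 4, 6])
print(np.array_equal(2 * w1 - w2, np.zeros(3, dtype=int)))  # True
```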

Definition

Let $V$ be a vector space over the field $K$ and $I$ an index set. A family $(v_i)_{i \in I}$ indexed by $I$ is called linearly independent if every finite subfamily contained in it is linearly independent.

A finite family $v_1, v_2, \dots, v_n$ of vectors is called linearly independent if the only possible representation of the zero vector as a linear combination

$0 = a_1 v_1 + a_2 v_2 + \dots + a_n v_n$

with coefficients $a_1, \dots, a_n$ from the base field $K$ is the one for which all coefficients $a_i$ are equal to zero. If, on the other hand, the zero vector can also be generated non-trivially (with at least one coefficient not equal to zero), then the vectors are linearly dependent.

The family $(v_i)_{i \in I}$ is linearly dependent if and only if there exist a non-empty finite subset $J \subseteq I$ and coefficients $(a_j)_{j \in J}$, at least one of which is not equal to 0, such that

$\sum_{j \in J} a_j v_j = 0.$

The zero vector $0$ on the right-hand side is an element of the vector space $V$. In contrast, the coefficient $0$ is an element of the field $K$.
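For illustration (an example added here, not from the original article): in $\mathbb{R}^2$, the following non-trivial representation of the zero vector shows that the two vectors involved are linearly dependent.

```latex
% The coefficients 2 and -1 are not all zero, so the two vectors
% (1,2) and (2,4) are linearly dependent.
\[
  2 \begin{pmatrix} 1 \\ 2 \end{pmatrix}
  - 1 \begin{pmatrix} 2 \\ 4 \end{pmatrix}
  = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\]
```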

The term is also used for subsets of a vector space: a subset $S$ of a vector space $V$ is called linearly independent if every finite linear combination of pairwise different vectors from $S$ can only represent the zero vector if all coefficients in this linear combination have the value zero. Note the following difference: if, for example, $(v_1, v_2)$ is a linearly independent family, then $(v_1, v_1)$ is obviously a linearly dependent family. The set $\{v_1, v_1\} = \{v_1\}$, however, is linearly independent.

Other characterizations and simple properties

  • The vectors $v_1, \dots, v_n$ are linearly independent (provided it is not the case that $n = 1$ and $v_1 = 0$) if and only if none of them can be represented as a linear combination of the others. This statement does not carry over to the more general context of modules over rings.
  • A variant of this statement is the dependency lemma: if $v_1, \dots, v_n$ are linearly independent and $v_1, \dots, v_n, w$ are linearly dependent, then $w$ can be written as a linear combination of $v_1, \dots, v_n$ (a worked instance follows this list).
  • If a family of vectors is linearly independent, then every subfamily of this family is also linearly independent. Conversely, if a family is linearly dependent, then every family that contains this dependent family is also linearly dependent.
  • If the zero vector is one of the $v_i$ (say $v_1 = 0$), then they are linearly dependent: the zero vector can be generated by setting all coefficients to zero, with the exception of the coefficient of $v_1 = 0$, which may be arbitrary (in particular, not equal to zero).
  • In a $d$-dimensional space, a family of more than $d$ vectors is always linearly dependent (see the Steinitz exchange lemma).
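As a small worked instance of the dependency lemma (an illustration added here, not part of the original article): in $\mathbb{R}^2$, the vectors $v_1 = (1, 0)$ and $v_2 = (0, 1)$ are linearly independent, and appending $w = (2, 3)$ makes the family linearly dependent; as the lemma guarantees, $w$ is then a linear combination of $v_1$ and $v_2$:

```latex
% w = (2,3) written as a linear combination of the linearly
% independent vectors v1 = (1,0) and v2 = (0,1).
\[
  \begin{pmatrix} 2 \\ 3 \end{pmatrix}
  = 2 \begin{pmatrix} 1 \\ 0 \end{pmatrix}
  + 3 \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\]
```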

Determination by means of the determinant

Given $n$ vectors of an $n$-dimensional vector space, written as row or column vectors with respect to a fixed basis, one can check their linear independence by combining these row or column vectors into an $n \times n$ matrix and then calculating its determinant. The vectors are linearly independent if and only if the determinant is not equal to 0.
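A minimal sketch of this criterion (added here for illustration; the helper name is ours, and exactly $n$ vectors in $n$ dimensions are assumed):

```python
import numpy as np

def linearly_independent(*vectors):
    """n vectors in n-dimensional space are linearly independent
    iff the matrix with these vectors as columns has non-zero
    determinant (a tolerance guards against floating-point noise)."""
    m = np.column_stack(vectors).astype(float)
    return abs(np.linalg.det(m)) > 1e-12

print(linearly_independent([1, 0, 0], [0, 1, 0], [0, 0, 1]))    # True
print(linearly_independent([2, -1, 1], [1, 0, 1], [3, -1, 2]))  # False
```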

Basis of a vector space

The concept of linearly independent vectors plays an important role in the definition and handling of vector space bases. A basis of a vector space is a linearly independent generating system . Bases make it possible to calculate with coordinates, especially for finite-dimensional vector spaces.

Examples

Single vector

Let the vector $v$ be an element of the vector space $V$ over the field $K$. Then the single vector $v$ is, by itself, linearly independent if and only if it is not the zero vector.

This is because it follows from the definition of a vector space that if

$a v = 0$ with $a \in K$, $v \in V$,

then either $a = 0$ or $v = 0$.

Vectors in the plane

The vectors $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\begin{pmatrix} -3 \\ 2 \end{pmatrix}$ in $\mathbb{R}^2$ are linearly independent.

Proof: For $a, b \in \mathbb{R}$, suppose that

$a \begin{pmatrix} 1 \\ 1 \end{pmatrix} + b \begin{pmatrix} -3 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$

i.e.

$\begin{pmatrix} a - 3b \\ a + 2b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$

Then

$a - 3b = 0 \quad \text{and} \quad a + 2b = 0,$

so, subtracting the first equation from the second, $5b = 0$, hence $b = 0$ and consequently $a = 0$.

This system of equations is satisfied only by the solution $a = 0$, $b = 0$ (the so-called trivial solution); i.e., the two vectors are linearly independent.

Standard basis in n-dimensional space

In the vector space $V = \mathbb{R}^n$, consider the following elements (the natural or standard basis of $V$):

$e_1 = (1, 0, 0, \dots, 0)$
$e_2 = (0, 1, 0, \dots, 0)$
$\quad \vdots$
$e_n = (0, 0, 0, \dots, 1)$

Then the vector family $(e_i)_{i = 1, \dots, n}$ is linearly independent.

Proof: For $a_1, \dots, a_n \in \mathbb{R}$, suppose that

$a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0.$

But then also

$(a_1, a_2, \dots, a_n) = (0, 0, \dots, 0),$

and it follows that $a_i = 0$ for all $i \in \{1, \dots, n\}$.

Functions as vectors

Let $V$ be the vector space of all functions $f \colon \mathbb{R} \to \mathbb{R}$. The two functions $e^t$ and $e^{2t}$ in $V$ are linearly independent.

Proof: Suppose $a, b \in \mathbb{R}$ and

$a e^t + b e^{2t} = 0$

for all $t \in \mathbb{R}$. Differentiating this equation with respect to $t$ yields a second equation,

$a e^t + 2 b e^{2t} = 0.$

Subtracting the first equation from the second, we obtain

$b e^{2t} = 0.$

Since this equation has to hold for all $t$, and thus in particular for $t = 0$, inserting $t = 0$ shows that $b = 0$ must hold. Substituting the value $b = 0$ back into the first equation yields

$a e^t = 0.$

From this it follows again (for $t = 0$) that $a = 0$ must hold.

Since the first equation is only solvable for $a = 0$ and $b = 0$, the two functions $e^t$ and $e^{2t}$ are linearly independent.
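As a quick numerical cross-check (a sketch added here, not part of the original article): evaluating the relation $a e^t + b e^{2t} = 0$ at the two sample points $t = 0$ and $t = 1$ yields a $2 \times 2$ linear system, and its non-zero determinant confirms that only $a = b = 0$ is possible:

```python
import numpy as np

# Rows correspond to t = 0 and t = 1; columns to the functions
# e^t and e^(2t). The system M @ [a, b] = 0 has only the trivial
# solution iff det(M) != 0.
ts = np.array([0.0, 1.0])
M = np.column_stack((np.exp(ts), np.exp(2 * ts)))
print(np.linalg.det(M))  # e^2 - e ≈ 4.67, non-zero
```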

Series

Let $V$ be the vector space of all real-valued continuous functions on the open unit interval $(0, 1)$. Then it is indeed true that

$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n,$

but nevertheless the functions $\frac{1}{1-x}, 1, x, x^2, \dots$ are linearly independent. This is because linear combinations are by definition finite, so linear combinations of powers of $x$ are only polynomials and not general power series; in particular, they remain bounded near $1$, so that $\frac{1}{1-x}$, which is unbounded near $1$, cannot be represented as a linear combination of powers of $x$.

Rows and columns of a matrix

Another interesting question is whether the rows of a matrix are linearly independent or not. Here the rows are treated as vectors. If the rows of a square matrix are linearly independent, the matrix is called regular, otherwise singular. The columns of a square matrix are linearly independent if and only if the rows are linearly independent. An example of a sequence of regular matrices: the Hilbert matrices.
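A sketch of how this can be checked numerically (added here for illustration; the helper name is ours): the rank of a matrix equals both the number of linearly independent rows and the number of linearly independent columns, so a square matrix is regular exactly when its rank equals its size.

```python
import numpy as np

def is_regular(m):
    """A square matrix is regular iff its rank equals its dimension."""
    m = np.asarray(m, dtype=float)
    return np.linalg.matrix_rank(m) == m.shape[0]

# 3x3 Hilbert matrix H[i, j] = 1 / (i + j + 1): regular,
# like every Hilbert matrix.
hilbert = np.array([[1 / (i + j + 1) for j in range(3)] for i in range(3)])
print(is_regular(hilbert))            # True
print(is_regular([[1, 2], [2, 4]]))   # False: second row = 2 * first row
```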

Rational independence

Real numbers that are linearly independent over the rational numbers $\mathbb{Q}$ (as the field of coefficients) are called rationally independent or incommensurable. The numbers $1$ and $\sqrt{2}$, for example, are rationally independent or incommensurable, whereas the numbers $1$, $\sqrt{2}$ and $1 + \sqrt{2}$ are rationally dependent, since $1 \cdot 1 + 1 \cdot \sqrt{2} - 1 \cdot (1 + \sqrt{2}) = 0$.

Generalizations

The definition of linearly independent vectors can be applied analogously to elements of a module. In this context, linearly independent families are also called free (see also: free module).

The concept of linear independence can be further generalized to a consideration of independent sets, see matroid.
