# Linear independence

*Figure: linearly independent vectors in $\mathbb{R}^3$.*
*Figure: linearly dependent vectors in a plane in $\mathbb{R}^3$.*

In linear algebra, a family of vectors in a vector space is called linearly independent if the zero vector can only be obtained as a linear combination of the vectors in which all coefficients of the combination are set to zero. Equivalently (provided the family does not consist solely of the zero vector), none of the vectors can be represented as a linear combination of the other vectors in the family.

Otherwise, the vectors are called linearly dependent. In this case, at least one of the vectors (but not necessarily every one) can be represented as a linear combination of the others.

For example, in three-dimensional Euclidean space $\mathbb{R}^3$, the vectors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$ are linearly independent. The vectors $(2,-1,1)$, $(1,0,1)$ and $(3,-1,2)$ are linearly dependent, because the third vector is the sum of the first two, i.e. the difference between the sum of the first two and the third is the zero vector. The vectors $(1,2,-3)$, $(-2,-4,6)$ and $(1,1,1)$ are also linearly dependent because of $2 \cdot (1,2,-3) + (-2,-4,6) = (0,0,0)$; however, the third vector cannot be represented here as a linear combination of the other two.
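These examples can be checked mechanically: three vectors in $\mathbb{R}^3$ are linearly dependent exactly when the determinant of the $3 \times 3$ matrix formed from their components is zero. A minimal sketch in Python (the helper `det3` is written here purely for illustration):

```python
def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w (cofactor expansion)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# (1,0,0), (0,1,0), (0,0,1): linearly independent, determinant 1
print(det3((1, 0, 0), (0, 1, 0), (0, 0, 1)))    # 1

# (2,-1,1), (1,0,1), (3,-1,2): dependent, the third is the sum of the first two
print(det3((2, -1, 1), (1, 0, 1), (3, -1, 2)))  # 0

# (1,2,-3), (-2,-4,6), (1,1,1): dependent, since 2*(1,2,-3) + (-2,-4,6) = 0
print(det3((1, 2, -3), (-2, -4, 6), (1, 1, 1))) # 0
```

The zero results confirm the two dependencies computed by hand above.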

## Definition

Let $V$ be a vector space over the field $K$ and $I$ an index set. A family $(\vec{v}_i)_{i \in I}$ indexed by $I$ is called linearly independent if every finite subfamily contained in it is linearly independent.

A finite family $\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n$ of vectors in $V$ is called linearly independent if the only possible representation of the zero vector as a linear combination

$$a_1 \vec{v}_1 + a_2 \vec{v}_2 + \dotsb + a_n \vec{v}_n = \vec{0}$$

with coefficients $a_1, a_2, \dots, a_n$ from the base field $K$ is the one in which all coefficients $a_i$ are equal to zero. If, on the other hand, the zero vector can also be generated non-trivially (with at least one coefficient not equal to zero), the vectors are linearly dependent.

The family $(\vec{v}_i)_{i \in I}$ is linearly dependent if and only if there exist a non-empty finite subset $J \subseteq I$ and coefficients $(a_j)_{j \in J}$, at least one of which is not equal to 0, such that

$$\sum_{j \in J} a_j \vec{v}_j = \vec{0}.$$

Here the zero vector $\vec{0}$ is an element of the vector space $V$; in contrast, 0 is an element of the field $K$.

The term is also used for subsets of a vector space: a subset $S \subseteq V$ of a vector space $V$ is called linearly independent if every finite linear combination of pairwise distinct vectors from $S$ can only represent the zero vector if all coefficients in the combination have the value zero. Note the following difference: if, for example, $(\vec{v}_1, \vec{v}_2)$ is a linearly independent family, then $(\vec{v}_1, \vec{v}_1, \vec{v}_2)$ is obviously a linearly dependent family. The set $\{\vec{v}_1, \vec{v}_1, \vec{v}_2\} = \{\vec{v}_1, \vec{v}_2\}$, however, is linearly independent.

## Other characterizations and simple properties

• The vectors $\vec{v}_1, \ldots, \vec{v}_n$ are linearly independent if and only if none of them can be represented as a linear combination of the others (excluding the case $n = 1$ with $\vec{v}_1 = \vec{0}$). This statement does not carry over to the more general context of modules over rings.
• A variant of this statement is the dependency lemma: if $\vec{v}_1, \ldots, \vec{v}_n$ are linearly independent and $\vec{v}_1, \ldots, \vec{v}_n, \vec{w}$ are linearly dependent, then $\vec{w}$ can be written as a linear combination of $\vec{v}_1, \ldots, \vec{v}_n$.
• If a family of vectors is linearly independent, then every subfamily of this family is also linearly independent. Conversely, if a family is linearly dependent, then every family containing this dependent family is also linearly dependent.
• If the zero vector is one of the $\vec{v}_i$ (say $\vec{v}_j = \vec{0}$), then the family is linearly dependent: the zero vector can be generated by setting all coefficients $a_i = 0$ except for the coefficient $a_j$ of the zero vector $\vec{v}_j$, which may be chosen arbitrarily (in particular, not equal to zero).
• In a $d$-dimensional space, a family of more than $d$ vectors is always linearly dependent (a consequence of the Steinitz exchange lemma).

## Determination by means of the determinant

If $n$ vectors of an $n$-dimensional vector space are given as row or column vectors with respect to a fixed basis, their linear independence can be checked by combining these row or column vectors into an $n \times n$ matrix and then calculating its determinant. The vectors are linearly independent if and only if the determinant is not equal to 0.
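The determinant test can be sketched as follows; exact rational arithmetic avoids mistaking a floating-point rounding error for a nonzero determinant. The helpers `det` and `linearly_independent` are illustrative names for this sketch, not a library API:

```python
from fractions import Fraction

def det(rows):
    """Determinant of a square matrix (list of rows) via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    d = Fraction(1)
    for col in range(n):
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)           # no pivot: the rows are dependent
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            d = -d                       # a row swap flips the sign
        d *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return d

def linearly_independent(vectors):
    """n vectors of an n-dimensional space, given as rows of a square matrix."""
    return det(vectors) != 0

print(linearly_independent([(1, 1), (-3, 2)]))                    # True
print(linearly_independent([(2, -1, 1), (1, 0, 1), (3, -1, 2)]))  # False
```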

## Basis of a vector space

The concept of linearly independent vectors plays an important role in the definition and handling of vector space bases. A basis of a vector space is a linearly independent generating system . Bases make it possible to calculate with coordinates, especially for finite-dimensional vector spaces.

## Examples

### Single vector

Let the vector $\vec{v}$ be an element of the vector space $V$ over $K$. Then the single vector $\vec{v}$ is linearly independent if and only if it is not the zero vector.

This is because it follows from the vector space axioms that if

$$a \, \vec{v} = \vec{0} \quad \text{with } a \in K,\ \vec{v} \in V,$$

then $a = 0$ or $\vec{v} = \vec{0}$ must hold.

### Vectors in the plane

The vectors $\vec{u} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\vec{v} = \begin{pmatrix} -3 \\ 2 \end{pmatrix}$ in $\mathbb{R}^2$ are linearly independent.

Proof: Let $a, b \in \mathbb{R}$ with

$$a \, \vec{u} + b \, \vec{v} = \vec{0},$$

i.e.

$$a \begin{pmatrix} 1 \\ 1 \end{pmatrix} + b \begin{pmatrix} -3 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

Then

$$\begin{pmatrix} a - 3b \\ a + 2b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

so

$$a - 3b = 0 \ \wedge \ a + 2b = 0.$$

This system of equations is satisfied only by the solution $a = 0$, $b = 0$ (the so-called trivial solution); hence $\vec{u}$ and $\vec{v}$ are linearly independent.
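The final step can be double-checked with Cramer's rule: a homogeneous $2 \times 2$ system has only the trivial solution exactly when the determinant of its coefficient matrix is nonzero. A small sketch (the helper `det2` is illustrative):

```python
def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0] * v[1] - u[1] * v[0]

u = (1, 1)
v = (-3, 2)
print(det2(u, v))   # 5, nonzero, so a = b = 0 is the only solution
```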

### Standard basis in n-dimensional space

In the vector space $V = \mathbb{R}^n$, consider the following elements (the natural or standard basis of $V$):

$$\vec{e}_1 = (1, 0, 0, \dots, 0)$$
$$\vec{e}_2 = (0, 1, 0, \dots, 0)$$
$$\vdots$$
$$\vec{e}_n = (0, 0, 0, \dots, 1)$$

Then the family of vectors $(\vec{e}_i)_{i \in I}$ with $I = \{1, 2, \dots, n\}$ is linearly independent.

Proof: Let $a_1, a_2, \dots, a_n \in \mathbb{R}$ with

$$a_1 \, \vec{e}_1 + a_2 \, \vec{e}_2 + \dotsb + a_n \, \vec{e}_n = \vec{0}.$$

But then

$$a_1 \, \vec{e}_1 + a_2 \, \vec{e}_2 + \dotsb + a_n \, \vec{e}_n = (a_1, a_2, \dots, a_n) = \vec{0},$$

and it follows that $a_i = 0$ for all $i \in \{1, 2, \dots, n\}$.
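The key identity in this proof, namely that a linear combination of the standard basis vectors reproduces its own coefficient tuple, can be illustrated in code (the helper names are chosen just for this sketch):

```python
def standard_basis(n):
    """The standard basis vectors e_1, ..., e_n of R^n as tuples."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

def combine(coeffs, vectors):
    """Componentwise linear combination sum_i coeffs[i] * vectors[i]."""
    return tuple(sum(a * v[k] for a, v in zip(coeffs, vectors))
                 for k in range(len(vectors[0])))

e = standard_basis(4)
print(combine((3, 0, -2, 7), e))   # (3, 0, -2, 7): the coefficients themselves
```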

### Functions as vectors

Let $V$ be the vector space of all functions $f \colon \mathbb{R} \to \mathbb{R}$. The two functions $\mathrm{e}^t$ and $\mathrm{e}^{2t}$ in $V$ are linearly independent.

Proof: Let $a, b \in \mathbb{R}$ and suppose that

$$a \, \mathrm{e}^t + b \, \mathrm{e}^{2t} = 0$$

holds for all $t \in \mathbb{R}$. Differentiating this equation with respect to $t$ yields a second equation:

$$a \, \mathrm{e}^t + 2b \, \mathrm{e}^{2t} = 0.$$

Subtracting the first equation from the second gives

$$b \, \mathrm{e}^{2t} = 0.$$

Since this equation must hold for all $t$, and thus in particular for $t = 0$, substituting $t = 0$ shows that $b = 0$. Substituting this value of $b$ back into the first equation yields

$$a \, \mathrm{e}^t + 0 = 0.$$

From this it again follows (for $t = 0$) that $a = 0$.

Since the first equation is only solvable for $a = 0$ and $b = 0$, the two functions $\mathrm{e}^t$ and $\mathrm{e}^{2t}$ are linearly independent.
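The argument can also be sanity-checked numerically (an illustration, not a proof): if $a \, \mathrm{e}^t + b \, \mathrm{e}^{2t}$ vanished for all $t$, it would vanish at $t = 0$ and $t = 1$ in particular, and the resulting homogeneous $2 \times 2$ system has determinant $\mathrm{e}^2 - \mathrm{e} \neq 0$:

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# each sample point t contributes one equation a*e^t + b*e^(2t) = 0
samples = [0.0, 1.0]
matrix = [[math.exp(t), math.exp(2 * t)] for t in samples]
print(det2(matrix))   # e^2 - e, approximately 4.67, nonzero
```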

### Series

Let $V$ be the vector space of all real-valued continuous functions $f \colon (0,1) \to \mathbb{R}$ on the open unit interval. Then indeed

$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n,$$

but $\tfrac{1}{1-x}, 1, x, x^2, \ldots$ are nevertheless linearly independent, since linear independence only involves finite linear combinations. This is because linear combinations of powers of $x$ are only polynomials and not general power series; in particular, they remain bounded in a neighborhood of 1, so $\tfrac{1}{1-x}$ cannot be represented as a linear combination of powers of $x$.

### Rows and columns of a matrix

Another interesting question is whether the rows of a matrix are linearly independent. Here the rows are regarded as vectors. If the rows of a square matrix are linearly independent, the matrix is called regular, otherwise singular. The columns of a square matrix are linearly independent if and only if its rows are linearly independent. An example of a sequence of regular matrices: the Hilbert matrices.
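The claim about Hilbert matrices can be verified exactly for small sizes using rational arithmetic: their determinants shrink rapidly but are never zero, so their rows are linearly independent. The helpers below are a sketch written for this illustration:

```python
from fractions import Fraction

def hilbert(n):
    """The n x n Hilbert matrix with entries 1 / (i + j + 1), 0-indexed."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

for n in (1, 2, 3, 4):
    print(n, det(hilbert(n)))   # 1, 1/12, 1/2160, 1/6048000 -- all nonzero
```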

### Rational independence

Real numbers that are linearly independent over the rational numbers as coefficients are called rationally independent or incommensurable. The numbers $\{1, \tfrac{1}{\sqrt{2}}\}$ are therefore rationally independent or incommensurable, whereas the numbers $\{1, \tfrac{1}{\sqrt{2}}, 1 + \sqrt{2}\}$ are rationally dependent, since $1 + \sqrt{2} = 1 \cdot 1 + 2 \cdot \tfrac{1}{\sqrt{2}}$.

## Generalizations

The definition of linearly independent vectors can be applied analogously to elements of a module . In this context, linearly independent families are also called free (see also: free module ).

The concept of linear independence can be further generalized to a consideration of independent sets, see Matroid .