# Subspace

In three-dimensional Euclidean space, all planes and straight lines through the origin form linear subspaces.

In mathematics, a **linear subspace** (also called a vector subspace or simply a subspace) is a subset of a vector space that is itself a vector space. The vector space operations of vector addition and scalar multiplication are inherited from the ambient space. Every vector space contains itself and the zero vector space as trivial subspaces.

Every subspace is the span of a subset of vectors of the ambient space. The sum and the intersection of two subspaces again yield subspaces, whose dimensions are related by the dimension formula. Every subspace has at least one complementary space, such that the ambient space is the direct sum of the subspace and its complement. Furthermore, a factor space can be assigned to each subspace; it arises by projecting all elements of the ambient space in parallel along the subspace.

In linear algebra, subspaces are used, among other things, to characterize the kernel and image of linear maps, solution sets of linear equations, and eigenspaces of eigenvalue problems. In functional analysis, subspaces of Hilbert spaces, Banach spaces and dual spaces are studied in particular. Subspaces have diverse applications, for example in numerical methods for large linear systems of equations and for partial differential equations, in optimization problems, in coding theory and in signal processing.

## Definition

If $(V, +, \cdot)$ is a vector space over a field $K$, then a subset $U \subseteq V$ forms a linear subspace of $V$ if and only if it is nonempty and closed under vector addition and scalar multiplication. That is, the following must hold:

• $U \neq \emptyset$
• $u + w \in U$ for all vectors $u, w \in U$
• $\alpha \cdot u \in U$ for all scalars $\alpha \in K$ and all vectors $u \in U$

The vector addition and scalar multiplication in the subspace $U$ are the restrictions of the corresponding operations of the ambient space $V$.

Equivalently to the first condition, one can demand that the zero vector of $V$ be contained in $U$. Indeed, if $U$ contains at least one element $u$, then by the closure of $U$ under scalar multiplication (setting $\alpha = 0$) the zero vector $0 \cdot u = 0$ also lies in $U$. Conversely, if the set $U$ contains the zero vector, it is not empty.

With these three criteria one can check whether a given subset $U$ of a vector space $V$ itself forms a vector space, without having to verify all the vector space axioms. A linear subspace is often referred to simply as a "subspace" when it is clear from the context that a linear subspace is meant and not a more general kind of subspace.
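The three criteria can also be probed numerically on a finite sample of vectors. This is only a sketch, not a proof: closure must hold for *all* vectors and scalars, while a finite check can at best falsify the subspace property. The helper and set names below are illustrative choices, not standard API.

```python
import itertools

def is_closed_on_samples(in_subset, vectors, scalars):
    """Probe the three subspace criteria on finite samples.

    A passing check is only evidence; a failing check is a genuine
    counterexample to the subspace property."""
    # Criterion 1: the subset must be nonempty on the sample.
    if not any(in_subset(v) for v in vectors):
        return False
    members = [v for v in vectors if in_subset(v)]
    # Criterion 2: closure under vector addition.
    for u, w in itertools.product(members, repeat=2):
        if not in_subset(tuple(a + b for a, b in zip(u, w))):
            return False
    # Criterion 3: closure under scalar multiplication.
    for c in scalars:
        for u in members:
            if not in_subset(tuple(c * a for a in u)):
                return False
    return True

# The diagonal {(x, y) : x = y} passes; the unit circle plus origin fails.
diag = lambda v: v[0] == v[1]
circle = lambda v: v[0] ** 2 + v[1] ** 2 == 1 or v == (0, 0)
samples = [(float(x), float(y)) for x in range(-2, 3) for y in range(-2, 3)]
print(is_closed_on_samples(diag, samples, [-1.0, 0.0, 2.0]))    # True
print(is_closed_on_samples(circle, samples, [-1.0, 0.0, 2.0]))  # False
```

The circle fails because, for example, $(1,0) + (0,1) = (1,1)$ leaves the set, even though the set contains the zero vector.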

## Examples

The set of vectors $(x, y)$ for which $x = y$ holds forms a subspace of the Euclidean plane.

### Concrete examples

The set $V = \mathbb{R}^2$ of all vectors $(x, y)$ of the real number plane forms a vector space with the usual componentwise vector addition and scalar multiplication. The subset $U$ of vectors for which $x = y$ holds forms a subspace of $V$, because for all $a, b, c \in \mathbb{R}$:

• the coordinate origin $(0, 0)$ lies in $U$
• $(a, a) + (b, b) = (a + b, a + b) \in U$
• $c \cdot (a, a) = (c \cdot a, c \cdot a) \in U$

As a further example, consider the vector space $V = \mathbb{R}^{\mathbb{R}}$ of all real functions $f \colon \mathbb{R} \to \mathbb{R}$ with the usual pointwise addition and scalar multiplication. In this vector space, the set $U$ of linear functions $f(x) = ax + b$ forms a subspace, because for $a, b, c, d \in \mathbb{R}$:

• the zero function $x \mapsto 0x + 0$ lies in $U$
• $f(x) + g(x) = (ax + b) + (cx + d) = (a + c)x + (b + d)$, hence $f + g \in U$
• $c \cdot f(x) = c \cdot (ax + b) = (c \cdot a)x + (c \cdot b)$, hence $c \cdot f \in U$

### More general examples

• Every vector space contains itself and the zero vector space $\{0\}$, which consists only of the zero vector, as trivial subspaces.
• In the vector space $\mathbb{R}$ of real numbers, $\{0\}$ and all of $\mathbb{R}$ are the only subspaces.
• In the real vector space $\mathbb{C}$ of complex numbers, the set $\mathbb{R}$ of real numbers and the set $i\mathbb{R}$ of imaginary numbers are subspaces.
• In the Euclidean plane $\mathbb{R}^2$, all straight lines through the origin form subspaces.
• In Euclidean space $\mathbb{R}^3$, all straight lines and planes through the origin form subspaces.
• In the vector space of all polynomials, the set of polynomials of degree at most $k$ forms a subspace for every natural number $k \geq 0$.
• In the vector space of square matrices, the symmetric and the skew-symmetric matrices each form subspaces.
• In the vector space of real functions on an interval, the integrable functions, the continuous functions and the differentiable functions each form subspaces.
• In the vector space of all mappings between two vector spaces over the same field, the set of linear maps forms a subspace.

## Properties

### Vector space axioms

The three subspace criteria are indeed sufficient and necessary for the validity of all vector space axioms. By the closure of the set $U$, setting $\alpha = -1$ yields for every vector $u \in U$

$(-1) \cdot u = -u \in U$

and then, setting $w = -u$,

$0 = u - u \in U$.

The set $U$ thus contains in particular the zero vector and the additive inverse $-u$ of each of its elements $u$. Hence $(U, +)$ is a subgroup of $(V, +)$ and in particular an abelian group. The associative law, the commutative law, the distributive laws and the neutrality of one carry over directly from the ambient space $V$. Thus $(U, +, \cdot)$ satisfies all vector space axioms and is itself a vector space. Conversely, every subspace must satisfy the three given criteria, since its vector addition and scalar multiplication are the restrictions of the corresponding operations of $V$.

### Representation

The linear hull $\langle a \rangle$ of a vector $a$ in the Euclidean plane

Every subset $X = \{v_1, \ldots, v_n\}$ of vectors of a vector space $V$ spans, by forming all possible linear combinations,

$\langle X \rangle = \operatorname{span}\{v_1, \ldots, v_n\} = \{\alpha_1 v_1 + \ldots + \alpha_n v_n \mid \alpha_1, \ldots, \alpha_n \in K\}$,

a subspace of $V$, which is called the linear hull of $X$. The linear hull is the smallest subspace containing the set $X$ and equals the intersection of all subspaces of $V$ that contain $X$. Conversely, every subspace $U$ is the span of some subset $X$ of $V$, that is,

$U = \langle X \rangle$,

where the set $X$ is called a generating system of $U$. A minimal generating system consists of linearly independent vectors and is called a basis of the vector space. The number of elements of a basis is the dimension of the vector space.
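Numerically, the dimension of the span of finitely many vectors can be read off as the rank of the matrix whose columns are those vectors. A small NumPy sketch (the vectors are an arbitrary illustrative choice):

```python
import numpy as np

# Three vectors in R^3; the third is a linear combination of the first two,
# so together they span only a two-dimensional subspace (a plane through 0).
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 3 * v2                     # linearly dependent on v1 and v2

X = np.column_stack([v1, v2, v3])
dim_span = np.linalg.matrix_rank(X)  # dimension of <X> = span{v1, v2, v3}
print(dim_span)                      # 2
```

Here $\{v_1, v_2\}$ already forms a basis of the span: a minimal generating system of two linearly independent vectors.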

## Operations

### Intersection and union

The intersection of two subspaces $U_1, U_2$ of a vector space $V$,

$U_1 \cap U_2 = \{v \in V \mid v \in U_1 \text{ and } v \in U_2\}$,

is always itself a subspace.

The union of two subspaces,

$U_1 \cup U_2 = \{v \in V \mid v \in U_1 \text{ or } v \in U_2\}$,

however, is a subspace only if $U_1 \subseteq U_2$ or $U_2 \subseteq U_1$ holds. Otherwise the union is closed under scalar multiplication, but not under vector addition.
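The failure of closure under addition can be seen concretely for the two coordinate axes in $\mathbb{R}^2$ (a small NumPy sketch; the choice of axes is just an illustration):

```python
import numpy as np

# U1 = x-axis and U2 = y-axis are both subspaces of R^2,
# but their union is not: it is not closed under addition.
u1 = np.array([1.0, 0.0])   # lies in U1
u2 = np.array([0.0, 1.0])   # lies in U2

s = u1 + u2                 # (1, 1)

in_U1 = s[1] == 0           # is the sum on the x-axis?
in_U2 = s[0] == 0           # is the sum on the y-axis?
print(bool(in_U1 or in_U2)) # False: the sum leaves the union U1 ∪ U2
```

By contrast, any scalar multiple of a vector on one axis stays on that axis, matching the statement that the union is closed under scalar multiplication.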

### Sum

The sum of two subspaces $U_1, U_2$ of a vector space $V$,

$U_1 + U_2 = \{u_1 + u_2 \mid u_1 \in U_1, u_2 \in U_2\}$,

is again a subspace, namely the smallest subspace that contains $U_1 \cup U_2$. For the sum of two finite-dimensional subspaces, the dimension formula

$\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2)$

holds, from which, conversely, the dimension of the intersection of two subspaces can be read off. Bases of intersections and sums of finite-dimensional subspaces can be computed with the Zassenhaus algorithm.
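The dimension formula can be checked numerically via matrix ranks: the sum of two spans is the span of all the generators together. A NumPy sketch with two coordinate planes in $\mathbb{R}^3$ (an illustrative choice):

```python
import numpy as np

def dim(generators):
    """Dimension of the span of the given vectors (rank of the matrix
    having them as columns)."""
    return np.linalg.matrix_rank(np.column_stack(generators))

# Two planes through the origin in R^3.
U1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]  # xy-plane
U2 = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]  # yz-plane

dim_sum = dim(U1 + U2)                 # U1 + U2 is spanned by all generators
dim_cut = dim(U1) + dim(U2) - dim_sum  # dimension formula solved for U1 ∩ U2
print(dim_sum, dim_cut)                # 3 1
```

The two planes together fill all of $\mathbb{R}^3$, and the formula recovers that their intersection (the $y$-axis) is one-dimensional.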

### Direct sum

If the intersection of two subspaces $U_1, U_2$ consists only of the zero vector, $U_1 \cap U_2 = \{0\}$, then the sum is called an inner direct sum

$U_1 \oplus U_2$,

because it is isomorphic to the outer direct sum of the two vector spaces. In this case, for each $u \in U_1 \oplus U_2$ there exist uniquely determined vectors $u_1 \in U_1$ and $u_2 \in U_2$ with $u = u_1 + u_2$. Since the zero vector space is zero-dimensional, the dimension formula then yields for the dimension of the direct sum

$\dim(U_1 \oplus U_2) = \dim U_1 + \dim U_2$,

which also holds in the infinite-dimensional case.

### Multiple operands

The preceding operations can also be generalized to more than two operands. If $(U_i)_{i \in I}$ is a family of subspaces of $V$, where $I$ is an arbitrary index set, then the intersection of these subspaces,

$\bigcap_{i \in I} U_i = \left\{ v \in V \mid v \in U_i \text{ for all } i \in I \right\}$,

is again a subspace of $V$. The sum of several subspaces,

$\sum_{i \in I} U_i = \left\{ \sum_{i \in I} u_i \mid u_i \in U_i, \text{ almost all } u_i = 0 \right\}$,

also yields a subspace of $V$, where for an index set with infinitely many elements only finitely many summands may differ from the zero vector. Such a sum is called direct and is then denoted by

$\bigoplus_{i \in I} U_i$

when the intersection of each subspace $U_i$ with the sum of the remaining subspaces is the zero vector space. This is equivalent to every vector having a unique representation as a sum of elements of the subspaces.

## Derived spaces

### Complementary space

For each subspace $U$ of $V$ there exists at least one complementary space $W \subseteq V$ such that

$V = U \oplus W$

holds. Each such complementary space corresponds to exactly one projection $P$ onto the subspace $U$, that is, an idempotent linear map $P \colon V \to V$ for which

$V = P(V) \oplus (\operatorname{Id} - P)(V)$

holds, where $\operatorname{Id}$ is the identity map. In general, a subspace has several complementary spaces, none of which is distinguished by the vector space structure alone. In inner product spaces, however, one can speak of mutually orthogonal subspaces. If $V$ is finite-dimensional, then for each subspace $U$ there exists a uniquely determined orthogonal complementary space $U^\perp$, namely the orthogonal complement of $U$, and then

$V = U \oplus U^\perp$.
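For a subspace given as the span of the columns of a matrix, an orthonormal basis of the orthogonal complement can be obtained from the singular value decomposition: the left singular vectors beyond the rank of the matrix span exactly the vectors orthogonal to every column. A NumPy sketch (the tolerance and example matrix are illustrative choices):

```python
import numpy as np

def orthogonal_complement(A, tol=1e-10):
    """Orthonormal basis of U^⊥ for U = span of the columns of A.

    In the full SVD A = Q S Vt, the columns of Q past rank(A) are
    orthogonal to the column space of A and span its complement."""
    Q, s, _ = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    return Q[:, rank:]

# U = xy-plane in R^3, spanned by e1 and e2; U^⊥ is the z-axis.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
W = orthogonal_complement(A)
print(W.shape)                    # (3, 1): a one-dimensional complement
print(np.allclose(A.T @ W, 0))    # True: W is orthogonal to every column of A
```

The dimensions add up as the direct-sum decomposition requires: $\dim U + \dim U^\perp = 2 + 1 = 3 = \dim V$.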

### Factor space

To every subspace $U$ of a vector space $V$ a factor space $V/U$ can be assigned, which arises by identifying all elements of the subspace with one another and thus projecting the elements of the vector space in parallel along the subspace. The factor space is formally defined as the set of equivalence classes

$V/U = \{\, [v] \mid v \in V \,\}$

of vectors in $V$, where the equivalence class of a vector $v$,

$[v] = v + U = \{v + u \mid u \in U\}$,

is the set of vectors in $V$ that differ from $v$ only by an element $u$ of the subspace $U$. The factor space forms a vector space if the vector space operations are defined on representatives, but it is not itself a subspace of $V$. For the dimension of the factor space,

$\dim V = \dim U + \dim V/U$.

The subspaces of $V/U$ are exactly the factor spaces $W/U$, where $W$ is a subspace of $V$ with $U \subseteq W \subseteq V$.

### Annihilator space

The dual space $V^\ast$ of a vector space $V$ over a field $K$ is the space of linear maps from $V$ to $K$ and thus a vector space itself. For a subset $X$ of $V$, the set of all functionals that vanish on $X$ forms a subspace of the dual space, the so-called annihilator space

$X^0 = \lbrace f \in V^\ast \mid f(x) = 0 \mbox{ for all } x \in X \rbrace$.

If $V$ is finite-dimensional, then the dimension of the annihilator space $U^0$ of a subspace $U$ of $V$ satisfies

$\dim V = \dim U + \dim U^0$.

The dual space $U^\ast$ of a subspace $U$ is isomorphic to the factor space $V^\ast / U^0$.

## Subspaces in linear algebra

### Linear maps

If $T \colon V \to W$ is a linear map between two vector spaces $V$ and $W$ over the same field, then the kernel of the map,

$\operatorname{ker} T = T^{-1}(\{0\}) = \{v \in V \mid T(v) = 0\}$,

forms a subspace of $V$, and the image of the map,

$\operatorname{im} T = T(V) = \{T(v) \mid v \in V\}$,

forms a subspace of $W$. Furthermore, the graph of a linear map is a subspace of the product space $V \times W$. If the vector space $V$ is finite-dimensional, then the rank–nullity theorem relates the dimensions of the spaces involved:

$\dim V = \dim(\operatorname{im} T) + \dim(\operatorname{ker} T)$.

The dimension of the image is called the rank and the dimension of the kernel the defect (or nullity) of the linear map. By the homomorphism theorem, the image is isomorphic to the factor space $V / \operatorname{ker} T$.
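For a linear map given by a matrix, rank and nullity can be computed directly, and the rank–nullity theorem checked numerically. A NumPy sketch (the matrix is an illustrative choice with a built-in dependency):

```python
import numpy as np

# T : R^4 -> R^3; the third row is the sum of the first two,
# so the rank is 2 and the kernel is two-dimensional.
T = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(T)       # dim(im T)
nullity = T.shape[1] - rank           # dim(ker T), by the rank theorem
print(rank, nullity, rank + nullity)  # 2 2 4 — equals dim V = 4
```

The theorem guarantees that rank and nullity always sum to the number of columns, i.e. to $\dim V$.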

### Linear equations

If, again, $T \colon V \to W$ is a linear map between two vector spaces over the same field, then the solution set of the homogeneous linear equation

$T(v) = 0$

is a subspace of $V$, namely the kernel of $T$. The solution set of an inhomogeneous linear equation

$T(v) = b$

with $b \neq 0$, on the other hand, is an affine-linear subspace of $V$ if it is nonempty, which is a consequence of the superposition property. The dimension of the solution space is then also equal to the dimension of the kernel of $T$.
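The structure of both solution sets can be seen numerically: a kernel vector solves the homogeneous equation, and adding any multiple of it to one particular solution again solves the inhomogeneous equation. A NumPy sketch (system and right-hand side are illustrative choices; `lstsq` is used here only to obtain one particular solution of a consistent system):

```python
import numpy as np

# T v = 0 and T v = b for T : R^3 -> R^2.
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
v0 = np.array([1.0, -1.0, 0.0])        # a kernel vector: x + y = 0, z = 0
print(np.allclose(T @ v0, 0))          # True: v0 solves the homogeneous system

b = np.array([2.0, 5.0])
v_p, *_ = np.linalg.lstsq(T, b, rcond=None)  # one particular solution

# The full solution set is the affine subspace v_p + ker T:
for t in (-1.0, 0.0, 3.0):
    assert np.allclose(T @ (v_p + t * v0), b)
print(np.allclose(T @ v_p, b))         # True
```

The affine solution set $v_p + \operatorname{ker} T$ is one-dimensional here, matching the dimension of the kernel.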

### Eigenvalue problems

If, now, $T \colon V \to V$ is a linear map of a vector space into itself, i.e. an endomorphism, with associated eigenvalue problem

$T(v) = \lambda \cdot v$,

then every eigenspace belonging to an eigenvalue $\lambda$,

$\operatorname{Eig}(\lambda) = \left\{ v \in V \mid T(v) = \lambda \cdot v \right\}$,

is a subspace of $V$, whose elements $v$ different from the zero vector are exactly the eigenvectors associated with $\lambda$. The dimension of the eigenspace is the geometric multiplicity of the eigenvalue; it is at most as large as the algebraic multiplicity of the eigenvalue.
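Since $\operatorname{Eig}(\lambda) = \operatorname{ker}(T - \lambda \operatorname{Id})$, the geometric multiplicity can be computed as a nullity. A NumPy sketch with a diagonal example matrix (an illustrative choice):

```python
import numpy as np

# An endomorphism of R^3 with a two-dimensional eigenspace for λ = 2.
T = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
lam = 2.0

# Eig(λ) = ker(T - λ·Id); its dimension is the geometric multiplicity.
M = T - lam * np.eye(3)
geom_mult = 3 - np.linalg.matrix_rank(M)
print(geom_mult)                    # 2

# Every nonzero vector of the eigenspace is an eigenvector:
v = np.array([1.0, -4.0, 0.0])      # lies in Eig(2) = xy-plane
print(np.allclose(T @ v, lam * v))  # True
```

Here the algebraic multiplicity of $\lambda = 2$ is also two, so geometric and algebraic multiplicity coincide; in general the geometric one can be strictly smaller.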

### Invariant subspaces

If $T \colon V \to V$ is again an endomorphism, then a subspace $U$ of $V$ is called invariant under $T$, or $T$-invariant for short, if

$T(U) \subseteq U$

holds, that is, if the image $T(u)$ also lies in $U$ for every $u \in U$. The image of $U$ under $T$ is then a subspace of $U$. The trivial subspaces $\{0\}$ and $V$, but also $\operatorname{ker} T$, $\operatorname{im} T$ and all eigenspaces of $T$, are always invariant under $T$. Another important example of invariant subspaces are the generalized eigenspaces, which are used, for example, in determining the Jordan normal form.
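For a subspace given by a basis, invariance $T(U) \subseteq U$ can be tested by a rank argument: appending the images of the basis vectors must not enlarge the span. A NumPy sketch (matrix and subspaces are illustrative choices):

```python
import numpy as np

def is_invariant(T, U):
    """Check T(U) ⊆ U for U = span of the columns of U: the subspace
    is invariant iff adding the image columns T·U keeps the rank."""
    r = np.linalg.matrix_rank(U)
    return bool(np.linalg.matrix_rank(np.column_stack([U, T @ U])) == r)

T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

xy_plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
z_axis = np.array([[0.0], [0.0], [1.0]])
y_axis = np.array([[0.0], [1.0], [0.0]])

print(is_invariant(T, xy_plane))  # True: T maps the plane into itself
print(is_invariant(T, z_axis))    # True: an eigenspace, for λ = 3
print(is_invariant(T, y_axis))    # False: T e2 = e1 + e2 leaves the y-axis
```

The invariant plane here is in fact the generalized eigenspace of the eigenvalue $1$ of this Jordan-block-like matrix.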

## Sub-vector spaces in functional analysis

### Sub-Hilbert spaces

In Hilbert spaces, that is, complete inner product spaces, one considers in particular sub-Hilbert spaces: subspaces that are still complete with respect to the restriction of the scalar product. This property is equivalent to the subspace being closed with respect to the norm topology induced by the scalar product. Not every subspace of a Hilbert space is complete, but for every incomplete subspace a sub-Hilbert space can be obtained by taking its closure, in which it is then dense. By the projection theorem, every sub-Hilbert space also has a uniquely determined orthogonal complement, which is always closed.

Sub-Hilbert spaces play an important role in quantum mechanics and in the Fourier and multiscale analysis of signals.

### Sub-Banach spaces

In Banach spaces, i.e. complete normed spaces, one can analogously consider sub-Banach spaces: subspaces that are complete with respect to the restriction of the norm. As in the Hilbert space case, a subspace of a Banach space is a sub-Banach space if and only if it is closed. Furthermore, for every incomplete subspace of a Banach space, a sub-Banach space in which it lies densely can be obtained by completion. However, in general a sub-Banach space need not have a complementary sub-Banach space.

In a seminormed space, the vectors with seminorm zero form a subspace. A normed space is obtained as a factor space from a seminormed space by considering equivalence classes of vectors that do not differ with respect to the seminorm. If the seminormed space is complete, this factor space is a Banach space. This construction is used in particular for the L^p spaces and related function spaces.

In the numerical solution of partial differential equations by the finite element method, the solution is approximated in suitable finite-dimensional subspaces of the underlying Sobolev space.

### Topological dual spaces

In functional analysis, besides the algebraic dual space, one also considers the topological dual space $V'$ of a vector space $V$, which consists of the continuous linear maps from $V$ to $K$. For a topological vector space, the topological dual space forms a subspace of the algebraic dual space. By the Hahn–Banach theorem, a linear functional on a subspace of a real or complex vector space that is bounded by a sublinear function has a linear extension to the whole space that is likewise bounded by this sublinear function. As a consequence, the topological dual space of a normed space contains a sufficiently rich supply of functionals, which forms the basis of a rich duality theory.

## Other uses

Other important applications of subspaces are: