# Vector space

Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, bottom). Above, w is stretched by a factor of 2; the result is the sum v + 2 · w.

A vector space or linear space is an algebraic structure that is used in many branches of mathematics. Vector spaces are the central subject of investigation in linear algebra. The elements of a vector space are called vectors. They can be added or multiplied by scalars (numbers); the result is again a vector of the same vector space. The term arose when these properties were abstracted from the vectors of Euclidean space so that they could then be transferred to more abstract objects such as functions or matrices.

The scalars by which a vector can be multiplied come from a field. A vector space is therefore always a vector space over a particular field. Very often this is the field $\mathbb{R}$ of real numbers or the field $\mathbb{C}$ of complex numbers; one then speaks of a real vector space or a complex vector space.

A basis of a vector space is a set of vectors that allows each vector to be represented by unique coordinates. The number of basis vectors in a basis is called the dimension of the vector space. It is independent of the choice of basis and can also be infinite. The structural properties of a vector space are uniquely determined by the field over which it is defined and by its dimension.

A basis makes it possible to perform calculations with vectors using their coordinates instead of the vectors themselves, which makes some applications easier.

## Definition

Let $V$ be a set, $(K, +, \cdot)$ a field, $\oplus \colon V \times V \to V$ an inner binary operation, called vector addition, and $\odot \colon K \times V \to V$ an outer binary operation, called scalar multiplication. One then calls $(V, \oplus, \odot)$ a vector space over the field $K$, or a $K$-vector space for short, if for all $u, v, w \in V$ and all $\alpha, \beta \in K$ the following properties hold:

Vector addition:

V1: $u \oplus (v \oplus w) = (u \oplus v) \oplus w$ (associative law)
V2: existence of a neutral element $0_V \in V$ with $v \oplus 0_V = 0_V \oplus v = v$
V3: for every $v \in V$, existence of an inverse element $-v \in V$ with $v \oplus (-v) = (-v) \oplus v = 0_V$
V4: $v \oplus u = u \oplus v$ (commutative law)

Scalar multiplication:

S1: $\alpha \odot (u \oplus v) = (\alpha \odot u) \oplus (\alpha \odot v)$ (distributive law)
S2: $(\alpha + \beta) \odot v = (\alpha \odot v) \oplus (\beta \odot v)$
S3: $(\alpha \cdot \beta) \odot v = \alpha \odot (\beta \odot v)$
S4: neutrality of the identity element $1 \in K$, that is, $1 \odot v = v$

Remarks

• The axioms V1, V2 and V3 of vector addition state that $(V, \oplus)$ forms a group, and axiom V4 that this group is abelian. Its neutral element $0_V$ is called the zero vector.
• A field $(K, +, \cdot)$ is an abelian group $(K, +)$ with neutral element $0_K \in K$ (the zero element) and a second inner binary operation $\cdot$, such that $(K \setminus \{0_K\}, \cdot)$ is also an abelian group and the distributive laws hold. Important examples of fields are the real numbers $\mathbb{R}$ and the complex numbers $\mathbb{C}$.
• The axioms S1 and S2 of scalar multiplication are also referred to as distributive laws, and axiom S3 as an associative law. Note, however, that in axiom S2 the plus signs denote two different additions (on the left the one in $K$, on the right the one in $V$), and that in axiom S3 the scalar multiplication is associative with the multiplication in $K$.
• Axioms S1 and S2 guarantee, for the scalar multiplication, left compatibility with vector addition and right compatibility with field addition and vector addition. Axioms S3 and S4 also ensure that the multiplicative group $(K \setminus \{0_K\}, \cdot)$ of the field operates on $V$.
• In this article, as is usual in mathematics, both the addition in the field $K$ and the addition in the vector space $V$ are denoted by the same sign $+$, even though they are different operations. For $u \oplus (-v)$ one writes $u - v$. Likewise, both the multiplication in the field and the scalar multiplication between a field element and a vector space element are denoted by $\cdot$, and with both multiplications it is common to omit the multiplication dot. Because the axioms above hold in vector spaces, there is no practical danger of confusing the two additions or the two multiplications; moreover, the operations can be told apart from the elements being added or multiplied. Using the same symbols makes the vector space axioms particularly suggestive: for example, axiom S1 is written $\alpha \cdot (u + v) = \alpha \cdot u + \alpha \cdot v$ and axiom S3 as $(\alpha \cdot \beta) \cdot v = \alpha \cdot (\beta \cdot v)$.
• With the two carrier sets $V$ and $K$, vector spaces are examples of heterogeneous algebras.
• A vector space over the field of complex or real numbers is called a complex or real vector space, respectively.

## First properties

The following statements hold for all $\alpha \in K$ and $v, w \in V$:

• $(-\alpha) \cdot v = -(\alpha \cdot v) = \alpha \cdot (-v)$.
• $\alpha \cdot v = 0_V \quad \Leftrightarrow \quad \alpha = 0_K$ or $v = 0_V$.
• The equation $v + x = w$ is uniquely solvable for all $v, w \in V$; the solution is $x = w + (-v)$.

## Examples

### Euclidean plane

An illustrative vector space is the two-dimensional Euclidean plane $\mathbb{R}^2$ (in right-angled Cartesian coordinate systems) with the classes of arrows (displacements or translations) as vectors and the real numbers as scalars.

$\vec{v} = (2,3)$ is the displacement by 2 units to the right and 3 units up,
$\vec{w} = (3,-5)$ the displacement by 3 units to the right and 5 units down.

The sum of two displacements is again a displacement, namely the displacement that is obtained by performing the two displacements one after the other:

$\vec{v} + \vec{w} = (5,-2)$, i.e., the displacement by 5 units to the right and 2 units down.

The zero vector $\vec{0} = (0,0)$ corresponds to the displacement that leaves all points in place, i.e., the identity map.

By stretching the displacement $\vec{v}$ by a scalar $a = 3$ from the set of real numbers, we obtain three times the displacement:

$a \cdot \vec{v} = 3 \cdot (2,3) = (6,9)$.
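The plane example above can be sketched in code. This is a minimal illustration, not from the article; displacements are modeled as pairs of reals, and the helper names `add` and `scale` are our own:

```python
# Displacements in the Euclidean plane, modeled as pairs of numbers.

def add(v, w):
    """Perform two displacements one after the other."""
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    """Stretch a displacement by the scalar factor a."""
    return (a * v[0], a * v[1])

v = (2, 3)    # 2 units right, 3 units up
w = (3, -5)   # 3 units right, 5 units down

print(add(v, w))    # (5, -2), as in the text
print(scale(3, v))  # (6, 9)
```

The zero vector `(0, 0)` is the neutral element of `add`, matching the identity displacement in the text.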

### Coordinate space

If $K$ is a field and $n$ a natural number, then the $n$-fold Cartesian product

$K^n = \{(v_1, \dots, v_n) \mid v_1, \dots, v_n \in K\},$

the set of all $n$-tuples with entries in $K$, is a vector space over $K$. Addition and scalar multiplication are defined component-wise; for $u = (u_1, u_2, \dots, u_n), v = (v_1, v_2, \dots, v_n) \in K^n$ and $\alpha \in K$, one sets:

$u + v = (u_1, u_2, \dots, u_n) + (v_1, v_2, \dots, v_n) = (u_1 + v_1, u_2 + v_2, \dots, u_n + v_n)$

and

$\alpha \cdot v = \alpha \cdot (v_1, v_2, \dots, v_n) = (\alpha\, v_1, \alpha\, v_2, \dots, \alpha\, v_n).$

The tuples are often written as column vectors, that is, their entries are written one below the other. The vector spaces $K^n$ are the standard examples of finite-dimensional vector spaces. Every $n$-dimensional $K$-vector space is isomorphic to the vector space $K^n$. With the help of a basis, each element of such a vector space can be represented uniquely by an element of $K^n$ as a coordinate tuple.
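The component-wise definitions can be sketched directly; as an illustration (our own helper names, not from the article), here with $K = \mathbb{Q}$ via Python's exact `Fraction` type:

```python
from fractions import Fraction

# Component-wise operations on K^n, here with K = Q (exact rationals).

def vec_add(u, v):
    assert len(u) == len(v), "vectors must live in the same K^n"
    return tuple(ui + vi for ui, vi in zip(u, v))

def scalar_mul(alpha, v):
    return tuple(alpha * vi for vi in v)

u = (Fraction(1, 2), Fraction(2, 3), Fraction(0))
v = (Fraction(1, 2), Fraction(1, 3), Fraction(5))

print(vec_add(u, v))                  # (1, 1, 5)
print(scalar_mul(Fraction(3), u))     # (3/2, 2, 0)
```

Because the operations only use the field operations of the entries, the same code works for any numeric type modeling a field.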

### Function spaces

#### Basics and definition

Example of addition of functions: the sum of the sine function and the exponential function is $\sin + \exp \colon \mathbb{R} \to \mathbb{R}$ with $(\sin + \exp)(x) = \sin(x) + \exp(x)$.

If $K$ is a field, $V$ a $K$-vector space and $A$ an arbitrary set, then an addition and a scalar multiplication can be defined pointwise on the set $F(A, V)$ of all functions $f \colon A \to V$: for $f, g \in F(A, V)$ and $\alpha \in K$, the functions $f + g \in F(A, V)$ and $\alpha \cdot f \in F(A, V)$ are defined by

$(f + g)(x) = f(x) + g(x)$ for all $x \in A$ and
$(\alpha \cdot f)(x) = \alpha \cdot f(x)$ for all $x \in A$.

With this addition and scalar multiplication, $F(A, V)$ is a $K$-vector space. This applies in particular to $F(A, K)$, where the field $K$ itself is chosen as the target set. Further examples of vector spaces are obtained as subspaces of these function spaces.

In many applications $K = \mathbb{R}$, the field of real numbers, or $K = \mathbb{C}$, the field of complex numbers, and $A$ is a subset of $\mathbb{R}$, $\mathbb{R}^n$, $\mathbb{C}$, or $\mathbb{C}^n$. Examples include the vector space of all functions from $\mathbb{R}$ to $\mathbb{R}$ and its subspaces $C^0(\mathbb{R}, \mathbb{R})$ of all continuous functions and $C^k(\mathbb{R}, \mathbb{R})$ of all $k$ times continuously differentiable functions from $\mathbb{R}$ to $\mathbb{R}$.
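The pointwise operations can be sketched with closures; a minimal illustration (helper names `f_add` and `f_scale` are our own), using the $\sin + \exp$ example from the text:

```python
import math

# Pointwise addition and scalar multiplication in F(A, R):
# the sum of two functions is a new function evaluated pointwise.

def f_add(f, g):
    return lambda x: f(x) + g(x)

def f_scale(alpha, f):
    return lambda x: alpha * f(x)

h = f_add(math.sin, math.exp)   # the function sin + exp from the text
print(h(0.0))                   # sin(0) + exp(0) = 0 + 1 = 1.0
```

The zero vector of this space is the constant zero function `lambda x: 0`.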

#### Space of linear functions

A simple example of a function space is the two-dimensional space of real linear functions, that is, the functions of the form

$f \colon \mathbb{R} \to \mathbb{R},\; x \mapsto ax + b$

with real numbers $a$ and $b$. These are the functions whose graph is a straight line. The set of these functions is a subspace of the space of all real functions, because the sum of two linear functions is again linear, and a scalar multiple of a linear function is also a linear function.

For example, the sum of the two linear functions $f$ and $g$ with

$f(x) = 2x + 3$, $g(x) = 3x - 5$

is the function $f + g$ with

$(f + g)(x) = f(x) + g(x) = 2x + 3 + 3x - 5 = (2 + 3)x + (3 - 5) = 5x - 2$.

Three times the linear function $f$ is the linear function $3f$ with

$(3f)(x) = 3 \cdot f(x) = 3 \cdot (2x + 3) = (3 \cdot 2)x + (3 \cdot 3) = 6x + 9$.
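Because each such function is determined by its coefficients, the space can be identified with pairs $(a, b)$, and the operations act on coefficients. A small sketch (helper names are our own) reproducing the worked example:

```python
# The functions x -> a*x + b identified with coefficient pairs (a, b).

def lin_add(f, g):
    (a1, b1), (a2, b2) = f, g
    return (a1 + a2, b1 + b2)

def lin_scale(c, f):
    a, b = f
    return (c * a, c * b)

f = (2, 3)    # f(x) = 2x + 3
g = (3, -5)   # g(x) = 3x - 5

print(lin_add(f, g))     # (5, -2): (f+g)(x) = 5x - 2
print(lin_scale(3, f))   # (6, 9):  (3f)(x) = 6x + 9
```

This identification is exactly the coordinate representation with respect to the basis $\{x, 1\}$ of the space.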

### Polynomial spaces

The set $K[X]$ of polynomials with coefficients from a field $K$ forms, with the usual addition and the usual multiplication by a field element, an infinite-dimensional vector space. The set of monomials $\{1,\ x,\ x^2,\ x^3,\ x^4, \dots\}$ is a basis of this vector space. The set of polynomials whose degree is bounded above by some $n \in \mathbb{N}$ forms a subspace of dimension $n + 1$. For example, the set of all polynomials of degree less than or equal to 4, i.e., all polynomials of the form

$ax^4 + bx^3 + cx^2 + dx + e$,

is a 5-dimensional vector space with basis $\{1,\ x,\ x^2,\ x^3,\ x^4\}$.

Over infinite fields $K$, the (abstract) polynomials can be identified with the associated polynomial functions. With this approach, the polynomial spaces correspond to subspaces of the space of all functions from $K$ to $K$. For example, the space of all real polynomials of degree $\leq 1$ corresponds to the space of linear functions.
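Polynomials of bounded degree can be sketched as coefficient lists, mirroring the coordinate representation in the monomial basis. A minimal illustration (representation convention and helper names are our own): `[c0, c1, ...]` stores the coefficient of $x^k$ at index $k$:

```python
from itertools import zip_longest

# Polynomials as coefficient lists [c0, c1, ...] in the monomial basis.

def poly_add(p, q):
    # Pad the shorter list with zeros, then add component-wise.
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(alpha, p):
    return [alpha * c for c in p]

def poly_eval(p, x):
    # Horner's scheme for evaluating the associated polynomial function.
    result = 0
    for c in reversed(p):
        result = result * x + c
    return result

p = [3, 2]    # 2x + 3
q = [-5, 3]   # 3x - 5
s = poly_add(p, q)
print(s)                  # [-2, 5], i.e. 5x - 2
print(poly_eval(s, 1))    # 5*1 - 2 = 3
```

Note the vector space operations never multiply two polynomials together; only addition and scaling by a field element are needed.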

### Field extensions

If $L$ is an extension field of $K$, then $L$, with its addition and with the multiplication restricted to $K \times L \rightarrow L$ as scalar multiplication, is a $K$-vector space. The rules to be verified follow directly from the field axioms for $L$. This observation plays an important role in field theory.

For example, in this way $\mathbb{C}$ is a two-dimensional $\mathbb{R}$-vector space; $\{1, \mathrm{i}\}$ is a basis. Likewise, $\mathbb{R}$ is an infinite-dimensional $\mathbb{Q}$-vector space, for which, however, no basis can be specified explicitly.

## Linear maps

Linear maps are the mappings between two vector spaces that preserve the vector space structure. They are the homomorphisms between vector spaces in the sense of universal algebra. A function $f \colon U \to V$ between two vector spaces $U$ and $V$ over the same field $K$ is called linear if and only if for all $a, b \in U$ and all $\lambda \in K$

• $f(a + b) = f(a) + f(b)$
• $f(\lambda a) = \lambda f(a)$

are fulfilled. That is, $f$ is compatible with the structures that constitute the vector space: addition and scalar multiplication. Two vector spaces are called isomorphic if there is a linear map between them that is bijective, i.e., has an inverse function. This inverse function is then automatically linear as well. Isomorphic vector spaces do not differ in terms of their structure as vector spaces.
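The two linearity conditions can be spot-checked numerically for a map given by a matrix. This is a check on sample inputs, not a proof, and the matrix and helper names are our own illustration:

```python
import random

# A map R^2 -> R^2 given by a 2x2 matrix, and a randomized spot-check
# of the two linearity conditions on integer inputs (exact arithmetic).

def mat_apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    return (c * v[0], c * v[1])

A = [[1, 2], [3, 4]]
random.seed(0)
for _ in range(100):
    a = (random.randint(-9, 9), random.randint(-9, 9))
    b = (random.randint(-9, 9), random.randint(-9, 9))
    lam = random.randint(-9, 9)
    # f(a + b) = f(a) + f(b)
    assert mat_apply(A, add(a, b)) == add(mat_apply(A, a), mat_apply(A, b))
    # f(lambda * a) = lambda * f(a)
    assert mat_apply(A, scale(lam, a)) == scale(lam, mat_apply(A, a))
print("linearity checks passed")
```

That every matrix map passes these checks reflects the statement below that a linear map between finite-dimensional spaces can be represented as a matrix.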

## Basis of a vector space

For finitely many $v_1, \dotsc, v_n \in V$ and $\alpha_1, \dotsc, \alpha_n \in K$ one calls the sum

$s = \alpha_1 v_1 + \dotsb + \alpha_n v_n = \sum_{i=1}^{n} \alpha_i v_i$

a linear combination of the vectors $v_1, \dotsc, v_n$. Here $s$ is itself again a vector of the vector space $V$.

If $S$ is a subset of $V$, then the set of all linear combinations of vectors from $S$ is called the linear span (or linear hull) of $S$. It is a subspace of $V$, namely the smallest subspace that contains $S$.

A subset $S$ of a vector space $V$ is said to be linearly dependent if the zero vector can be expressed in a non-trivial way as a linear combination of vectors $v_1, \dotsc, v_n \in S$. "Non-trivial" means that at least one scalar (one coefficient of the linear combination) is different from zero. Otherwise $S$ is called linearly independent.

A subset $B$ of a vector space $V$ is a basis of $V$ if $B$ is linearly independent and the linear span of $B$ is the whole vector space.

Given the axiom of choice, Zorn's lemma can be used to prove that every vector space has a basis (every vector space is a free object); in the context of Zermelo-Fraenkel set theory this statement is in fact equivalent to the axiom of choice. This has far-reaching consequences for the structure of every vector space: First, it can be shown that any two bases of a vector space have the same cardinality, so that the cardinality of any basis of a vector space is a unique cardinal number, called the dimension of the vector space. Two vector spaces over the same field are isomorphic if and only if they have the same dimension: since two bases of the two vector spaces are equinumerous, there is a bijection between them, and this can be extended to a bijective linear map, i.e., an isomorphism of the two vector spaces. It can also be shown that every linear map is determined by the images of the elements of a basis. This allows any linear map between finite-dimensional vector spaces to be represented as a matrix. This carries over to infinite-dimensional vector spaces, where it must be ensured that each generalized "column" contains only finitely many non-zero entries, so that each basis vector is mapped to a linear combination of basis vectors in the target space.

Using the concept of a basis, the problem of finding a skeleton in the category of all vector spaces over a given field is reduced to finding a skeleton in the category of sets, which is given by the class of cardinal numbers. Every $d$-dimensional vector space can also be understood as the $d$-fold direct sum of the underlying field. The direct sums of a field thus form a skeleton of the category of vector spaces over it.

The linear factors in the representation of a vector in terms of the basis vectors are called the coordinates of the vector with respect to that basis; they are elements of the underlying field. Only once a basis has been introduced is each vector assigned its coordinates with respect to the chosen basis. This makes computing easier, especially if one can use the assigned "concrete" coordinate vectors instead of vectors in "abstract" vector spaces.
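Computing coordinates with respect to a basis amounts to solving a linear system. A small sketch for $\mathbb{Q}^2$ (our own helper, assuming a genuine basis, i.e., non-zero determinant), using Cramer's rule:

```python
from fractions import Fraction

# Coordinates of v with respect to the basis (b1, b2) of Q^2:
# find alpha, beta with v = alpha*b1 + beta*b2, via Cramer's rule.

def coordinates(b1, b2, v):
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1, b2 are linearly dependent, not a basis")
    alpha = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    beta = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return alpha, beta

b1, b2 = (1, 1), (1, -1)
alpha, beta = coordinates(b1, b2, (5, 1))
print(alpha, beta)   # 3 and 2, since (5,1) = 3*(1,1) + 2*(1,-1)
```

The uniqueness of the result is exactly the defining property of a basis: every vector has one and only one coordinate tuple.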

## Subspace

A sub-vector space (also called a linear subspace) is a subset of a vector space that is itself a vector space over the same field. The vector space operations are inherited by the subspace. If $V$ is a vector space over a field $K$, then a subset $U \subseteq V$ forms a subspace if and only if the following conditions are met:

• $U \neq \emptyset$
• For all $u, v \in U$: $u + v \in U$
• For all $v \in U$ and $\alpha \in K$: $\alpha\, v \in U$

The set $U$ must therefore be closed under vector addition and scalar multiplication. Every vector space contains two trivial subspaces, namely the space itself on the one hand and the zero vector space $\{0\}$, which consists only of the zero vector, on the other. Since $U$ is itself a vector space, these conditions imply in particular the necessary condition that $U$ contains the zero vector. Every subspace is the image of another vector space under a linear map into the space, and the kernel of a linear map into another vector space. From a vector space and a subspace $U$, a further vector space, the quotient space or factor space, can be formed by forming equivalence classes; this is closely related to the property of a subspace of being a kernel, see also the homomorphism theorem.
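The closure conditions can be spot-checked for a concrete subset; as an illustration (a finite check on sample points, not a proof), take the line $U = \{(t, 2t) \mid t \in \mathbb{R}\}$ in $\mathbb{R}^2$:

```python
import random

# Spot-check that the line U = {(t, 2t)} is closed under addition and
# scalar multiplication in R^2, using integer samples for exactness.

def in_U(v):
    return v[1] == 2 * v[0]

random.seed(1)
for _ in range(100):
    s, t = random.randint(-9, 9), random.randint(-9, 9)
    u, v = (s, 2 * s), (t, 2 * t)               # two elements of U
    assert in_U((u[0] + v[0], u[1] + v[1]))     # closed under addition
    alpha = random.randint(-9, 9)
    assert in_U((alpha * u[0], alpha * u[1]))   # closed under scaling
assert in_U((0, 0))                             # contains the zero vector
print("closure checks passed")
```

A set failing even one such sample, e.g. the shifted line $\{(t, 2t + 1)\}$, which misses the zero vector, cannot be a subspace.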

Two or more vector spaces can be combined with one another in various ways to produce a new vector space.

### Direct sum

The direct sum of two vector spaces $V, W$ over the same field consists of all ordered pairs of vectors whose first component comes from the first space and whose second component comes from the second space:

$V \oplus W = \{(v, w) \mid v \in V, w \in W\}$

Vector addition and scalar multiplication are then defined component-wise on this set of pairs, which again yields a vector space. The dimension of $V \oplus W$ is then equal to the sum of the dimensions of $V$ and $W$. The elements of $V \oplus W$ are often written as a sum $v + w$ instead of as a pair $(v, w)$. The direct sum can also be generalized to the sum of finitely many and even infinitely many vector spaces; in the latter case only finitely many components may differ from the zero vector.
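The pair construction can be sketched concretely; a minimal illustration (our own helper names) for $\mathbb{R}^2 \oplus \mathbb{R}^1$, whose dimension is $2 + 1 = 3$:

```python
# The direct sum R^2 (+) R^1 as ordered pairs (v, w),
# with addition and scalar multiplication defined component-wise.

def ds_add(p, q):
    (v1, w1), (v2, w2) = p, q
    return (tuple(a + b for a, b in zip(v1, v2)), w1 + w2)

def ds_scale(alpha, p):
    v, w = p
    return (tuple(alpha * a for a in v), alpha * w)

p = ((1, 2), 5)    # v = (1, 2) in R^2, w = 5 in R^1
q = ((3, 4), -1)

print(ds_add(p, q))     # ((4, 6), 4)
print(ds_scale(2, p))   # ((2, 4), 10)
```

For finitely many summands the direct product below behaves identically; the two constructions only differ for infinitely many factors.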

### Direct product

The direct product of two vector spaces $V, W$ over the same field consists, like the direct sum, of all ordered pairs of vectors of the form

$V \times W = \{(v, w) \mid v \in V, w \in W\}$.

Vector addition and scalar multiplication are again defined component-wise, and the dimension of $V \times W$ is again equal to the sum of the dimensions of $V$ and $W$. In the case of the direct product of infinitely many vector spaces, however, infinitely many components may differ from the zero vector; in this respect the direct product differs from the direct sum.

### Tensor product

The tensor product of two vector spaces $V, W$ over the same field is written

$V \otimes W$.

The elements of the tensor product space have the bilinear representation

$\sum_{i \in I} \sum_{j \in J} a_{ij}\, (v_i \otimes w_j)$,

where the $a_{ij}$ are scalars, $(v_i)_{i \in I}$ is a basis of $V$, and $(w_j)_{j \in J}$ is a basis of $W$. If $V$ or $W$ is infinite-dimensional, again only finitely many summands may be non-zero. The dimension of $V \otimes W$ is then equal to the product of the dimensions of $V$ and $W$. The tensor product can also be generalized to several vector spaces.
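For finite-dimensional coordinate spaces, the coefficients $a_{ij}$ of an elementary tensor $v \otimes w$ with respect to the standard bases form the outer product of the coordinate vectors. A small sketch (our own helper name):

```python
# Coefficients of the elementary tensor v (x) w with respect to the
# standard bases: the outer product of the coordinate vectors.

def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

v = [1, 2]        # coordinates in V = R^2
w = [3, 4, 5]     # coordinates in W = R^3
t = outer(v, w)
print(t)          # [[3, 4, 5], [6, 8, 10]]

# dim(V (x) W) = 2 * 3 = 6 coefficients, matching the dimension formula.
print(sum(len(row) for row in t))  # 6
```

Note that general elements of $V \otimes W$ are sums of such elementary tensors; not every coefficient matrix arises from a single outer product.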

## Vector spaces with additional structure

In many areas of application in mathematics, such as geometry or analysis, the structure of a vector space alone is not sufficient; for example, vector spaces per se do not allow limit processes. One therefore considers vector spaces with certain additionally defined structures that are compatible, in suitable senses, with the vector space structure. Examples:

Euclidean vector space
A real vector space with a scalar product is (mostly) called a Euclidean vector space. It is a special case of a pre-Hilbert space (see there for differing nomenclature).
Normed space
A normed space is a vector space in which vectors have a length (norm). This is a non-negative real number and satisfies the triangle inequality.
Pre-Hilbert space
A pre-Hilbert space is a real or complex vector space on which an inner product (scalar product or positive definite Hermitian form) is defined. In such a space, notions such as length and angle can be defined.
Topological vector space
A topological vector space over a topological field $K$ is a topological space $V$ with a compatible $K$-vector space structure, i.e., the vector space operations $+ \colon V \times V \to V$ and $\cdot \colon K \times V \to V$ are continuous.
Unitary vector space
A complex vector space with a positive definite Hermitian form ("scalar product") is (mostly) called a unitary vector space. It is a special case of the pre-Hilbert space.

All of these examples are topological vector spaces. In topological vector spaces, the analytical concepts of convergence, uniform convergence, and completeness are applicable. A complete normed vector space is called a Banach space; a complete pre-Hilbert space is called a Hilbert space.
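The norm axioms can be illustrated for the Euclidean norm on $\mathbb{R}^2$; a randomized spot-check of the triangle inequality (a check on samples, not a proof, with a small tolerance for floating-point rounding):

```python
import math
import random

# The Euclidean norm on R^2 and a spot-check of the triangle
# inequality ||v + w|| <= ||v|| + ||w||.

def norm(v):
    return math.hypot(v[0], v[1])

random.seed(2)
for _ in range(100):
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    w = (random.uniform(-10, 10), random.uniform(-10, 10))
    s = (v[0] + w[0], v[1] + w[1])
    assert norm(s) <= norm(v) + norm(w) + 1e-12
print("triangle inequality checks passed")
```

This norm comes from the standard scalar product via $\lVert v \rVert = \sqrt{\langle v, v \rangle}$, which is the pre-Hilbert-space situation described above.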

## Generalizations

• If one takes a commutative ring $K$ instead of a field, one obtains a module. Modules are a common generalization of the notions of abelian group (for the ring of integers) and vector space (for fields).
• Some authors dispense with the commutative law of multiplication in the definition of fields and also call modules over skew fields vector spaces. If one follows this convention, then $K$-left vector spaces and $K$-right vector spaces must be distinguished when the skew field $K$ is not commutative. The definition of a vector space given above yields a left vector space, since the scalars in the product appear on the left. Right vector spaces are defined analogously, with the scalar multiplication written in mirror image. Many fundamental results, for example the existence of a basis, also hold completely analogously for vector spaces over skew fields.
• Another generalization of vector spaces are vector bundles; they consist of one vector space for each point of a topological base space.