# Standard scalar product

The standard real scalar product can be viewed as the product of a row vector with a column vector.

The standard scalar product or canonical scalar product (sometimes also called the "Euclidean scalar product") is the scalar product normally used in mathematics on the finite-dimensional real and complex standard vector spaces $\mathbb{R}^n$ and $\mathbb{C}^n$. With the help of the standard scalar product, terms such as angle and orthogonality can be generalized from two- and three-dimensional Euclidean space to higher dimensions. Like every scalar product, the standard scalar product is a positive definite symmetric bilinear form (in the complex case, a Hermitian sesquilinear form) and is invariant under orthogonal or unitary transformations. The norm derived from the standard scalar product is the Euclidean norm, with the help of which terms such as length and distance can be defined in higher-dimensional vector spaces.

## Standard real scalar product

### Definition

The standard scalar product of two real vectors $x, y \in \mathbb{R}^n$ with $x = (x_1, x_2, \dotsc, x_n)^T$ and $y = (y_1, y_2, \dotsc, y_n)^T$ is defined as

$\langle x, y \rangle := x_1 y_1 + x_2 y_2 + \dotsb + x_n y_n = \sum_{i=1}^{n} x_i y_i = x^T y$,

where $x^T$ denotes the transpose of the vector $x$, and the result is a real number. The real standard scalar product is thus computed by multiplying the corresponding vector components and summing over all these products. Alternatively, instead of angle brackets, the standard scalar product is also written $x \cdot y$ or $x \circ y$.

### Example

The standard scalar product of the two real vectors $x = (1, 2, -3)^T$ and $y = (5, -4, 1)^T$ in three-dimensional space is

$\langle x, y \rangle = 1 \cdot 5 + 2 \cdot (-4) + (-3) \cdot 1 = 5 - 8 - 3 = -6$.
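The computation above can be sketched in a few lines of plain Python (a minimal illustration; the function name `dot` is just a placeholder):

```python
def dot(x, y):
    """Standard real scalar product: sum of componentwise products."""
    assert len(x) == len(y), "vectors must have the same dimension"
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1, 2, -3]
y = [5, -4, 1]
print(dot(x, y))  # 1*5 + 2*(-4) + (-3)*1 = -6
```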

### Scalar product axioms

The standard real scalar product naturally satisfies the axioms of a real scalar product. It is bilinear, i.e. linear in the first argument, since

$\langle \lambda x, y \rangle = (\lambda x_1) y_1 + \dotsb + (\lambda x_n) y_n = \lambda (x_1 y_1 + \dotsb + x_n y_n) = \lambda \langle x, y \rangle$   and
$\langle x + y, z \rangle = (x_1 + y_1) z_1 + \dotsb + (x_n + y_n) z_n = (x_1 z_1 + \dotsb + x_n z_n) + (y_1 z_1 + \dotsb + y_n z_n) = \langle x, z \rangle + \langle y, z \rangle$,

as well as in the second argument, since

$\langle x, \lambda y \rangle = x_1 (\lambda y_1) + \dotsb + x_n (\lambda y_n) = \lambda (x_1 y_1 + \dotsb + x_n y_n) = \lambda \langle x, y \rangle$   and
$\langle x, y + z \rangle = x_1 (y_1 + z_1) + \dotsb + x_n (y_n + z_n) = (x_1 y_1 + \dotsb + x_n y_n) + (x_1 z_1 + \dotsb + x_n z_n) = \langle x, y \rangle + \langle x, z \rangle$.

Furthermore, it is symmetric, since

$\langle x, y \rangle = x_1 y_1 + \dotsb + x_n y_n = y_1 x_1 + \dotsb + y_n x_n = \langle y, x \rangle$,

and positive definite, since

$\langle x, x \rangle = x_1^2 + \dotsb + x_n^2 \geq 0$   and
$\langle x, x \rangle = 0 \Leftrightarrow x_1^2 + \dotsb + x_n^2 = 0 \Leftrightarrow x_1^2 = \dotsb = x_n^2 = 0 \Leftrightarrow x = 0$.

## Complex standard scalar product

### Definition

The standard scalar product of two complex vectors $x, y \in \mathbb{C}^n$ can be defined in two ways, either by

$\langle x, y \rangle := \bar{x}_1 y_1 + \bar{x}_2 y_2 + \dotsb + \bar{x}_n y_n = \sum_{i=1}^{n} \bar{x}_i y_i = x^H y$

or by

$\langle x, y \rangle := x_1 \bar{y}_1 + x_2 \bar{y}_2 + \dotsb + x_n \bar{y}_n = \sum_{i=1}^{n} x_i \bar{y}_i = y^H x$.

Here the overline denotes complex conjugation and $x^H$ the adjoint (conjugate transpose) vector of $x$. The complex standard scalar product is computed by multiplying the corresponding vector components, one of the two components always being conjugated, and summing over all these products. In both variants the result is a complex number, and because of $x^H y = (y^H x)^H$ these two numbers differ only by complex conjugation.

### Example

The standard scalar product of the two vectors $x = (i, 2-i)^T$ and $y = (i-1, 2)^T$ in two-dimensional complex space is, in the first variant,

$\langle x, y \rangle = \bar{i} \cdot (i-1) + \overline{(2-i)} \cdot 2 = -i \cdot (i-1) + (2+i) \cdot 2 = (1+i) + (4+2i) = 5+3i$

and in the second variant

$\langle x, y \rangle = i \cdot \overline{(i-1)} + (2-i) \cdot \bar{2} = i \cdot (-i-1) + (2-i) \cdot 2 = (1-i) + (4-2i) = 5-3i$.

Both variants lead to the same result except for complex conjugation.
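Both variants can be reproduced with Python's built-in complex numbers (a minimal sketch; the function names `hdot_first` and `hdot_second` are placeholders, not standard library calls):

```python
def hdot_first(x, y):
    """First variant: conjugate the first argument, <x, y> = x^H y."""
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

def hdot_second(x, y):
    """Second variant: conjugate the second argument, <x, y> = y^H x."""
    return sum(xi * yi.conjugate() for xi, yi in zip(x, y))

x = [1j, 2 - 1j]
y = [1j - 1, 2]
print(hdot_first(x, y))   # (5+3j)
print(hdot_second(x, y))  # (5-3j), the complex conjugate of the first variant
```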

### Scalar product axioms

The following axioms of a complex scalar product are stated for the first variant; for the second variant they hold analogously with the conjugation interchanged. The standard complex scalar product is sesquilinear, i.e. semilinear in the first argument, since

$\langle \lambda x, y \rangle = (\lambda x)^H y = \bar{\lambda} (x^H y) = \bar{\lambda} \langle x, y \rangle$   and
$\langle x + y, z \rangle = (x + y)^H z = x^H z + y^H z = \langle x, z \rangle + \langle y, z \rangle$,

as well as linear in the second argument, since

$\langle x, \lambda y \rangle = x^H (\lambda y) = \lambda (x^H y) = \lambda \langle x, y \rangle$   and
$\langle x, y + z \rangle = x^H (y + z) = x^H y + x^H z = \langle x, y \rangle + \langle x, z \rangle$.

Furthermore, it is Hermitian, since

$\langle x, y \rangle = x^H y = (y^H x)^H = \overline{\langle y, x \rangle}$,

and positive definite, since

$\langle x, x \rangle = x^H x = |x_1|^2 + \dotsb + |x_n|^2 \geq 0$   and
$\langle x, x \rangle = 0 \Leftrightarrow x^H x = 0 \Leftrightarrow |x_1|^2 + \dotsb + |x_n|^2 = 0 \Leftrightarrow |x_1|^2 = \dotsb = |x_n|^2 = 0 \Leftrightarrow x_1 = \dotsb = x_n = 0 \Leftrightarrow x = 0$,

where $|\cdot|$ denotes the absolute value of a complex number. In the second variant, the standard scalar product is linear in the first and semilinear in the second argument. The real case is obtained from the complex case by omitting the conjugations and absolute values and by replacing the adjoint with the transpose.

## Properties

### Cauchy-Schwarz inequality

Like every scalar product, the standard scalar product satisfies the Cauchy-Schwarz inequality, that is, for all $x, y \in \mathbb{K}^n$ with $\mathbb{K} = \mathbb{R}$ or $\mathbb{K} = \mathbb{C}$,

$\left| \langle x, y \rangle \right|^2 \leq \langle x, x \rangle \cdot \langle y, y \rangle$.

In the real case, the absolute value bars on the left can be omitted. The Cauchy-Schwarz inequality is one of the central inequalities of linear algebra and analysis. For example, it implies that the standard scalar product $\langle \cdot, \cdot \rangle \colon \mathbb{K}^n \times \mathbb{K}^n \to \mathbb{K}$ is a continuous function.
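The inequality can be checked numerically for the real example vectors from above (a minimal sketch; `dot` is an illustrative helper, not a library function):

```python
def dot(x, y):
    """Standard real scalar product."""
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1.0, 2.0, -3.0]
y = [5.0, -4.0, 1.0]

lhs = dot(x, y) ** 2         # |<x,y>|^2 = (-6)^2 = 36
rhs = dot(x, x) * dot(y, y)  # <x,x> * <y,y> = 14 * 42 = 588
print(lhs <= rhs)  # True
```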

### Displacement property

The standard scalar product has the following displacement property: for all matrices $A \in \mathbb{R}^{m \times n}$ and all vectors $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$,

$\langle Ax, y \rangle = (Ax)^T y = x^T A^T y = x^T (A^T y) = \langle x, A^T y \rangle$,

where $A^T$ is the transpose of the matrix $A$. The same applies to the complex standard scalar product: for all matrices $A \in \mathbb{C}^{m \times n}$ and all vectors $x \in \mathbb{C}^n$, $y \in \mathbb{C}^m$,

$\langle Ax, y \rangle = (Ax)^H y = x^H A^H y = x^H (A^H y) = \langle x, A^H y \rangle$,

where $A^H$ is the adjoint matrix of $A$.
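The real displacement property $\langle Ax, y \rangle = \langle x, A^T y \rangle$ can be verified on a small example (a minimal sketch with hypothetical helper names; matrices are lists of rows):

```python
def dot(x, y):
    """Standard real scalar product."""
    return sum(xi * yi for xi, yi in zip(x, y))

def matvec(A, x):
    """Multiply an m-by-n matrix (list of rows) with a vector of length n."""
    return [dot(row, x) for row in A]

def transpose(A):
    """Swap rows and columns of a matrix."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 0],
     [3, -1, 4]]  # a 2x3 matrix
x = [1, 2, 3]     # in R^3
y = [5, -2]       # in R^2

lhs = dot(matvec(A, x), y)             # <Ax, y>
rhs = dot(x, matvec(transpose(A), y))  # <x, A^T y>
print(lhs == rhs)  # True
```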

### Unitary invariance

The real standard scalar product does not change under orthogonal transformations, that is, for an orthogonal matrix $A \in \mathbb{R}^{n \times n}$, the displacement property yields

$\langle Ax, Ay \rangle = \langle x, A^T A y \rangle = \langle x, A^{-1} A y \rangle = \langle x, I y \rangle = \langle x, y \rangle$,

where $A^{-1}$ is the inverse matrix and $I$ the identity matrix of size $n \times n$. Such transformations are typically rotations about the origin or reflections in a plane through the origin. Analogously, the complex standard scalar product is invariant under unitary transformations, that is, for a unitary matrix $A \in \mathbb{C}^{n \times n}$,

$\langle Ax, Ay \rangle = \langle x, A^H A y \rangle = \langle x, A^{-1} A y \rangle = \langle x, I y \rangle = \langle x, y \rangle$.
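For a plane rotation, the simplest orthogonal transformation, this invariance can be observed directly (a minimal sketch; `rotate` is an illustrative helper that applies the standard 2×2 rotation matrix):

```python
import math

def dot(x, y):
    """Standard real scalar product."""
    return sum(xi * yi for xi, yi in zip(x, y))

def rotate(v, phi):
    """Apply the 2x2 rotation matrix with angle phi to the vector v."""
    c, s = math.cos(phi), math.sin(phi)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

x = [1.0, 2.0]
y = [3.0, -1.0]
phi = 0.7  # an arbitrary rotation angle in radians

before = dot(x, y)                            # <x, y>
after = dot(rotate(x, phi), rotate(y, phi))   # <Ax, Ay>
print(abs(before - after) < 1e-12)  # True: the scalar product is unchanged
```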

## Derived terms

### Angle

Using the real standard scalar product, the angle $\varphi$ between two vectors $x, y \in \mathbb{R}^n \setminus \{0\}$ is defined by

$\cos(\varphi) = \frac{\langle x, y \rangle}{\sqrt{\langle x, x \rangle}\,\sqrt{\langle y, y \rangle}} = \frac{x_1 y_1 + \dotsb + x_n y_n}{\sqrt{x_1^2 + \dotsb + x_n^2}\,\sqrt{y_1^2 + \dotsb + y_n^2}}.$

Due to the Cauchy-Schwarz inequality, the denominator of this fraction is at least as large as the absolute value of the numerator, and thus the angle $\varphi$ lies in the interval $[0, \pi]$, i.e. between $0^\circ$ and $180^\circ$. If the two vectors $x$ and $y$ are unit vectors, their standard scalar product is just the cosine of the angle they enclose. There are a number of different definitions for angles between complex vectors.
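The angle formula translates directly into code (a minimal sketch; `angle` is a hypothetical helper built from the definition above):

```python
import math

def dot(x, y):
    """Standard real scalar product."""
    return sum(xi * yi for xi, yi in zip(x, y))

def angle(x, y):
    """Angle between two nonzero real vectors, in radians, via the
    standard scalar product and the induced Euclidean norms."""
    return math.acos(dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y))))

print(angle([1, 0], [0, 1]))  # pi/2: the coordinate axes are perpendicular
print(angle([1, 1], [1, 0]))  # pi/4, i.e. 45 degrees
```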

### Orthogonality

In both the real and the complex case, two vectors are called orthogonal (perpendicular) if their standard scalar product

$\langle x, y \rangle = 0$

vanishes. In the real case, this corresponds exactly to a right angle $\varphi = \arccos 0 = \tfrac{\pi}{2} = 90^\circ$ between the two vectors, provided neither equals the zero vector.

If one considers a line through the origin, a plane through the origin, or more generally a $k$-dimensional subspace $U$ of $n$-dimensional real or complex space, and if $\{u_1, \dotsc, u_k\}$ is an orthonormal basis of $U$, then

$y = \langle x, u_1 \rangle u_1 + \dotsb + \langle x, u_k \rangle u_k \in U$

is the orthogonal projection of a vector $x$ of the ambient space onto this subspace. The difference vector $x - y$ lies in the orthogonal complement of $U$, i.e. it is perpendicular to all vectors of the subspace: $\langle x - y, u \rangle = 0$ for all vectors $u \in U$.
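This projection formula can be sketched for the simple case of projecting onto a coordinate plane (an illustration under the assumption of an orthonormal basis; `project` is a hypothetical helper):

```python
def dot(x, y):
    """Standard real scalar product."""
    return sum(xi * yi for xi, yi in zip(x, y))

def project(x, basis):
    """Orthogonal projection of x onto span(basis).
    Assumes the basis vectors are orthonormal."""
    y = [0.0] * len(x)
    for u in basis:
        c = dot(x, u)  # coordinate <x, u> along the basis vector u
        y = [yi + c * ui for yi, ui in zip(y, u)]
    return y

# Project onto the xy-plane of R^3, spanned by the orthonormal basis e1, e2:
x = [3.0, -2.0, 5.0]
y = project(x, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(y)  # [3.0, -2.0, 0.0]

# The difference x - y is orthogonal to every basis vector of the subspace:
residual = [xi - yi for xi, yi in zip(x, y)]
print(dot(residual, [1.0, 0.0, 0.0]))  # 0.0
```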

### Norm

The (induced) norm of a real or complex vector derived from the standard scalar product,

$\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{|x_1|^2 + |x_2|^2 + \dotsb + |x_n|^2},$

is called the Euclidean norm. This definition is well defined because the scalar product of a vector with itself is real and nonnegative. In the real case, the absolute value bars can be omitted. The length of a vector can be determined with the Euclidean norm.

### Metric

From the Euclidean norm, in turn, the Euclidean distance between two vectors is derived:

$d(x, y) = \|x - y\| = \sqrt{\langle x - y, x - y \rangle} = \sqrt{|x_1 - y_1|^2 + \dotsb + |x_n - y_n|^2}.$

In the real case, the absolute value bars can again be omitted. With this concept of distance one obtains a metric, and from this metric a topology, the standard topology on $\mathbb{R}^n$ or $\mathbb{C}^n$.
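Both derived quantities follow mechanically from the scalar product (a minimal sketch; `norm` and `distance` are illustrative helpers, not library calls):

```python
import math

def dot(x, y):
    """Standard real scalar product."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    """Euclidean norm induced by the standard scalar product."""
    return math.sqrt(dot(x, x))

def distance(x, y):
    """Euclidean distance: norm of the difference vector."""
    return norm([xi - yi for xi, yi in zip(x, y)])

print(norm([3.0, 4.0]))                  # 5.0
print(distance([1.0, 1.0], [4.0, 5.0]))  # 5.0
```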

## Generalizations

### Finite-dimensional vector spaces

The previous considerations can be transferred from the standard spaces $\mathbb{R}^n$ or $\mathbb{C}^n$ to general real or complex vector spaces $V$ of finite dimension $n$. If $\{e_1, \dotsc, e_n\}$ is an orthonormal basis of $V$ with respect to an (arbitrary) scalar product $\langle \cdot, \cdot \rangle$, then every vector $v \in V$ has the component representation

$v = \sum_{i=1}^{n} v_i e_i$   with   $v_i = \langle v, e_i \rangle$   for $i = 1, \dotsc, n$,

where $v_i e_i$ are the components of the vector with respect to this basis and the factors $v_i \in \mathbb{K}$ are the coordinates of the vector. The coordinates are the lengths of the orthogonal projections of the vector onto the respective basis vectors. The scalar product of two vectors $v, w \in V$ can then be calculated via the standard scalar product of their coordinate vectors as

$\langle v, w \rangle = \left\langle \sum_{i=1}^{n} v_i e_i, \sum_{j=1}^{n} w_j e_j \right\rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} \bar{v}_i w_j \langle e_i, e_j \rangle = \sum_{i=1}^{n} \bar{v}_i w_i,$

where corresponding representations also hold in the other complex variant and in the real case. If one interprets real or complex matrices as correspondingly long (column) vectors, then the standard scalar product of such vectors corresponds precisely to the Frobenius scalar product of the associated matrices.
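The correspondence with the Frobenius scalar product is easy to see in code: flattening the matrices row by row and applying the standard scalar product gives the same sum (a minimal sketch for the real case; `frobenius` is an illustrative helper):

```python
def frobenius(A, B):
    """Frobenius scalar product of two real matrices of equal shape:
    sum of the entrywise products, i.e. the standard scalar product
    of the matrices read as long vectors."""
    return sum(a * b
               for row_a, row_b in zip(A, B)
               for a, b in zip(row_a, row_b))

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(frobenius(A, B))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```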

### Sequence spaces

The standard scalar product can also be generalized to sequences and thus to infinite-dimensional vector spaces. However, the underlying sequence space must be restricted so that the scalar product remains finite. To this end, one considers the space $\ell^2$ of real- or complex-valued sequences $(a_i)_{i \in \mathbb{N}} = (a_1, a_2, \dotsc) \in \mathbb{K}^{\mathbb{N}}$ for which

$\sum_{i=1}^{\infty} |a_i|^2 < \infty$

holds. The $\ell^2$-scalar product of two such square-summable sequences $(a_i)_{i \in \mathbb{N}}, (b_i)_{i \in \mathbb{N}} \in \ell^2$ is then defined by

$\left\langle (a_i), (b_i) \right\rangle_{\ell^2} = \sum_{i=1}^{\infty} \bar{a}_i b_i$.

More generally, one can choose an arbitrary index set $I$ instead of the natural numbers and then consider the space $\ell^2(I)$ of the sequences square-summable over $I$, with the scalar product

$\left\langle (a_i), (b_i) \right\rangle_{\ell^2(I)} = \sum_{i \in I} \bar{a}_i b_i$.

In both cases the real case is obtained by omitting the conjugation, and the other complex variant by shifting the conjugation to the second component.