# Dot product norm

A scalar product norm, inner product norm, or Hilbert norm is, in mathematics, a norm induced (derived) by a scalar product. In a finite-dimensional real or complex vector space with the standard scalar product, the scalar product norm is precisely the Euclidean norm. More generally, every pre-Hilbert space has an associated scalar product norm and is a normed space with this norm. A norm is induced by a scalar product if and only if it satisfies the parallelogram equation. Every scalar product norm further satisfies the Cauchy-Schwarz inequality and is invariant under unitary transformations.

## Definition

*Figure: relationships between scalar product, norm, and metric*

If $V$ is a vector space over the field $\mathbb{K}$ of real or complex numbers and $\langle \cdot, \cdot \rangle \colon V \times V \to \mathbb{K}$ is a scalar product on $V$, then $(V, \langle \cdot, \cdot \rangle)$ is a scalar product space. The norm induced by this scalar product is then defined for a vector $v \in V$ as

$\|v\| := \sqrt{\langle v, v \rangle}$,

that is, the square root of the scalar product of the vector with itself. This is well defined, since the scalar product of a vector with itself is real and nonnegative.

This norm is also called the scalar product norm, inner product norm, or Hilbert norm, and in real scalar product spaces it is sometimes referred to as the (general) Euclidean norm. With the scalar product norm, the vector space $V$ becomes a normed space $(V, \|\cdot\|)$. Furthermore, with the metric $d$ induced by the norm, $V$ becomes a metric space $(V, d)$, and with the topology $\mathcal{T}$ induced by the metric, a topological space $(V, \mathcal{T})$.

## Examples

Important examples of scalar product norms are:

- the Euclidean norm on the coordinate space $\mathbb{K}^n$, induced by the standard scalar product,
- the Frobenius norm on the matrix space $\mathbb{K}^{m \times n}$, induced by the Frobenius scalar product,
- the $\ell^2$ norm on the sequence space $\ell^2$ of square-summable sequences,
- the $L^2$ norm on the function space $L^2(\Omega)$ of square-integrable functions.
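The first of these examples can be made concrete in a few lines. A minimal sketch in Python, using only the standard library (the helper names `dot` and `norm` are ours for illustration):

```python
import math

def dot(v, w):
    """Standard scalar product on R^n."""
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    """Scalar product norm: the square root of <v, v>."""
    return math.sqrt(dot(v, v))

v = [3.0, 4.0]
print(norm(v))  # 5.0, the Euclidean length of (3, 4)
```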

## Properties

### Norm axioms

*Figure: vectors in the triangle inequality*

Every scalar product norm satisfies the three norm axioms: definiteness, absolute homogeneity, and subadditivity. Definiteness follows for $v \in V$ from the fact that the square root function has a unique zero, since

$\|v\| = 0 \;\Leftrightarrow\; \sqrt{\langle v, v \rangle} = 0 \;\Leftrightarrow\; \langle v, v \rangle = 0 \;\Leftrightarrow\; v = 0$,

absolute homogeneity follows for $v \in V$ and $\alpha \in \mathbb{K}$ from

$\|\alpha v\|^2 = \langle \alpha v, \alpha v \rangle = \bar{\alpha} \alpha \langle v, v \rangle = |\alpha|^2 \|v\|^2$

and subadditivity (the triangle inequality) follows for $v, w \in V$ from the Cauchy-Schwarz inequality (see the following section) via

$${\begin{aligned}\|v+w\|^{2}&=\langle v+w,v+w\rangle =\langle v,v\rangle +\langle v,w\rangle +\langle w,v\rangle +\langle w,w\rangle =\|v\|^{2}+\langle v,w\rangle +\overline{\langle v,w\rangle }+\|w\|^{2}\\&=\|v\|^{2}+2\operatorname{Re}\langle v,w\rangle +\|w\|^{2}\leq \|v\|^{2}+2\,\|v\|\,\|w\|+\|w\|^{2}=\left(\|v\|+\|w\|\right)^{2},\end{aligned}}$$

where $\operatorname{Re}$ denotes the real part of a complex number, and the triangle inequality follows by taking the (positive) square root on both sides.
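The norm axioms above can be checked numerically. A small sketch in Python for the complex standard scalar product (here taken conjugate-linear in the first argument, which is an assumption of convention; the other convention behaves the same):

```python
import math

def inner(v, w):
    """Standard complex scalar product, conjugate-linear in the first argument."""
    return sum(a.conjugate() * b for a, b in zip(v, w))

def norm(v):
    # <v, v> is real and nonnegative, so the square root is well defined
    return math.sqrt(inner(v, v).real)

v = [1 + 2j, 3 - 1j, 0.5j]
w = [2 - 1j, -1j, 1 + 1j]
alpha = 0.5 - 2j

# absolute homogeneity: ||alpha v|| = |alpha| ||v||
assert abs(norm([alpha * a for a in v]) - abs(alpha) * norm(v)) < 1e-12

# subadditivity (triangle inequality): ||v + w|| <= ||v|| + ||w||
assert norm([a + b for a, b in zip(v, w)]) <= norm(v) + norm(w) + 1e-12
```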

### Parallelogram equation

*Figure: vectors in the parallelogram equation*

Every scalar product norm also satisfies the parallelogram equation

$\|v+w\|^2 + \|v-w\|^2 = 2 \left( \|v\|^2 + \|w\|^2 \right)$

for all vectors $v, w \in V$. Conversely, by the Jordan-von Neumann theorem, if a norm $\|\cdot\|$ satisfies the parallelogram equation, then it is induced by a scalar product. The scalar product is recovered from the norm by a polarization formula, in real vector spaces for example by

$\langle v, w \rangle = \frac{1}{4} \left( \|v+w\|^2 - \|v-w\|^2 \right)$.
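A numeric sketch of both directions, assuming the standard scalar product on $\mathbb{R}^n$: the Euclidean norm satisfies the parallelogram equation and the polarization formula recovers the scalar product, while the maximum norm (shown for contrast) violates the equation and is therefore not induced by any scalar product.

```python
import math

def norm2(v):
    """Euclidean norm on R^n, induced by the standard scalar product."""
    return math.sqrt(sum(x * x for x in v))

def norm_max(v):
    """Maximum norm, for contrast: not induced by any scalar product."""
    return max(abs(x) for x in v)

def parallelogram_defect(norm, v, w):
    """||v+w||^2 + ||v-w||^2 - 2(||v||^2 + ||w||^2); zero for scalar product norms."""
    add = [a + b for a, b in zip(v, w)]
    sub = [a - b for a, b in zip(v, w)]
    return norm(add) ** 2 + norm(sub) ** 2 - 2 * (norm(v) ** 2 + norm(w) ** 2)

v, w = [1.0, 0.0], [0.0, 1.0]
assert abs(parallelogram_defect(norm2, v, w)) < 1e-12   # Euclidean norm: defect ≈ 0
assert parallelogram_defect(norm_max, v, w) == -2.0     # max norm fails the equation

# polarization recovers the scalar product from the Euclidean norm; <v, w> = 0 here
add = [a + b for a, b in zip(v, w)]
sub = [a - b for a, b in zip(v, w)]
assert abs((norm2(add) ** 2 - norm2(sub) ** 2) / 4) < 1e-12
```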

### Unitary invariance

A scalar product norm is furthermore invariant under unitary transformations. If $U \colon V \to W$ is a unitary operator (in the finite-dimensional case, a unitary or orthogonal matrix) from $V$ into a further scalar product space $W$ with associated norm, then

$\|Uv\| = \|v\|$,

which follows immediately from

$\|Uv\|^2 = \langle Uv, Uv \rangle = \langle U^{\ast} U v, v \rangle = \langle v, v \rangle = \|v\|^2$,

where $U^{\ast}$ denotes the adjoint operator of $U$ (in the finite-dimensional case, the adjoint or transposed matrix). A scalar product norm thus does not change its value under unitary transformations of a vector. In the real, finite-dimensional case, such transformations are, for example, rotations of the vector about the origin.
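The rotation example can be sketched directly, assuming the Euclidean norm on $\mathbb{R}^2$ and a plane rotation matrix (an orthogonal transformation):

```python
import math

def norm(v):
    """Euclidean norm on R^2."""
    return math.sqrt(sum(x * x for x in v))

def rotate(v, phi):
    """Multiply v by the 2x2 rotation matrix for angle phi (orthogonal)."""
    x, y = v
    return [math.cos(phi) * x - math.sin(phi) * y,
            math.sin(phi) * x + math.cos(phi) * y]

v = [3.0, 4.0]
assert norm(v) == 5.0
# rotating about the origin leaves the norm unchanged
assert abs(norm(rotate(v, math.pi / 3)) - 5.0) < 1e-12
```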

### Cauchy-Schwarz inequality

Every scalar product norm satisfies the Cauchy-Schwarz inequality: for all vectors $v, w \in V$,

$\left| \langle v, w \rangle \right| \leq \|v\| \, \|w\|$,

where equality holds if and only if $v$ and $w$ are linearly dependent. In the real case, the absolute value bars can also be omitted. For nonzero vectors, the Cauchy-Schwarz inequality immediately yields

$-1 \leq \frac{\langle v, w \rangle}{\|v\| \, \|w\|} \leq 1$,

so the angle $\varphi$ between two nonzero real vectors can be defined by

$\cos(\varphi) = \frac{\langle v, w \rangle}{\|v\| \, \|w\|}$.

The angle $\varphi$ thus lies in the interval $[0, \pi]$, that is, between $0^{\circ}$ and $180^{\circ}$. For angles between complex vectors, a number of different definitions exist.
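This angle definition translates directly into code. A sketch for the standard scalar product on $\mathbb{R}^n$; the clamp guards against floating-point rounding pushing the cosine slightly outside $[-1, 1]$:

```python
import math

def dot(v, w):
    """Standard scalar product on R^n."""
    return sum(a * b for a, b in zip(v, w))

def angle(v, w):
    """Angle between two nonzero real vectors, in radians."""
    c = dot(v, w) / (math.sqrt(dot(v, v)) * math.sqrt(dot(w, w)))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding

print(math.degrees(angle([1.0, 0.0], [1.0, 1.0])))   # ≈ 45 degrees
print(math.degrees(angle([1.0, 0.0], [-1.0, 0.0])))  # ≈ 180 degrees
```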

### Pythagorean theorem

In general, two vectors $v, w \in V$ are called orthogonal if their scalar product $\langle v, w \rangle = 0$ vanishes. For orthogonal vectors, the Pythagorean theorem for scalar product spaces states that

$\|v+w\|^2 = \|v\|^2 + \|w\|^2$,

which follows directly from the first line of the above derivation of the triangle inequality. The Pythagorean theorem can also be extended to a finite number of pairwise orthogonal vectors $v_1, \ldots, v_n \in V$, in which case

$\|v_1 + \dotsb + v_n\|^2 = \|v_1\|^2 + \dotsb + \|v_n\|^2$.

The corresponding extension to an infinite number of summands in a Hilbert space is Parseval's equation .
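The finite-sum version of the theorem can be verified for, say, three pairwise orthogonal vectors in $\mathbb{R}^3$ (scaled standard basis vectors, chosen here for illustration):

```python
import math

def norm(v):
    """Euclidean norm on R^3."""
    return math.sqrt(sum(x * x for x in v))

# pairwise orthogonal vectors: scaled standard basis vectors of R^3
vs = [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 6.0]]

total = [sum(col) for col in zip(*vs)]   # v1 + v2 + v3
lhs = norm(total) ** 2                   # ||v1 + v2 + v3||^2
rhs = sum(norm(v) ** 2 for v in vs)      # ||v1||^2 + ||v2||^2 + ||v3||^2
print(lhs, rhs)  # 49.0 49.0
```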

## Generalization

If one drops the requirement that the scalar product be positive definite, one obtains the following generalization: every positive semidefinite Hermitian sesquilinear form $(\cdot, \cdot) \colon V \times V \to \mathbb{K}$ (in the real case, a positive semidefinite symmetric bilinear form) induces for $v \in V$ by

$p(v) = \sqrt{(v, v)}$

a seminorm. With this seminorm, $(V, p)$ is a seminormed space, which in general is not a metric space. By passing to residue classes with respect to the vectors of seminorm zero, however, a norm can be derived from the seminorm, and one thereby obtains a normed space, and with it also a metric space and a topological space.

### Example

The covariance is a positive semidefinite symmetric bilinear form on the space of random variables with finite second moments, and it becomes a scalar product on the quotient space of random variables that differ only by a constant. The norm induced by this scalar product is then simply the standard deviation of a random variable.
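A numeric sketch of this example, using the sample covariance with $1/n$ normalization on data vectors as a stand-in for random variables (this discrete setting is our simplification): the induced seminorm is the standard deviation, shifting by a constant leaves it unchanged, and a nonzero constant vector has seminorm zero, so it is not a norm.

```python
import math

def cov(x, y):
    """Sample covariance with 1/n normalization, a symmetric bilinear form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

def seminorm(x):
    """Induced seminorm: the standard deviation of x."""
    return math.sqrt(cov(x, x))

x = [1.0, 2.0, 3.0, 4.0]
assert abs(seminorm(x) - math.sqrt(1.25)) < 1e-12

# shifting by a constant does not change the seminorm ...
shifted = [v + 10.0 for v in x]
assert abs(seminorm(shifted) - seminorm(x)) < 1e-12

# ... and a nonzero constant vector has seminorm 0, so this is only a seminorm
assert seminorm([7.0, 7.0, 7.0, 7.0]) == 0.0
```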
