# Hilbert space basis

In functional analysis, a Hilbert space basis is a basis of a Hilbert space. A Hilbert space is an (often infinite-dimensional) vector space that is equipped with a scalar product and is complete with respect to the induced norm.

The natural basis concept for a Hilbert space is the generalization of the orthonormal basis of Euclidean geometry: the complete orthonormal system, or Hilbert basis. Sometimes, e.g. in wavelet theory, one works with generating systems of a Hilbert space for which orthogonality is difficult or impossible to prove.

This article deals primarily with those Hilbert space bases that are not orthonormal systems, i.e. not Hilbert bases.

In the finite-dimensional case, the alternative to an orthonormal basis is a general, non-orthogonal basis. For every basis of a finite-dimensional space, the two characteristic properties coincide: a basis is a maximal linearly independent system and at the same time a minimal generating system.

In the infinite-dimensional case, a "stable" relaxation of the concept of the Hilbert basis is not as straightforward. Apart from special cases, one demands of a basis that every vector of the Hilbert space have uniquely determined coordinates which depend continuously on the vector, and conversely that every admissible coordinate vector determine a vector of the Hilbert space that depends continuously on these coordinates. In other words, there should be a bijective linear mapping between the Hilbert space and a coordinate space that is continuous in both directions.

## Motivation in the Euclidean case

In an $n$-dimensional $K$-vector space $V$, a basis $(b_1, b_2, \dots, b_n)$ is characterized in particular by the fact that it gives rise to a bijective mapping between the vector space $V$ and the model vector space $K^n$:

$$E \colon K^n \to V, \qquad x = (x^1, \dots, x^n)^t \mapsto E\,x = b_1 x^1 + b_2 x^2 + \dots + b_n x^n.$$

This mapping in turn encodes the basis, because the images of the canonical basis vectors $(e_1, e_2, \dots, e_n)$ of the column vector space $K^n$ are precisely the chosen basis vectors of $V$. The inverse mapping assigns to each vector of $V$ its coordinate vector with respect to this basis.

In this sense one can identify bijective linear mappings from $K^n$ to $V$ with bases of $V$. If a norm is defined on $V$, it follows from the bijectivity that the coordinates of unit vectors of $V$ can be neither very small nor very large.
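The correspondence between a basis and the bijective map $E$ can be made concrete in coordinates. A minimal numerical sketch in Python (the non-orthogonal basis $b_1, b_2$ of $\mathbb{R}^2$ is a hypothetical example, not taken from the text):

```python
# A basis of R^2 corresponds to an invertible matrix E whose columns are
# the basis vectors; the inverse mapping recovers the coordinates.
b1 = (1.0, 0.0)
b2 = (1.0, 1.0)   # deliberately not orthogonal to b1

def synthesize(x1, x2):
    """E * x: the linear combination x1*b1 + x2*b2."""
    return (x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1])

def coordinates(v):
    """E^{-1} * v: solve the 2x2 linear system by Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]   # nonzero iff b1, b2 form a basis
    return ((v[0] * b2[1] - b2[0] * v[1]) / det,
            (b1[0] * v[1] - v[0] * b1[1]) / det)

v = synthesize(2.0, 3.0)   # v = 2*b1 + 3*b2 = (5, 3)
print(v, coordinates(v))   # (5.0, 3.0) (2.0, 3.0)
```

The two maps are mutually inverse, which is exactly the bijectivity used above to identify bases with invertible mappings.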

## Systems of vectors and their properties

Let $\mathcal{H}$ be a Hilbert space over the field $\mathbb{K} = \mathbb{R}$ or $\mathbb{K} = \mathbb{C}$. Let further $X \subset \mathcal{H}$ be a (finite, countable, or even uncountable) subset of the Hilbert space. To distinguish this subset linguistically from linear subspaces, $X$ is called a system of vectors.

### Coefficient space

For any finite number of vectors from $X$ one can form linear combinations without restriction. The coefficients of such a linear combination can be collected in a function $c \colon X \to \mathbb{K}$ that differs from zero only at finitely many places. The linear combination then has the form

$$\sum_{x \in X} c(x)\, x, \quad \text{where} \quad \#\{x :\; c(x) \neq 0\} < \infty.$$

On the space $\ell^{\mathrm{fin}}(X)$ of these functions with finite support, a scalar product can be defined as

$$\langle c,\, d\rangle = \sum_{x \in X} c(x)\,\overline{d(x)}.$$

Since only finitely many terms differ from zero, the sum is well defined.

Every scalar product also defines a norm and thus a metric. Let $\ell^2(X)$ be the completion of the space $\ell^{\mathrm{fin}}(X)$ with respect to this topology; $\ell^2(X)$ will serve as the coefficient space and later as the coordinate space. If $X$ is finite, this coefficient space is isomorphic to a Euclidean space; for countably infinite $X$, the coefficient space is isometrically isomorphic to the sequence space $\ell^2(\mathbb{N}, \mathbb{K})$.

For the sake of simplicity, elements of $\ell^2(X)$ are called coefficient vectors; the component of $c$ at the "index" $x$ is the value $c(x)$. A coefficient vector $c$ is called finite if the support of $c$ is finite.
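The coefficient space can be sketched in code: a finitely supported function $c \colon X \to \mathbb{K}$ is naturally a dictionary whose missing keys stand for the value $0$. A minimal Python illustration (the index set and values are hypothetical):

```python
# Elements of l^fin(X): dictionaries mapping an index x to c(x) != 0.
# The scalar product <c, d> = sum_x c(x) * conj(d(x)) only receives
# contributions from the common support, so the sum is always finite.

def inner(c, d):
    """Scalar product on the space of finitely supported functions."""
    return sum(cx * d[x].conjugate() for x, cx in c.items() if x in d)

c = {"a": 1 + 2j, "b": 3.0}
d = {"a": 2.0, "c": 5.0}
print(inner(c, d))   # only the index "a" contributes: (2+4j)
```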

### Linear combinations

The simplest requirement is that every coefficient vector $c \in \ell^2(X)$ should also yield a linear combination of the system $X$. In general, however, the “sum”

$$\sum_{x \in X} c(x)\, x$$

is not defined. But for each $\varepsilon > 0$ there are coefficient vectors $\tilde{c} \in \ell^2(X)$ with finite support and distance $\|c - \tilde{c}\|_{\ell^2(X)} < \varepsilon$, for which this linear combination is defined. The question is when these finite linear combinations have a common limit as $\varepsilon \to 0$.

#### Definition (Bessel system)

$X$ is called a Bessel system if the mapping $\mathcal{E} \colon \ell^2(X) \to \mathcal{H}$ with $\mathcal{E}(c) = \sum_{x \in X} c(x)\, x$ is continuous, i.e. if there is a constant $B$ with

$$\Bigl\|\sum_{x \in X} c(x)\, x\Bigr\|_{\mathcal{H}} \leq \sqrt{B}\, \|c\|_{\ell^2(X)}.$$

Note: this inequality only needs to be verified for finite coefficient vectors, i.e. functions with finite support, in order to hold for all coefficient vectors.

Under these circumstances, for a sequence of finite approximations $c_n \to c$ of a coefficient vector $c \in \ell^2(X)$, the image vectors $\bigl(\mathcal{E}(c_n)\bigr)_{n \in \mathbb{N}}$ form a Cauchy sequence in the Hilbert space $\mathcal{H}$. This sequence therefore has a limit, which is independent of the chosen approximating sequence.

Since $\mathcal{E}$ is a continuous linear operator between two Hilbert spaces, there is an adjoint operator $\mathcal{E}^* \colon \mathcal{H} \to \ell^2(X)$. By the definition of the adjoint operator, it is given by $\mathcal{E}^*(v) \colon X \to \mathbb{K},\; x \mapsto \langle x, v\rangle$. If $X$ is a Bessel system, the adjoint operator satisfies a Bessel inequality: with the constant $B > 0$, for every vector $v \in \mathcal{H}$,

$$\|\mathcal{E}^*(v)\|^2 = \sum_{x \in X} |\langle x, v\rangle|^2 \leq B\, \|v\|^2.$$
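In finite dimensions the optimal Bessel constant can be computed explicitly: the smallest admissible $B$ is the largest eigenvalue of the Gram matrix of the system. A Python sketch for a hypothetical two-vector system in $\mathbb{R}^2$ (the vectors are illustrative, not from the text):

```python
import math

# The analysis operator E*(v) collects the scalar products <x, v>.
# The optimal Bessel constant B is the largest eigenvalue of the
# Gram matrix G[i][j] = <x_i, x_j> (2x2 closed form below).
x1 = (1.0, 0.0)
x2 = (1.0, 1.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def analysis(v):
    """E*(v): the coefficient vector x -> <x, v>."""
    return (dot(x1, v), dot(x2, v))

g11, g12, g22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
tr, det = g11 + g22, g11 * g22 - g12 * g12
B = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # here (3 + sqrt(5)) / 2

for v in [(1.0, 0.0), (0.3, -2.0), (1.0, 1.0)]:
    bessel_sum = sum(t * t for t in analysis(v))   # sum_x |<x, v>|^2
    assert bessel_sum <= B * dot(v, v) + 1e-12     # Bessel inequality
print(round(B, 6))   # 2.618034
```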

### Linear independence

In many cases it is not sufficient to demand that no nontrivial linear combination of $X$ equal the zero vector. Even with this property, there can be arbitrarily small linear combinations whose coefficient vectors have length 1. It is therefore more stringent to require that $X$ be a Bessel system and that there be a lower bound $A > 0$ such that

$$\|\mathcal{E}(c)\|_{\mathcal{H}} \geq \sqrt{A}\, \|c\|_{\ell^2(X)}$$

holds for all coefficient vectors $c \in \ell^2(X)$.

#### Definition (Riesz system)

A system $X$ of vectors of a Hilbert space is called a Riesz system if there are finite constants $0 < A \leq B < \infty$ such that for finite coefficient vectors, and hence for all coefficient vectors $c \in \ell^2(X)$, the inequalities

$$\sqrt{A}\, \|c\|_{\ell^2(X)} \leq \Bigl\|\sum_{x \in X} c(x)\, x\Bigr\|_{\mathcal{H}} \leq \sqrt{B}\, \|c\|_{\ell^2(X)}$$

are fulfilled.
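For a finite system, $\|\mathcal{E}(c)\|^2 = \langle Gc, c\rangle$ with the Gram matrix $G$, so the optimal Riesz bounds $A$ and $B$ are the extreme eigenvalues of $G$. A Python sketch with a hypothetical two-vector system (illustrative, not from the text):

```python
import math

# Riesz bounds of a finite system: extreme eigenvalues of the Gram matrix.
x1 = (1.0, 0.0)
x2 = (1.0, 1.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def synthesize(c):
    """E(c) = c[0]*x1 + c[1]*x2."""
    return (c[0] * x1[0] + c[1] * x2[0], c[0] * x1[1] + c[1] * x2[1])

g11, g12, g22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
tr, det = g11 + g22, g11 * g22 - g12 * g12
disc = math.sqrt(tr * tr - 4 * det)
A, B = (tr - disc) / 2, (tr + disc) / 2   # extreme eigenvalues of G

for c in [(1.0, 0.0), (1.0, -1.0), (0.5, 2.0)]:
    n2 = dot(synthesize(c), synthesize(c))   # ||E(c)||^2
    # Riesz inequalities: A ||c||^2 <= ||E(c)||^2 <= B ||c||^2
    assert A * dot(c, c) - 1e-12 <= n2 <= B * dot(c, c) + 1e-12
print(round(A, 6), round(B, 6))   # 0.381966 2.618034
```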

### Generating system

A generating system $X$ in the Hilbert space $\mathcal{H}$ can be characterized by the property that the orthogonal complement of $X$ consists only of the zero vector. If $X$ is also a Bessel system, the scalar products $\langle v,\, x\rangle$ form the components of the coefficient vector $\mathcal{E}^*(v)$. That is, every vector $v \in \mathcal{H}$ with $\mathcal{E}^*(v) = 0$ must be the zero vector.

Again, this characterization is not sufficient in many cases, since $\mathcal{E}^*(v)$ could take arbitrarily small values on the unit sphere. To prevent this, one demands the existence of a lower bound $A > 0$ for these values on the unit sphere, i.e. that every $v \in \mathcal{H}$ with $\|v\|_{\mathcal{H}} = 1$ satisfy the inequality

$$\|\mathcal{E}^*(v)\|_{\ell^2(X)}^2 = \sum_{x \in X} |\langle v,\, x\rangle|^2 \geq A.$$

#### Definition (frame)

A system $X$ of vectors in a Hilbert space is called a frame if there are finite constants $0 < A \leq B < \infty$, the frame constants, such that for each vector $v \in \mathcal{H}$ the inequalities

$$\sqrt{A}\, \|v\|_{\mathcal{H}} \leq \|\mathcal{E}^*(v)\|_{\ell^2(X)} \leq \sqrt{B}\, \|v\|_{\mathcal{H}}$$

are fulfilled. If moreover $A = B$, then $X$ is called a tight frame.

In particular, this property implies the existence of a continuous pseudo-inverse operator (see below).
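A classical concrete example (not from the text) is the "Mercedes-Benz" frame: three unit vectors in $\mathbb{R}^2$ at angles of $120°$. It is a tight frame with $A = B = \tfrac{3}{2}$, which the following Python sketch verifies numerically:

```python
import math

# The Mercedes-Benz frame: three unit vectors at 120 degree spacing.
# Tight frame identity: sum_x |<v, x>|^2 = (3/2) * ||v||^2 for all v.
X = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
     for k in range(3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for v in [(1.0, 0.0), (0.2, -1.3), (3.0, 4.0)]:
    frame_sum = sum(dot(v, x) ** 2 for x in X)
    assert abs(frame_sum - 1.5 * dot(v, v)) < 1e-12   # A = B = 3/2
print("tight frame with A = B = 1.5")
```

With three vectors spanning a two-dimensional space, this system is generating but not linearly independent, so it is a frame that is not a Riesz basis.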

#### Definition (Riesz basis)

A system X of vectors in a Hilbert space is called a Riesz basis if it is a Riesz system and a frame at the same time.

## Consequences

### For Riesz systems

#### Pseudoinverse and best approximation

A Riesz system $X$ spans a closed subspace of the Hilbert space $\mathcal{H}$. For every vector $v \in \mathcal{H}$ there is a best approximation in this subspace, i.e. a coefficient vector $c \in \ell^2(X)$ for which the distance $\|v - \mathcal{E}(c)\|_{\mathcal{H}}$ is minimal. This coefficient vector is given by

$$c = \mathcal{E}^-(v) = \bigl((\mathcal{E}^* \circ \mathcal{E})^{-1} \circ \mathcal{E}^*\bigr)(v).$$

The inverse operator in this expression exists because the composition $\mathcal{E}^* \circ \mathcal{E}$ is bounded, self-adjoint, and positive definite. The inverse operator can be constructed as a Neumann series, since

$$\mathcal{E}^* \circ \mathcal{E} = C\,(I - T), \qquad \text{hence} \qquad (\mathcal{E}^* \circ \mathcal{E})^{-1} = \frac{1}{C}\Bigl(I + T + \sum_{k=2}^{\infty} T^k\Bigr),$$

because the operator

$$T = I - \tfrac{1}{C}\,(\mathcal{E}^* \circ \mathcal{E}), \qquad \text{with} \qquad C := \frac{A + B}{2},$$

has operator norm less than 1.
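The Neumann series construction can be tried out numerically. The following Python sketch inverts the $2 \times 2$ Gram matrix $S = \mathcal{E}^* \circ \mathcal{E}$ of a hypothetical finite Riesz system via $S^{-1} = \tfrac{1}{C}\sum_k T^k$ (the matrix is illustrative, not from the text):

```python
import math

# Neumann series inversion of S = E* o E with C = (A + B) / 2 and
# T = I - S/C, whose spectral radius is (B - A) / (A + B) < 1.
S = [[1.0, 1.0], [1.0, 2.0]]   # Gram matrix of x1 = (1,0), x2 = (1,1)
tr, det = S[0][0] + S[1][1], S[0][0] * S[1][1] - S[0][1] * S[1][0]
disc = math.sqrt(tr * tr - 4 * det)
A, B = (tr - disc) / 2, (tr + disc) / 2   # Riesz bounds = eigenvalues of S
C = (A + B) / 2

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
T = [[I[i][j] - S[i][j] / C for j in range(2)] for i in range(2)]

# Accumulate (1/C) * (I + T + T^2 + ...), truncated after 200 terms.
inv = [[I[i][j] / C for j in range(2)] for i in range(2)]
Tk = [row[:] for row in T]
for _ in range(200):
    inv = [[inv[i][j] + Tk[i][j] / C for j in range(2)] for i in range(2)]
    Tk = mat_mul(Tk, T)

check = mat_mul(S, inv)   # should be (numerically) the identity
assert all(abs(check[i][j] - I[i][j]) < 1e-10
           for i in range(2) for j in range(2))
print([[round(x, 6) for x in row] for row in inv])   # close to [[2, -1], [-1, 1]]
```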

The operator $\mathcal{E}^- \colon \mathcal{H} \to \ell^2(X)$ is the pseudo-inverse operator of $\mathcal{E}$; the two identities hold:

• $\mathcal{E}^- \circ \mathcal{E} = \mathrm{id}_{\ell^2(X)}$ is the identity on the space of coefficient vectors, and
• $\mathcal{E} \circ \mathcal{E}^- = \mathrm{pr}_{\mathrm{im}(\mathcal{E})}$ is the orthogonal projector onto the image $\mathrm{im}(\mathcal{E}) \subset \mathcal{H}$.

### For frames

#### Pseudoinverse

As a consequence of the frame inequality, the operator $\mathcal{E} \colon \ell^2(X) \to \mathcal{H}$ is surjective: the orthogonal complement of the image of $\mathcal{E}$ is precisely the kernel of $\mathcal{E}^* \colon \mathcal{H} \to \ell^2(X)$, and by the left frame inequality every vector in this kernel has length zero.

Analogously to the considerations for Riesz systems, the operator $\mathcal{E} \circ \mathcal{E}^*$ is self-adjoint, bounded, and positive definite. There is an inverse operator $R = (\mathcal{E} \circ \mathcal{E}^*)^{-1} \colon \mathcal{H} \to \mathcal{H}$, from which the pseudo-inverse operator $\mathcal{E}^- = \mathcal{E}^* \circ R$ can be formed. In this case the identities hold:

• $\mathcal{E} \circ \mathcal{E}^- = \mathrm{id}_{\mathcal{H}}$ is the identity on the Hilbert space, and
• $\mathcal{E}^- \circ \mathcal{E} = \mathrm{pr}_{\mathrm{im}(\mathcal{E}^*)}$ is the orthogonal projector onto the image of the adjoint operator, which is also the orthogonal complement of the kernel: $\mathrm{im}(\mathcal{E}^*) = \ker(\mathcal{E})^{\bot}$.

#### Smallest coefficient vector

With a frame $X$, every vector $v \in \mathcal{H}$ can be represented as a linear combination of the system $X$. Often there are several coefficient vectors that accomplish this; among all of them, $\mathcal{E}^-(v)$ is the smallest.
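This minimality can be checked numerically. The following Python sketch uses the "Mercedes-Benz" frame (three unit vectors in $\mathbb{R}^2$ at $120°$, a tight frame with $\mathcal{E} \circ \mathcal{E}^* = \tfrac{3}{2}\,\mathrm{id}$, so $\mathcal{E}^-(v) = \tfrac{2}{3}\,\mathcal{E}^*(v)$); the frame and the comparison coefficients are illustrative, not from the text:

```python
import math

# Tight frame with E o E* = (3/2) id, hence E^-(v) = (2/3) E*(v).
X = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
     for k in range(3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def synthesize(c):
    """E(c) = sum_k c[k] * x_k."""
    return (sum(ck * x[0] for ck, x in zip(c, X)),
            sum(ck * x[1] for ck, x in zip(c, X)))

def pseudo_inverse(v):
    """E^-(v) = (2/3) E*(v) for this tight frame."""
    return tuple(2.0 / 3.0 * dot(x, v) for x in X)

v = (1.0, 2.0)
c_min = pseudo_inverse(v)
# (1, 1, 1) lies in the kernel of E, since the frame vectors sum to zero;
# adding it gives another coefficient vector with the same synthesis.
c_other = tuple(ck + 1.0 for ck in c_min)

assert all(abs(a - b) < 1e-12 for a, b in zip(synthesize(c_min), v))
assert all(abs(a - b) < 1e-12 for a, b in zip(synthesize(c_other), v))
# E^-(v) is the smallest among all representing coefficient vectors:
assert sum(t * t for t in c_min) < sum(t * t for t in c_other)
print("minimal-norm coefficients confirmed")
```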

#### Dual frame

For a frame $X$ there is a dual frame $RX = \{Rx :\; x \in X\}$, where $R = (\mathcal{E} \circ \mathcal{E}^*)^{-1}$ is the inverse operator defined above. This system is itself a frame, with frame constants $0 < \tfrac{1}{B} \leq \tfrac{1}{A} < \infty$; it is dual in the sense that the identity $v = (\mathcal{E} \circ \mathcal{E}^-)(v)$ can be expanded as

$$v = \sum_{x \in X} \langle x, Rv\rangle\, x = \sum_{x \in X} \langle Rx, v\rangle\, x,$$

i.e. the scalar products with the vectors of the dual frame yield the components of the smallest coefficient vector for $v$.

#### Parseval frame

A tight frame $X$ whose frame constants are both equal to 1 is called a Parseval frame, because for it Parseval's equation

$$\forall v \in \mathcal{H} :\; \|v\|^2 = \sum_{x \in X} |\langle v, x\rangle|^2$$

applies. This is equivalent to X being its own dual frame, i.e. any vector can be expanded as

$$v = \sum_{x \in X} \langle x, v\rangle\, x.$$

The following holds: if the vectors of a Parseval frame $X$ are all unit vectors, then $X$ is already a Hilbert basis.
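Scaling the "Mercedes-Benz" frame (three unit vectors in $\mathbb{R}^2$ at $120°$, frame constants $A = B = \tfrac{3}{2}$) by $\sqrt{2/3}$ yields a Parseval frame. A Python sketch verifying both Parseval's equation and the self-dual expansion (the example is illustrative, not from the text):

```python
import math

# A Parseval frame in R^2: the Mercedes-Benz frame scaled by sqrt(2/3).
s = math.sqrt(2.0 / 3.0)
X = [(s * math.cos(2 * math.pi * k / 3), s * math.sin(2 * math.pi * k / 3))
     for k in range(3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for v in [(1.0, 0.0), (-0.7, 2.5)]:
    # Parseval's equation: ||v||^2 = sum_x |<v, x>|^2
    assert abs(dot(v, v) - sum(dot(v, x) ** 2 for x in X)) < 1e-12
    # Self-dual expansion: v = sum_x <x, v> x
    w = (sum(dot(x, v) * x[0] for x in X),
         sum(dot(x, v) * x[1] for x in X))
    assert all(abs(a - b) < 1e-12 for a, b in zip(v, w))
print("Parseval frame verified")
```

Note that these frame vectors have norm $\sqrt{2/3} < 1$, consistent with the statement above: a Parseval frame of unit vectors would already be a Hilbert basis, which three vectors in a plane cannot be.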

### For Riesz bases

For a Riesz basis, the constants $0 < A \leq B < \infty$ from the definition of the Riesz system agree with the frame constants, and the pseudo-inverse operator $\mathcal{E}^-$ is actually already the inverse operator of $\mathcal{E}$.

If in addition $A = B = 1$, then $X$ is already a complete orthonormal system, i.e. a Hilbert basis. In this case Parseval's equation holds:

$$\forall v \in \mathcal{H} :\; \|v\|^2 = \sum_{x \in X} |\langle v, x\rangle|^2,$$

which is equivalent to

$$\forall v \in \mathcal{H} :\; v = \sum_{x \in X} \langle v, x\rangle\, x,$$

as well as

$$\forall c \in \ell^2(X)\;\forall y \in X :\; c_y = \Bigl\langle \sum_{x \in X} c_x\, x,\; y \Bigr\rangle,$$

equivalent to

$$\forall c \in \ell^2(X) :\; \|c\| = \Bigl\|\sum_{x \in X} c_x\, x\Bigr\|.$$