# Orthogonal projection

Orthogonal projection of a point $P$ onto a plane $E$: the connecting segment between the point and its image $P'$ forms a right angle with the plane.

An orthogonal projection (from Greek ὀρθός orthós "straight", γωνία gōnía "angle", and Latin prōicere, past participle prōiectum, "to throw forward"), also called a perpendicular or vertical projection, is a mapping that is used in many areas of mathematics. In geometry, an orthogonal projection maps a point onto a straight line or a plane in such a way that the segment connecting the point and its image forms a right angle with that line or plane. The image then has the shortest distance to the starting point among all points of the line or plane. An orthogonal projection is thus a special case of a parallel projection in which the projection direction coincides with the normal direction of the line or plane.

In linear algebra, this concept is extended to higher-dimensional vector spaces over the real or complex numbers and to more general notions of angle and distance. An orthogonal projection is then the projection of a vector onto a subspace such that the difference between the image and the original vector lies in the orthogonal complement of the subspace. In functional analysis, the concept is taken further to infinite-dimensional inner product spaces and applied in particular to functions. The projection theorem then guarantees the existence and uniqueness of such orthogonal projections.

Orthogonal projections have a wide range of applications within mathematics, for example in descriptive geometry, the Gram-Schmidt orthogonalization method, the method of least squares, the method of conjugate gradients, Fourier analysis, and best approximation. They also have applications in cartography, architecture, computer graphics, and physics, among other fields.

## Descriptive geometry

Principle of a three-panel projection

In descriptive geometry and technical drawing, projections are used to produce two-dimensional images of three-dimensional geometric bodies. In addition to the central projection, parallel projections are often used here. A parallel projection is a mapping that takes points of three-dimensional space to points of a given image plane, the projection rays being parallel to one another. If the projection rays meet the projection plane at a right angle, one speaks of an orthogonal projection.

If, instead of one image plane, three projection planes perpendicular to one another are used, this is called a three-panel projection or normal projection. Usually the projection planes are parallel to the axes of the (Cartesian) coordinate system used. If a point in space has the coordinates $(x, y, z)$, its orthogonal projections onto the three coordinate planes are

$(x, y, z) \rightarrow (x, y, 0)$   (Projection onto the xy plane)
$(x, y, z) \rightarrow (x, 0, z)$   (Projection onto the xz plane)
$(x, y, z) \rightarrow (0, y, z)$   (Projection onto the yz plane)

If a projection plane runs parallel to two of the coordinate axes but not through the origin of the coordinate system, the projected point is obtained by replacing the value $0$ with the coordinate of the intersection of the plane with the third coordinate axis. In orthogonal axonometry, for example isometry or dimetry, the object to be imaged is rotated in a specific way before projection.
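The coordinate-plane projections above simply zero one coordinate; a minimal Python sketch (the function name is illustrative):

```python
def project_to_coordinate_plane(p, drop_axis):
    """Orthogonally project the point p = (x, y, z) onto a coordinate
    plane by zeroing the coordinate perpendicular to it
    (drop_axis: 0 = x, 1 = y, 2 = z)."""
    q = list(p)
    q[drop_axis] = 0
    return tuple(q)

print(project_to_coordinate_plane((1, 2, 3), 2))  # xy plane: (1, 2, 0)
print(project_to_coordinate_plane((1, 2, 3), 0))  # yz plane: (0, 2, 3)
```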

## Analytical geometry

Analytic geometry deals with the calculation and the mathematical properties of orthogonal projections in two- and three-dimensional space, especially for the case that the projection plane is not parallel to the coordinate axes.

### Projection onto a straight line

Orthogonal projection of a point $P$ onto a straight line $g$. The perpendicular $\overline{PP'}$ is perpendicular to the straight line.

#### Definition

In the Euclidean plane, an orthogonal projection maps a point $P$ onto a straight line $g$ in such a way that the segment connecting the point and its image $P'$ forms a right angle with the straight line. An orthogonal projection must therefore satisfy the two conditions

• $P' \in g$   (Projection)
• $\overline{PP'} \perp g$   (Orthogonality)

The segment $\overline{PP'}$ is called the perpendicular from the point to the straight line, and the projected point $P'$ is called the foot of the perpendicular. The construction of the perpendicular with compass and straightedge is a standard task of Euclidean geometry, known as dropping a perpendicular.

#### Derivation

Orthogonal projection $P_g(\vec{x})$ of a vector $\vec{x}$ onto a straight line $g$ with support vector $\vec{r}_0$ and direction vector $\vec{u}$

In analytic geometry, points in the Cartesian coordinate system are described by position vectors $\vec{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$, and straight lines are typically described by a line equation in parametric form $\vec{r} = \vec{r}_0 + \lambda \, \vec{u}$, where $\vec{r}_0$ is the position vector of a point on the line, $\vec{u}$ is the direction vector of the line, and $\lambda$ is a real parameter. Two vectors $\vec{x}$ and $\vec{y}$ form a right angle exactly when their scalar product satisfies $\vec{x} \cdot \vec{y} = 0$. The orthogonal projection $P_g(\vec{x})$ onto the straight line $g$ must satisfy the two conditions

• $P_g(\vec{x}) = \vec{r}_0 + \lambda \, \vec{u}$
• $(P_g(\vec{x}) - \vec{x}) \cdot \vec{u} = 0$

for some $\lambda \in \mathbb{R}$.

If the first equation is inserted into the second, one obtains

$(\vec{r}_0 + \lambda \, \vec{u} - \vec{x}) \cdot \vec{u} = 0$,

which, solved for $\lambda$, gives

$\lambda = \frac{\vec{x} \cdot \vec{u} - \vec{r}_0 \cdot \vec{u}}{\vec{u} \cdot \vec{u}} = \frac{(\vec{x} - \vec{r}_0) \cdot \vec{u}}{\vec{u} \cdot \vec{u}}$.

If the straight line is a line through the origin, then $\vec{r}_0 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ and the formula simplifies to

$P_g(\vec{x}) = \frac{\vec{x} \cdot \vec{u}}{\vec{u} \cdot \vec{u}} \, \vec{u}$.

If, in addition, the direction vector of the straight line is a unit vector, that is, $\vec{u} \cdot \vec{u} = 1$, the result is the simpler representation

$P_g(\vec{x}) = (\vec{x} \cdot \vec{u}) \, \vec{u}$.

The factor $\vec{x} \cdot \vec{u}$ then indicates how far the projected point on the straight line is from the origin. A point $\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$ in Euclidean space can be projected orthogonally onto a straight line in space in the same way; three components are simply used instead of two.

#### Examples

The orthogonal projection of the point $\vec{x} = \begin{pmatrix} 4 \\ 3 \end{pmatrix}$ onto the straight line through the origin with direction $\vec{u} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ in the Euclidean plane is

$P_g(\vec{x}) = \frac{4 \cdot 1 + 3 \cdot 2}{1 \cdot 1 + 2 \cdot 2} \cdot \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \frac{10}{5} \cdot \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \end{pmatrix}$.

The orthogonal projection of the point $\vec{x} = \begin{pmatrix} 3 \\ 9 \\ 6 \end{pmatrix}$ onto the straight line through the origin with direction $\vec{u} = \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}$ in Euclidean space is

$P_g(\vec{x}) = \frac{3 \cdot 2 + 9 \cdot 1 + 6 \cdot 2}{2 \cdot 2 + 1 \cdot 1 + 2 \cdot 2} \cdot \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} = \frac{27}{9} \cdot \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 6 \\ 3 \\ 6 \end{pmatrix}$.
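Both worked examples can be reproduced in a few lines of Python; the helper below is an illustrative sketch of the origin-line formula with the standard scalar product:

```python
def project_onto_line(x, u):
    """Orthogonal projection of x onto the line through the origin
    with direction u: ((x·u) / (u·u)) · u."""
    factor = sum(a * b for a, b in zip(x, u)) / sum(a * a for a in u)
    return [factor * a for a in u]

# The two examples from the text:
print(project_onto_line([4, 3], [1, 2]))        # [2.0, 4.0]
print(project_onto_line([3, 9, 6], [2, 1, 2]))  # [6.0, 3.0, 6.0]
```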

#### Properties

If the point $\vec{x}$ to be projected already lies on the straight line, then there is a number $\lambda$ with $\vec{x} = \lambda \vec{u}$, and the orthogonal projection

$P_g(\vec{x}) = \frac{\lambda \vec{u} \cdot \vec{u}}{\vec{u} \cdot \vec{u}} \, \vec{u} = \frac{\lambda (\vec{u} \cdot \vec{u})}{\vec{u} \cdot \vec{u}} \, \vec{u} = \lambda \vec{u} = \vec{x}$

leaves the point unchanged. Otherwise the orthogonal projection minimizes the distance from the starting point among all points of the straight line, since by the Pythagorean theorem the square of this distance satisfies

$| \lambda \vec{u} - \vec{x} |^2 = | \lambda \vec{u} - P_g(\vec{x}) |^2 + | P_g(\vec{x}) - \vec{x} |^2 \geq | P_g(\vec{x}) - \vec{x} |^2$

for all numbers $\lambda \in \mathbb{R}$. The minimum is attained uniquely at the orthogonally projected point, where $\lambda = \tfrac{\vec{x} \cdot \vec{u}}{\vec{u} \cdot \vec{u}}$ and the first summand is exactly zero. If the vectors $\vec{x}$ and $\vec{u}$ form a right angle, the projected point is the origin.

#### Calculation

The orthogonal projection of a point $\vec{x}$ onto a straight line $g$ that does not pass through the origin is given by

$P_g(\vec{x}) = \vec{r}_0 + \frac{(\vec{x} - \vec{r}_0) \cdot \vec{u}}{\vec{u} \cdot \vec{u}} \, \vec{u}$.

It is obtained by inserting the general line equation into the orthogonality condition and solving for the free parameter $\lambda$. The special cases above follow from the general case by shifting the support vector of the line to the origin and normalizing its direction vector, that is, dividing it by its magnitude. In the example in the figure above, $\vec{x} = \begin{pmatrix} 5 \\ 2 \end{pmatrix}$, $\vec{r}_0 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, and $\vec{u} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and thus $P_g(\vec{x}) = \begin{pmatrix} 3 \\ 4 \end{pmatrix}$.
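The general formula with a support vector can be checked against the figure's example; a small sketch with an illustrative function name:

```python
def project_onto_line_affine(x, r0, u):
    """Project x onto the line r0 + lambda*u via
    r0 + ((x - r0)·u / (u·u)) · u."""
    d = [a - b for a, b in zip(x, r0)]
    lam = sum(a * b for a, b in zip(d, u)) / sum(a * a for a in u)
    return [b + lam * a for a, b in zip(u, r0)]

# The example from the figure: x = (5, 2), r0 = (1, 2), u = (1, 1)
print(project_onto_line_affine([5, 2], [1, 2], [1, 1]))  # [3.0, 4.0]
```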

Alternatively, in the two-dimensional case, an orthogonal projection can also be calculated by determining the point of intersection of the original straight line with the perpendicular line. If $\vec{n}$ is a normal vector of the original line, then from the two conditions

• $P_g(\vec{x}) = \vec{x} - \mu \, \vec{n}$
• $(P_g(\vec{x}) - \vec{r}_0) \cdot \vec{n} = 0$

one obtains, by substituting the first equation into the second and solving for the free parameter $\mu$, the orthogonal projection

$P_g(\vec{x}) = \vec{x} - \frac{(\vec{x} - \vec{r}_0) \cdot \vec{n}}{\vec{n} \cdot \vec{n}} \, \vec{n}$.

A normal vector can be found by interchanging the two components of the direction vector of the straight line and reversing the sign of one of them. In the example above, $\vec{n} = (1, -1)$ is such a vector. Since a straight line in three-dimensional space does not have a distinguished normal direction, this simple approach is only possible in two dimensions.
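The normal-vector variant gives the same result on the figure's example; a sketch under the same assumptions (illustrative function name):

```python
def project_via_normal(x, r0, n):
    """Project x onto the line through r0 with normal vector n via
    x - ((x - r0)·n / (n·n)) · n."""
    d = [a - b for a, b in zip(x, r0)]
    mu = sum(a * b for a, b in zip(d, n)) / sum(a * a for a in n)
    return [a - mu * b for a, b in zip(x, n)]

# n = (1, -1) is a normal vector for the direction u = (1, 1):
print(project_via_normal([5, 2], [1, 2], [1, -1]))  # [3.0, 4.0]
```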

### Projection onto a plane

Orthogonal projection of a point $P$ onto a plane $E$. The perpendicular $\overline{PP'}$ is perpendicular to all straight lines $g$ in the plane through its foot $P'$.

#### Definition

In three-dimensional space, a point $P$ can also be projected orthogonally onto a plane $E$. An orthogonal projection must then satisfy the two conditions

• $P' \in E$   (Projection)
• $\overline{PP'} \perp E$   (Orthogonality)

Here, too, one speaks of the perpendicular and its foot $P'$. Orthogonality implies that the perpendicular is perpendicular to all straight lines of the plane through the foot.

#### Derivation

Orthogonal projection $P_E(\vec{x})$ of a vector $\vec{x}$ onto a plane $E$ with support vector $\vec{r}_0$ and orthogonal direction vectors $\vec{u}$ and $\vec{v}$

A point in Euclidean space is again given by a position vector

$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$,

and let the plane be given in parametric form

$\vec{r} = \vec{r}_0 + \lambda \vec{u} + \mu \vec{v}$,

where $\lambda$ and $\mu$ are real parameters and $\vec{u}$ and $\vec{v}$ are the span vectors of the plane, which must not be collinear.

Due to the linearity of the scalar product, it is sufficient to prove orthogonality with respect to the two span vectors instead of with respect to all vectors of the plane.

If the plane is a plane through the origin, that is, $\vec{r}_0 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$, the orthogonal projection $P_E(\vec{x})$ of the point $\vec{x}$ onto the plane $E$ must satisfy the following three conditions:

1. $P_E(\vec{x}) = \lambda \, \vec{u} + \mu \, \vec{v}$
2. $(P_E(\vec{x}) - \vec{x}) \cdot \vec{u} = 0$
3. $(P_E(\vec{x}) - \vec{x}) \cdot \vec{v} = 0$

Substituting the first equation into the other two equations yields

$\begin{aligned} \lambda \, \vec{u} \cdot \vec{u} + \mu \, \vec{v} \cdot \vec{u} &= \vec{x} \cdot \vec{u} \\ \lambda \, \vec{u} \cdot \vec{v} + \mu \, \vec{v} \cdot \vec{v} &= \vec{x} \cdot \vec{v} \end{aligned}$

a system of two linear equations in the two unknowns $\lambda$ and $\mu$. If the span vectors are orthogonal to one another, that is, $\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u} = 0$, this system decouples into two independent equations and its solution can be read off directly. The orthogonal projection of the point $\vec{x}$ onto the plane $E$ is then given by

$P_E(\vec{x}) = \frac{\vec{x} \cdot \vec{u}}{\vec{u} \cdot \vec{u}} \, \vec{u} + \frac{\vec{x} \cdot \vec{v}}{\vec{v} \cdot \vec{v}} \, \vec{v}$

If the span vectors are even orthonormal, that is, if additionally $\vec{u} \cdot \vec{u} = \vec{v} \cdot \vec{v} = 1$, then one has the simpler representation

$P_E(\vec{x}) = (\vec{x} \cdot \vec{u}) \, \vec{u} + (\vec{x} \cdot \vec{v}) \, \vec{v}$.

The orthogonal projection of a point onto a plane is thus obtained by determining the orthogonal projections $P_g(\vec{x})$ and $P_h(\vec{x})$ of the point onto the two straight lines $g$ and $h$ defined by the span vectors and summing the results (see figure).

#### Example

The orthogonal projection of the point $\vec{x} = \begin{pmatrix} 3 \\ 9 \\ 6 \end{pmatrix}$ onto the plane through the origin spanned by the orthogonal vectors $\vec{u} = \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}$ and $\vec{v} = \begin{pmatrix} 2 \\ -2 \\ -1 \end{pmatrix}$ is

$P_E(\vec{x}) = \frac{6 + 9 + 12}{4 + 1 + 4} \cdot \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} + \frac{6 - 18 - 6}{4 + 4 + 1} \cdot \begin{pmatrix} 2 \\ -2 \\ -1 \end{pmatrix} = \begin{pmatrix} 6 \\ 3 \\ 6 \end{pmatrix} + \begin{pmatrix} -4 \\ 4 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 \\ 7 \\ 8 \end{pmatrix}$.
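The plane example can likewise be verified numerically; this sketch assumes the span vectors are orthogonal, as in the text, and the function name is illustrative:

```python
def project_onto_plane(x, u, v):
    """Project x onto the origin plane spanned by the orthogonal
    vectors u and v: ((x·u)/(u·u))·u + ((x·v)/(v·v))·v."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    cu = dot(x, u) / dot(u, u)
    cv = dot(x, v) / dot(v, v)
    return [cu * a + cv * b for a, b in zip(u, v)]

print(project_onto_plane([3, 9, 6], [2, 1, 2], [2, -2, -1]))  # [2.0, 7.0, 8.0]
```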

#### Properties

If the point $\vec{x}$ to be projected already lies on the plane, there are numbers $\lambda$ and $\mu$ with $\vec{x} = \lambda \vec{u} + \mu \vec{v}$, and the orthogonal projection

$P_E(\vec{x}) = \frac{\lambda (\vec{u} \cdot \vec{u}) + \mu (\vec{v} \cdot \vec{u})}{\vec{u} \cdot \vec{u}} \, \vec{u} + \frac{\lambda (\vec{u} \cdot \vec{v}) + \mu (\vec{v} \cdot \vec{v})}{\vec{v} \cdot \vec{v}} \, \vec{v} = \lambda \vec{u} + \mu \vec{v} = \vec{x}$

leaves the point unchanged. Otherwise the orthogonally projected point minimizes the distance from the starting point among all points of the plane, since by the Pythagorean theorem the square of this distance satisfies

$| (\lambda \vec{u} + \mu \vec{v}) - \vec{x} |^2 = | (\lambda \vec{u} + \mu \vec{v}) - P_E(\vec{x}) |^2 + | P_E(\vec{x}) - \vec{x} |^2 \geq | P_E(\vec{x}) - \vec{x} |^2$

for all numbers $\lambda, \mu \in \mathbb{R}$. The minimum is attained uniquely at the orthogonally projected point, with $\lambda = \tfrac{\vec{x} \cdot \vec{u}}{\vec{u} \cdot \vec{u}}$ and $\mu = \tfrac{\vec{x} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}$. If $\vec{x}$ forms a right angle with both $\vec{u}$ and $\vec{v}$, the projected point is the origin.

#### Calculation

If a plane does not pass through the origin, it can be moved into the origin by translating it by $-\vec{r}_0$. If its span vectors are not orthogonal, they can be orthogonalized using the Gram-Schmidt method. To this end, one determines, for example, a vector $\vec{w}$ orthogonal to $\vec{u}$ as the connecting vector from the orthogonal projection of $\vec{v}$ onto the straight line in direction $\vec{u}$ to the point $\vec{v}$:

$\vec{w} = \vec{v} - \frac{\vec{v} \cdot \vec{u}}{\vec{u} \cdot \vec{u}} \, \vec{u}$

The general case of an orthogonal projection of a point $\vec{x}$ onto a plane $E$ is thus given by

$P_E(\vec{x}) = \vec{r}_0 + \frac{(\vec{x} - \vec{r}_0) \cdot \vec{u}}{\vec{u} \cdot \vec{u}} \, \vec{u} + \frac{(\vec{x} - \vec{r}_0) \cdot \vec{w}}{\vec{w} \cdot \vec{w}} \, \vec{w}$.

Alternatively, an orthogonal projection can also be calculated as the intersection of the perpendicular line with the plane. A normal vector $\vec{n}$ of the plane can, if the plane is not given in normal form, be computed via the cross product of the (not necessarily orthogonal, but non-collinear) span vectors as $\vec{n} = \vec{u} \times \vec{v}$. As in the two-dimensional case, one then obtains the orthogonal projection

$P_E(\vec{x}) = \vec{x} - \frac{(\vec{x} - \vec{r}_0) \cdot \vec{n}}{\vec{n} \cdot \vec{n}} \, \vec{n}$.
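The normal-vector route through the cross product can be sketched as follows; the function name is illustrative, and the result agrees with the span-vector formula up to floating-point rounding:

```python
def project_onto_plane_via_normal(x, r0, u, v):
    """Project x onto the plane through r0 spanned by u and v,
    using the normal n = u x v and x - ((x - r0)·n / (n·n)) · n."""
    n = [u[1] * v[2] - u[2] * v[1],   # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    d = [a - b for a, b in zip(x, r0)]
    mu = sum(a * b for a, b in zip(d, n)) / sum(a * a for a in n)
    return [a - mu * b for a, b in zip(x, n)]

# Origin plane of the earlier example; u and v need not be orthogonal:
print(project_onto_plane_via_normal([3, 9, 6], [0, 0, 0],
                                    [2, 1, 2], [2, -2, -1]))
# close to [2.0, 7.0, 8.0] up to rounding
```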

## Linear Algebra

In linear algebra, the concept of orthogonal projection is generalized to general vector spaces $V$ of finite dimension $n$ over the field $\mathbb{K}$ of real or complex numbers, as well as to general scalar products $\langle \cdot, \cdot \rangle$ and thus general notions of orthogonality. Two vectors $v, w \in V$ are by definition orthogonal if and only if their scalar product satisfies $\langle v, w \rangle = 0$.

### Algebraic representation

#### Definition

An orthogonal projection onto a subspace $U$ of a vector space $V$ is a linear mapping $P_U \colon V \rightarrow V$ that, for all vectors $v \in V$, satisfies the two properties

• $P_U(v) \in U$   (Projection)
• $\langle P_U(v) - v, u \rangle = 0$   for all $u \in U$   (Orthogonality)

The difference vector $P_U(v) - v$ thus lies in the orthogonal complement $U^\perp$ of $U$. The orthogonal complement is itself a subspace, consisting of those vectors in $V$ that are orthogonal to all vectors in $U$.

#### Representation

If $\{u_1, \ldots, u_k\}$ is a basis of the subspace $U$ of dimension $k$, then each vector $u \in U$ has a unique representation as a linear combination $u = c_1 u_1 + \ldots + c_k u_k$. Due to the sesquilinearity of the scalar product, it is sufficient to require orthogonality with respect to the basis vectors rather than with respect to all vectors of the subspace. An orthogonal projection $P_U$ must therefore satisfy the conditions

• $P_U(v) = \sum_{i=1}^{k} c_i \, u_i$
• $\langle P_U(v) - v, u_j \rangle = 0$   for   $j = 1, \ldots, k$

Substituting the first equation into the remaining conditions yields

$\sum_{i=1}^{k} c_i \, \langle u_i, u_j \rangle = \langle v, u_j \rangle$   for   $j = 1, \ldots, k$,

a linear system of $k$ equations in the $k$ unknowns $c_1, \ldots, c_k$. The underlying Gram matrix $(\langle u_i, u_j \rangle)_{i,j}$ is regular due to the linear independence of the basis vectors, so this system has a unique solution. If $\{u_1, \ldots, u_k\}$ is an orthogonal basis of $U$, that is, $\langle u_i, u_j \rangle = 0$ for $i \neq j$, then the associated Gram matrix is a diagonal matrix and the solution can be written down directly. The orthogonal projection $P_U$ of the vector $v$ onto the subspace $U$ is then given by

$P_U(v) = \sum_{i=1}^{k} \frac{\langle v, u_i \rangle}{\langle u_i, u_i \rangle} u_i$.

If $\{u_1, \ldots, u_k\}$ even forms an orthonormal basis, that is, $\langle u_i, u_j \rangle = \delta_{ij}$ with the Kronecker delta $\delta_{ij}$, then the orthogonal projection has the simpler representation

$P_U(v) = \sum_{i=1}^{k} \langle v, u_i \rangle \, u_i$.
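For a pairwise-orthogonal basis, the sum formula translates directly into code; this sketch uses the standard scalar product on $\mathbb{R}^n$ and an illustrative function name:

```python
def project_onto_subspace(v, basis):
    """Project v onto span(basis) for a pairwise-orthogonal basis,
    via sum_i (<v, u_i> / <u_i, u_i>) u_i with the standard
    scalar product."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    result = [0.0] * len(v)
    for u in basis:
        c = dot(v, u) / dot(u, u)
        result = [r + c * a for r, a in zip(result, u)]
    return result

# With two orthogonal basis vectors this reduces to the plane example:
print(project_onto_subspace([3, 9, 6], [[2, 1, 2], [2, -2, -1]]))
# [2.0, 7.0, 8.0]
```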

#### Examples

If one chooses as the vector space $V$ the coordinate space $\mathbb{R}^n$ and as the scalar product $\langle \cdot, \cdot \rangle$ the standard scalar product, then a subspace is a linear manifold (such as a straight line, plane, or hyperplane) through the origin, and the orthogonal projections of the preceding geometry sections correspond exactly to the special cases

• Projection onto a straight line through the origin in the plane: $n = 2$, $k = 1$
• Projection onto a straight line through the origin in space: $n = 3$, $k = 1$
• Projection onto a plane through the origin in space: $n = 3$, $k = 2$

The case $k = 0$ corresponds to mapping every vector to the origin, and in the case $k = n$ every vector remains unchanged, since the orthogonal projection is then the identity mapping.

#### Properties

An orthogonal projection is a projection, that is, an idempotent linear mapping of the vector space $V$ into itself (an endomorphism). If the vector $v$ to be projected is already an element of the subspace, then there are scalars $c_1, \ldots, c_k$ such that $v = c_1 u_1 + \ldots + c_k u_k$, and the orthogonal projection

$P_U(v) = \sum_{i=1}^{k} \left\langle \sum_{j=1}^{k} c_j u_j, u_i \right\rangle u_i = \sum_{i=1}^{k} \sum_{j=1}^{k} c_j \langle u_j, u_i \rangle \, u_i = c_1 u_1 + \ldots + c_k u_k = v$

leaves the vector unchanged, from which idempotency follows. The linearity of the mapping follows directly from the sesquilinearity of the scalar product. In addition, self-adjointness holds:

$\langle P_U(v), w \rangle = \left\langle \sum_{i=1}^{k} \langle v, u_i \rangle u_i, w \right\rangle = \sum_{i=1}^{k} \langle v, u_i \rangle \langle u_i, w \rangle = \left\langle v, \sum_{i=1}^{k} \langle w, u_i \rangle u_i \right\rangle = \langle v, P_U(w) \rangle$

for all vectors $v, w \in V$. The orthogonally projected vector minimizes the distance from the original vector among all vectors of the subspace with respect to the norm $\| \cdot \|$ induced by the scalar product, because by the Pythagorean theorem for inner product spaces

$\| u - v \|^2 = \| u - P_U(v) \|^2 + \| P_U(v) - v \|^2 \geq \| P_U(v) - v \|^2$

holds for all $u \in U$. The minimum is attained uniquely at the orthogonally projected vector. If the vector $v$ lies in the orthogonal complement of the subspace, then the projected vector is the zero vector.

#### General case

If the basis $\{u_1, \ldots, u_k\}$ of the subspace $U$ is not orthogonal, it can be orthogonalized with the Gram-Schmidt method, yielding an orthogonal basis $\{w_1, \ldots, w_k\}$ of $U$. Furthermore, a vector can also be projected onto an affine subspace $U_0 = r_0 + U$ with $r_0 \in V$. The general case of the orthogonal projection of a vector $v$ onto an affine subspace $U_0$ is then

$P_{U_0}(v) = r_0 + \sum_{i=1}^{k} \frac{\langle v - r_0, w_i \rangle}{\langle w_i, w_i \rangle} w_i$.
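A sketch of this general case: Gram-Schmidt first, then the affine projection formula. The function names are illustrative, and the basis in the first call is deliberately non-orthogonal; it spans the same plane as the earlier example:

```python
def gram_schmidt(basis):
    """Orthogonalize a list of vectors (Gram-Schmidt, unnormalized)."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = dot(v, u) / dot(u, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

def project_affine(v, r0, basis):
    """Project v onto the affine subspace r0 + span(basis)."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    d = [a - b for a, b in zip(v, r0)]
    p = [float(c) for c in r0]
    for w in gram_schmidt(basis):
        c = dot(d, w) / dot(w, w)
        p = [pi + c * wi for pi, wi in zip(p, w)]
    return p

# (2,1,2) and (4,-1,1) span the same plane as (2,1,2) and (2,-2,-1):
print(project_affine([3, 9, 6], [0, 0, 0], [[2, 1, 2], [4, -1, 1]]))
# [2.0, 7.0, 8.0]
# The affine line example from the geometry section also fits (k = 1):
print(project_affine([5, 2], [1, 2], [[1, 1]]))  # [3.0, 4.0]
```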

#### Complementary representation

If $\{w_{k+1}, \ldots, w_n\}$ is an orthogonal complementary basis of $U$, that is, an orthogonal basis of the orthogonal complement $U^\perp$, then, due to

$v = \sum_{i=1}^{k} \frac{\langle v, w_i \rangle}{\langle w_i, w_i \rangle} w_i + \sum_{i=k+1}^{n} \frac{\langle v, w_i \rangle}{\langle w_i, w_i \rangle} w_i = P_U(v) + P_{U^\perp}(v)$,

one obtains the complementary representation of an orthogonal projection $P_{U_0}$ onto an affine subspace $U_0 = r_0 + U$ as

$P_{U_0}(v) = v - \sum_{i=k+1}^{n} \frac{\langle v - r_0, w_i \rangle}{\langle w_i, w_i \rangle} w_i$.

### Matrix representation

#### Coordinates

If one chooses for the vector space $V$ an orthonormal basis $\{e_1, \ldots, e_n\}$ with respect to the scalar product $\langle \cdot, \cdot \rangle$, then every vector $v \in V$ can be represented by a coordinate vector $x = (x_1, \ldots, x_n)^T \in \mathbb{K}^n$ via

$v = \sum_{i=1}^{n} x_i e_i$   with   $x_i = \langle v, e_i \rangle$.

The coordinates $x_1, \ldots, x_n$ are exactly the lengths of the orthogonal projections of the vector onto the basis vectors. In coordinate representation, the scalar product $\langle v, w \rangle$ of two vectors $v, w \in V$ is then the standard scalar product $y^H x$ of the associated coordinate vectors $x, y \in \mathbb{K}^n$, where $y^H$ denotes the adjoint (in the real case transposed) vector of $y$.

#### Representation

If now $y_1, \ldots, y_k \in \mathbb{K}^n$ are the coordinate vectors of an orthogonal basis $\{u_1, \ldots, u_k\}$ of a subspace $U$, and $x$ is the coordinate vector of the vector $v$ to be projected, then the orthogonal projection has the coordinate representation

$P_U(v) = \sum_{i=1}^{k} \frac{y_i^H x}{y_i^H y_i} \, y_i = \sum_{i=1}^{k} \frac{y_i y_i^H}{y_i^H y_i} \, x$.

In coordinate representation, an orthogonal projection is thus simply given by a matrix-vector product $Q_U x$ with the mapping matrix $Q_U \in \mathbb{K}^{n \times n}$,

$Q_U = \sum_{i=1}^{k} \frac{y_i y_i^H}{y_i^H y_i}$.

If the coordinate vectors $y_1, \ldots, y_k$ form an orthonormal basis of $U$, the orthogonal projection matrix $Q_U$ has the simpler representation

$Q_U = \sum_{i=1}^{k} y_i y_i^H$.

Each summand $y_i y_i^H$ is the dyadic product of a coordinate vector with itself.

#### Examples

In the coordinate space $\mathbb{R}^3$, the orthogonal projection matrix onto the straight line through the origin with direction $y_1 = (2, 1, 2)^T$ is given by

$Q_U = \frac{1}{9} \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} \begin{pmatrix} 2 & 1 & 2 \end{pmatrix} = \frac{1}{9} \begin{pmatrix} 4 & 2 & 4 \\ 2 & 1 & 2 \\ 4 & 2 & 4 \end{pmatrix}$.

The orthogonal projection matrix onto the plane through the origin spanned by $y_1 = (2,1,2)^T$ and $y_2 = (2,-2,-1)^T$ is correspondingly

${\displaystyle Q_{U}={\frac {1}{9}}{\begin{pmatrix}4&2&4\\2&1&2\\4&2&4\end{pmatrix}}+{\frac {1}{9}}{\begin{pmatrix}4&-4&-2\\-4&4&2\\-2&2&1\end{pmatrix}}={\frac {1}{9}}{\begin{pmatrix}8&-2&2\\-2&5&4\\2&4&5\end{pmatrix}}}$.
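These two example matrices can be checked numerically; the following NumPy snippet (an illustration, assuming real arithmetic) reproduces them from the dyadic-product formula:

```python
import numpy as np

y1 = np.array([2, 1, 2])
y2 = np.array([2, -2, -1])  # orthogonal to y1

# Projection matrix onto the line through the origin with direction y1
Q_line = np.outer(y1, y1) / (y1 @ y1)

# Projection matrix onto the plane spanned by y1 and y2:
# sum of the two rank-one projections, since y1 and y2 are orthogonal
Q_plane = Q_line + np.outer(y2, y2) / (y2 @ y2)
```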

#### Properties

An orthogonal projection matrix $Q_U$ is idempotent, that is,

${\displaystyle (Q_{U})^{2}=\left(\sum _{i=1}^{k}y_{i}y_{i}^{H}\right)\left(\sum _{j=1}^{k}y_{j}y_{j}^{H}\right)=\sum _{i=1}^{k}\sum _{j=1}^{k}y_{i}(y_{i}^{H}y_{j})y_{j}^{H}=\sum _{i=1}^{k}y_{i}y_{i}^{H}=Q_{U}}$,

since the orthonormal basis vectors satisfy $y_i^H y_j = 1$ for $i = j$ and $y_i^H y_j = 0$ otherwise.

Furthermore, it is self-adjoint (symmetric in the real case), since

${\displaystyle (Q_{U})^{H}=\left(\sum _{i=1}^{k}y_{i}y_{i}^{H}\right)^{H}=\sum _{i=1}^{k}(y_{i}y_{i}^{H})^{H}=\sum _{i=1}^{k}y_{i}y_{i}^{H}=Q_{U}}$.

The following applies to the rank and the trace of an orthogonal projection matrix:

${\displaystyle \operatorname {rank} Q_{U}=\operatorname {tr} Q_{U}=k}$,

because rank and trace coincide for idempotent matrices and the individual matrices $y_i y_i^H$ each have rank one. The eigenvalues of an orthogonal projection matrix are $\lambda_1 = \ldots = \lambda_k = 1$ and $\lambda_{k+1} = \ldots = \lambda_n = 0$, where the associated eigenspaces are precisely the sub-vector space $U$ and its orthogonal complement $U^\perp$. The spectral norm of an orthogonal projection matrix is therefore equal to one, unless $U$ is the zero vector space $\{0\}$.
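A quick numerical check of these properties, using the normalized vectors from the examples above (NumPy, illustrative only):

```python
import numpy as np

# Orthonormal basis of a 2-dimensional subspace U of R^3 (so k = 2)
y1 = np.array([2, 1, 2]) / 3.0
y2 = np.array([2, -2, -1]) / 3.0
Q = np.outer(y1, y1) + np.outer(y2, y2)

idempotent = np.allclose(Q @ Q, Q)        # Q^2 = Q
symmetric = np.allclose(Q, Q.T)           # self-adjoint (real case)
trace = np.trace(Q)                       # equals rank(Q) = k = 2
eigvals = np.linalg.eigvalsh(Q)           # k ones and n - k zeros
```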

#### General case

If the coordinate vectors $y_1, \ldots, y_k$ form a basis, but not an orthogonal basis, of the sub-vector space, then an orthogonal projection can be computed either by first orthogonalizing them or by solving a corresponding linear system of equations. If the basis vectors are combined column by column into a matrix $A = (y_1, \ldots, y_k) \in \mathbb{K}^{n \times k}$, then this system of equations has the form of the normal equations

${\displaystyle A^{H}Ac=A^{H}x}$

with the coefficient vector $c = (c_1, \ldots, c_k)^T$. The matrix representation of an orthogonal projection $Q_U x = Ac$ is then given by

${\displaystyle Q_{U}=A(A^{H}A)^{-1}A^{H}}$.
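A minimal sketch of this computation (NumPy, with an arbitrarily chosen non-orthogonal basis, not from the article): the normal equations are solved for the coefficients, and the explicit projection matrix is formed for comparison.

```python
import numpy as np

# A non-orthogonal basis of a plane in R^3, stacked as columns of A
A = np.column_stack([[1, 1, 0], [1, 0, 1]])
x = np.array([0.0, 0.0, 3.0])   # vector to be projected

# Solve the normal equations A^H A c = A^H x for the coefficients c
c = np.linalg.solve(A.T @ A, A.T @ x)
p = A @ c                        # the projection Q_U x = A c

# Explicit projection matrix Q_U = A (A^H A)^{-1} A^H
Q = A @ np.linalg.inv(A.T @ A) @ A.T
```

The residual $x - p$ is orthogonal to every basis vector, which is exactly what the normal equations express.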

This matrix is widely used in statistics (see projection matrix (statistics)). An orthogonal projection onto an affine subspace $U_0 = r_0 + U$ is then the affine mapping with the matrix representation

${\displaystyle P_{U_{0}}(v)=Q_{U}x+(I-Q_{U})s}$

with the identity matrix $I$ and with $s = (s_1, \ldots, s_n)^T$ as the coordinate vector of $r_0$. Using homogeneous coordinates, each orthogonal projection can also be represented as a simple matrix-vector product.
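The affine case can be sketched as follows (NumPy; the line, support point, and test vector are illustrative choices, not from the article):

```python
import numpy as np

# Projection onto the affine line U0 = r0 + span{(1, 1)} in R^2
y = np.array([1.0, 1.0])
Q = np.outer(y, y) / (y @ y)   # projection matrix onto span{(1, 1)}
s = np.array([0.0, 2.0])       # coordinate vector of the support point r0
I = np.eye(2)

def project_affine(x):
    """Affine mapping P(v) = Q x + (I - Q) s."""
    return Q @ x + (I - Q) @ s

p = project_affine(np.array([2.0, 0.0]))
```

Applying the mapping twice leaves the result unchanged, as expected of a projection.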

#### Complementary representation

An orthogonal projection onto an affine subspace $U_0$ has the complementary matrix representation

${\displaystyle P_{U_{0}}(v)=(I-Q_{U^{\perp }})x+Q_{U^{\perp }}s}$

with the orthogonal projection matrix $Q_{U^\perp} \in \mathbb{K}^{n \times n}$ onto the complementary space, given by

${\displaystyle Q_{U^{\perp }}=I-Q_{U}}$.

If the coordinate vectors $y_{k+1}, \ldots, y_n$ form an orthogonal basis of the complementary space $U^\perp$, the complementary orthogonal projection matrix has the representation

${\displaystyle Q_{U^{\perp }}=\sum _{i=k+1}^{n}{\frac {y_{i}y_{i}^{H}}{y_{i}^{H}y_{i}}}}$.
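A common use of the complementary representation is projecting onto a plane given only its normal vector; a NumPy sketch (the normal and test vector are arbitrary choices):

```python
import numpy as np

# Plane through the origin in R^3 with normal n, so U^perp = span{n}
n = np.array([0.0, 0.0, 1.0])
I = np.eye(3)
Q_perp = np.outer(n, n) / (n @ n)   # projection onto U^perp
Q = I - Q_perp                      # projection onto the plane U itself

x = np.array([1.0, 2.0, 5.0])
p = Q @ x                           # drops the normal component of x
```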

## Functional analysis

In functional analysis , the concept of orthogonal projection is generalized to infinite-dimensional scalar product spaces over real or complex numbers and is particularly applied to function spaces .

### Definition

If $(V, \langle \cdot, \cdot \rangle)$ is a scalar product space and $U$ is a sub-vector space of $V$ with orthogonal complement $U^\perp$, then an orthogonal projection is an operator $P_U \colon V \rightarrow V$ (also called an orthogonal projector) with the two properties

• ${\displaystyle \operatorname {im} P_{U}=U}$   (projection)
• ${\displaystyle \operatorname {ker} P_{U}=U^{\perp }}$   (orthogonality)

where $\operatorname{im}$ denotes the image and $\operatorname{ker}$ the kernel of the operator. The complementary operator $I - P_U$ then has image $U^\perp$ and kernel $U$.

### Existence and uniqueness

Orthogonal decomposition of a vector $v$ into a part $u$ in a plane $U$ and a part $u^\perp$ in the orthogonal complement $U^\perp$ of the plane

In order for such orthogonal projections to exist and be unique, however, the spaces under consideration must be restricted. If $V$ is a Hilbert space, i.e. a complete scalar product space, and $U$ is a closed sub-vector space of $V$, then the projection theorem ensures the existence and uniqueness of orthogonal projections. For each vector $v \in V$ there are then unique vectors $u \in U$ and $u^\perp \in U^\perp$ such that this vector has the representation

${\displaystyle v=u+u^{\perp }}$.

Thus $U$ and $U^\perp$ form an orthogonal decomposition of $V$, that is, the entire space $V$ can be represented as the orthogonal sum $U \oplus U^{\perp}$. A finite-dimensional sub-vector space is always closed, and in that case the completeness of $V$ can also be dispensed with.
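In finite dimensions this decomposition is easy to illustrate (a NumPy sketch with an arbitrarily chosen plane in $\mathbb{R}^3$, not from the original text):

```python
import numpy as np

# U is the xy-plane in R^3; every v splits uniquely as v = u + u_perp
y1 = np.array([1.0, 0.0, 0.0])
y2 = np.array([0.0, 1.0, 0.0])
Q = np.outer(y1, y1) + np.outer(y2, y2)   # projection onto U

v = np.array([3.0, -1.0, 4.0])
u = Q @ v          # component of v in U
u_perp = v - u     # component of v in U^perp
```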

### Representation

Every Hilbert space $V$ has an orthonormal basis, which, however, cannot always be stated explicitly. If $V$ is a separable Hilbert space, however, such an orthonormal basis $\{e_1, e_2, \ldots\}$ can be chosen countable as a Schauder basis, so that every vector $v \in V$ can be expanded in a series

${\displaystyle v=\sum _{i=1}^{\infty }\langle v,e_{i}\rangle e_{i}}$.

Such an orthonormal basis can always be obtained from a linearly independent subset of $V$ with the aid of the Gram-Schmidt orthogonalization method. If now $\{u_1, u_2, \ldots\}$ is a (possibly likewise countable) orthonormal basis of a closed sub-vector space $U$, then an orthogonal projection has the series representation

${\displaystyle P_{U}=\sum _{i=1}^{\infty }\langle \cdot ,u_{i}\rangle u_{i}}$.

This representation can also be generalized to non-separable, i.e. uncountably infinite-dimensional, Hilbert spaces. If $U$ is a closed sub-vector space of a general Hilbert space and $\{u_i\}_{i \in I}$ is an orthonormal basis of this sub-vector space with an arbitrary index set $I$, then an orthogonal projection has the corresponding representation

${\displaystyle P_{U}=\sum _{i\in I}\langle \cdot ,u_{i}\rangle u_{i}}$,

where only countably many terms of this sum are nonzero. These series are unconditionally convergent by Bessel's inequality, and by Parseval's equation each element of $U$ is actually mapped onto itself.
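The series representation can be illustrated numerically with a truncated trigonometric system (a sketch assuming the $L^2$ space on $[-\pi, \pi]$ and a simple trapezoidal quadrature; none of these choices are prescribed by the text):

```python
import numpy as np

# Truncation of P_U = sum_i <., u_i> u_i for the orthonormal system
# {sin(k x) / sqrt(pi)} on [-pi, pi], evaluated on a fine grid.
xs = np.linspace(-np.pi, np.pi, 20001)
dx = xs[1] - xs[0]

def inner(f, g):
    """Trapezoidal approximation of the L^2 scalar product <f, g>."""
    fg = f * g
    return dx * (fg.sum() - 0.5 * (fg[0] + fg[-1]))

f = xs.copy()   # project f(x) = x onto span{sin(x), sin(2x)} (normalized)
proj = np.zeros_like(xs)
for k in (1, 2):
    u = np.sin(k * xs) / np.sqrt(np.pi)
    proj += inner(f, u) * u
```

The result matches the first two terms of the classical Fourier series of $x$, namely $2\sin x - \sin 2x$.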

### Example

$L^2$-best approximations of the exponential function on the interval $[-1,1]$ by constant, linear and quadratic polynomials

Given is the space $L^2$ of square-integrable real functions on the interval $[-1,1]$ with the $L^2$ scalar product

${\displaystyle \langle f,g\rangle =\int _{-1}^{1}f(x)g(x)~dx}$.

For this space, the Legendre polynomials form a complete orthogonal system. We now seek the orthogonal projection of the exponential function $f(x) = e^x$ onto the subspace of linear functions. For this subspace, the two monomials $\{1, x\}$ form an orthogonal basis, which after normalization yields the orthonormal basis

${\displaystyle \{u_{1}(x)={\tfrac {1}{\sqrt {2}}},~u_{2}(x)={\tfrac {\sqrt {3}}{\sqrt {2}}}\,x\}}$.

The orthogonal projection of $f$ onto the subspace of linear functions is then given by

${\displaystyle P_{U}f=\langle f,u_{1}\rangle u_{1}+\langle f,u_{2}\rangle u_{2}={\tfrac {1}{\sqrt {2}}}(e-{\tfrac {1}{e}})\cdot {\tfrac {1}{\sqrt {2}}}+{\tfrac {\sqrt {6}}{e}}\cdot {\tfrac {\sqrt {3}}{\sqrt {2}}}\,x={\tfrac {1}{2}}(e-{\tfrac {1}{e}})+{\tfrac {3}{e}}\,x}$.
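This result can be verified numerically; the sketch below (NumPy with trapezoidal quadrature, an illustrative choice) recomputes the two coefficients and the projected function:

```python
import math
import numpy as np

# Numerical check of the projection of exp onto the linear functions
# on [-1, 1]: P_U f(x) = (e - 1/e)/2 + (3/e) x
xs = np.linspace(-1.0, 1.0, 20001)
dx = xs[1] - xs[0]

def inner(f, g):
    """Trapezoidal approximation of the L^2 scalar product <f, g>."""
    fg = f * g
    return dx * (fg.sum() - 0.5 * (fg[0] + fg[-1]))

u1 = np.full_like(xs, 1.0 / math.sqrt(2.0))   # normalized constant
u2 = math.sqrt(3.0 / 2.0) * xs                # normalized linear monomial
f = np.exp(xs)

c1 = inner(f, u1)            # should be (e - 1/e) / sqrt(2)
c2 = inner(f, u2)            # should be sqrt(6) / e
proj = c1 * u1 + c2 * u2     # the L^2-best linear approximation
```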

### Properties

If $V$ is a Hilbert space and $U$ a closed sub-vector space of $V$, then $P_U$ is a continuous linear operator with the following properties:

• $P_U$ is a projection, that is, $P_U^2 = P_U$.
• $P_U$ is self-adjoint, i.e. $P_U = P_U^\ast$ with the adjoint operator $P_U^\ast$.
• $P_U$ is normal, that is, $P_U^\ast P_U = P_U P_U^\ast$.
• $P_U$ is positive, that is, $\langle P_U v, v \rangle \geq 0$ for all $v \in V$.
• $P_U$ is a partial isometry whose isometric part is the identity.
• $P_U$ is compact if and only if $U$ is finite-dimensional.
• $P_U v$ is the best approximation of $v$ in the scalar product norm, that is, $\| P_U v - v \| = \inf_{u \in U} \| u - v \|$.
• $\| P_U \| = 1$ if $U \neq \{0\}$, and $\| I - P_U \| = 1$ if $U \neq V$ (in the operator norm).
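The best-approximation and norm properties can be illustrated in a finite-dimensional special case (a NumPy sketch; the subspace and the random sampling are arbitrary choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# U = span{(1,0,0), (0,1,0)} in R^3: P_U zeroes out the last coordinate
P = np.diag([1.0, 1.0, 0.0])
v = rng.normal(size=3)
best = np.linalg.norm(P @ v - v)   # distance from v to its projection

# No other sampled u in U comes closer to v than P_U v does
others = [np.linalg.norm(np.append(rng.normal(size=2), 0.0) - v)
          for _ in range(1000)]

op_norm = np.linalg.norm(P, 2)     # spectral norm, equals 1 since U != {0}
```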

Conversely, every continuous linear projection $P \neq 0$ that is self-adjoint or normal or positive or of norm one is an orthogonal projection onto its image space $P(V)$.

## Applications

Orthogonal projections have a wide variety of applications, only a few of which are highlighted here:

• Geometry
• Linear algebra
• Functional analysis
• Statistics and probability theory

2. An equivalent condition is $\langle P_U(v), u \rangle = \langle v, u \rangle$ for all $u \in U$.