Orthogonal projection
An orthogonal projection (from Greek ὀρθός orthós "straight", γωνία gōnía "angle", and Latin prōicere, PPP prōiectum, "to throw forward"), also called orthogonal or vertical projection, is a mapping that is used in many areas of mathematics. In geometry, an orthogonal projection is the mapping of a point onto a straight line or a plane such that the connecting line between the point and its image forms a right angle with this line or plane. Of all points on the line or plane, the image then has the shortest distance to the starting point. An orthogonal projection is thus a special case of a parallel projection in which the projection direction coincides with the normal direction of the line or plane.
In linear algebra, this concept is extended to higher-dimensional vector spaces over the real or complex numbers and to more general notions of angle and distance. An orthogonal projection is then the projection of a vector onto a subspace such that the difference between the image vector and the original vector lies in the orthogonal complement of the subspace. In functional analysis, the notion is taken further still, to infinite-dimensional scalar product spaces, and is applied in particular to functions. The projection theorem then guarantees the existence and uniqueness of such orthogonal projections.
Orthogonal projections have a wide range of applications within mathematics, for example in descriptive geometry, the Gram-Schmidt orthogonalization method, the method of least squares, the method of conjugate gradients, Fourier analysis, and best approximation. They also have applications in cartography, architecture, computer graphics, and physics, among others.
Descriptive geometry
In descriptive geometry and technical drawing, projections are used to produce two-dimensional images of three-dimensional geometric bodies. In addition to central projection, parallel projections are often used here. A parallel projection is a mapping that maps points of three-dimensional space onto points of a given image plane, the projection rays being parallel to one another. If the projection rays hit the projection plane at a right angle, one speaks of an orthogonal projection.
If, instead of one image plane, three projection planes are used that are perpendicular to one another, it is called a three-view projection (Dreitafelprojektion) or normal projection. Usually the projection planes are parallel to the axes of the (Cartesian) coordinate system used. If a point in space has the coordinates $(x, y, z)$, the orthogonal projections of the point onto the three coordinate planes are
- $(x, y, 0)$ (projection onto the xy-plane)
- $(x, 0, z)$ (projection onto the xz-plane)
- $(0, y, z)$ (projection onto the yz-plane)
If a projection plane runs parallel to two of the coordinate axes but does not pass through the origin of the coordinate system, the projected point is obtained by replacing the zeroed coordinate value with the coordinate at which the plane intersects the third coordinate axis. In orthogonal axonometry, for example isometry or dimetry, the object to be imaged is rotated in a specific way before the projection.
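The coordinate-plane projections above can be sketched in a few lines of Python (a minimal illustration: each projection simply zeroes the coordinate perpendicular to the respective plane):

```python
# Orthogonal projections of a point (x, y, z) onto the three coordinate planes:
# each one zeroes the coordinate perpendicular to that plane.

def project_xy(p):
    return (p[0], p[1], 0.0)

def project_xz(p):
    return (p[0], 0.0, p[2])

def project_yz(p):
    return (0.0, p[1], p[2])

print(project_xy((1.0, 2.0, 3.0)))  # the point seen "from above"
```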
Analytical geometry
Analytic geometry deals with the calculation and the mathematical properties of orthogonal projections in two- and three-dimensional space, especially in the case that the projection plane is not parallel to the coordinate axes.
Projection onto a straight line
Definition
In the Euclidean plane, an orthogonal projection is the mapping of a point $P$ onto a straight line $g$ such that the connecting line between the point and its image $P'$ forms a right angle with the line. An orthogonal projection must therefore fulfill the two conditions
- $P' \in g$ (projection)
- $\overline{P P'} \perp g$ (orthogonality)
The line segment $\overline{P P'}$ is called the perpendicular from the point $P$ to the line $g$, and the projected point $P'$ is called the foot of the perpendicular. The construction of the perpendicular with compass and straightedge is a standard task of Euclidean geometry; one speaks of dropping a perpendicular.
Derivation
In analytic geometry, points in the Cartesian coordinate system are described by position vectors, and straight lines are typically described by a line equation in parametric form $g \colon \vec x = \vec r + t \, \vec u$, where $\vec r$ is the position vector of a point on the line, $\vec u$ is the direction vector of the line, and $t$ is a real parameter. Two vectors $\vec v$ and $\vec w$ form a right angle exactly when their scalar product $\vec v \cdot \vec w = 0$. The orthogonal projection $\vec p\,'$ of a point $\vec p$ onto the line must satisfy the two conditions
- $\vec p\,' = \vec r + t \, \vec u$
- $(\vec p - \vec p\,') \cdot \vec u = 0$
for some $t \in \mathbb{R}$.
If the first equation is inserted into the second, one obtains
- $(\vec p - \vec r - t \, \vec u) \cdot \vec u = 0$,
which, solved for $t$, yields $t = \frac{(\vec p - \vec r) \cdot \vec u}{\vec u \cdot \vec u}$ and thus
- $\vec p\,' = \vec r + \frac{(\vec p - \vec r) \cdot \vec u}{\vec u \cdot \vec u} \, \vec u$.
If the line is an origin line through the zero point, then $\vec r = \vec 0$ holds and the formula simplifies to
- $\vec p\,' = \frac{\vec p \cdot \vec u}{\vec u \cdot \vec u} \, \vec u$.
If, in addition, the direction vector $\vec u$ of the line is a unit vector, that is, if $\vec u \cdot \vec u = 1$ holds, the result is the simpler representation
- $\vec p\,' = (\vec p \cdot \vec u) \, \vec u$.
The factor $\vec p \cdot \vec u$ then indicates how far the projected point on the line is from the zero point. Similarly, a point in Euclidean space can also be projected orthogonally onto a straight line in space; three components are then used instead of two.
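The projection onto an origin line can be sketched directly from the formula $\vec p\,' = \frac{\vec p \cdot \vec u}{\vec u \cdot \vec u} \, \vec u$ (a minimal Python illustration; the point and direction values are chosen arbitrarily):

```python
# Orthogonal projection of a point p onto the line through the origin with
# direction u, using p' = (p.u / u.u) * u. Works for any dimension.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_onto_origin_line(p, u):
    """Orthogonally project point p onto the origin line with direction u."""
    s = dot(p, u) / dot(u, u)   # scalar parameter along u
    return [s * x for x in u]

p_proj = project_onto_origin_line([3.0, 1.0], [1.0, 1.0])
# the difference vector p - p' is orthogonal to the direction u
diff = [3.0 - p_proj[0], 1.0 - p_proj[1]]
assert abs(dot(diff, [1.0, 1.0])) < 1e-12
```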
Examples
For example, the orthogonal projection of the point with position vector $\vec p = (3, 1)$ onto the origin line with direction $\vec u = (1, 1)$ in the Euclidean plane is
- $\vec p\,' = \frac{\vec p \cdot \vec u}{\vec u \cdot \vec u} \, \vec u = \frac{4}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$.
The orthogonal projection of the point with $\vec p = (1, 2, 3)$ onto the origin line with direction $\vec u = (1, 1, 1)$ in Euclidean space is correspondingly
- $\vec p\,' = \frac{6}{3} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}$.
Properties
If the point to be projected already lies on the line, then there is a number $t$ with $\vec p = \vec r + t \, \vec u$, and the orthogonal projection
- $\vec p\,' = \vec r + \frac{(\vec p - \vec r) \cdot \vec u}{\vec u \cdot \vec u} \, \vec u = \vec r + t \, \vec u = \vec p$
does not change the point. Otherwise the orthogonal projection minimizes the distance between the starting point and all points of the line, since for the square of this distance, by the Pythagorean theorem,
- $| \vec p - (\vec r + s \, \vec u) |^2 = | \vec p\,' - (\vec r + s \, \vec u) |^2 + | \vec p - \vec p\,' |^2 \ge | \vec p - \vec p\,' |^2$
holds for all numbers $s$. The minimum is attained uniquely at the orthogonally projected point, since the first summand is zero exactly there. If the line is an origin line and the vectors $\vec p$ and $\vec u$ form a right angle, the projected point is the zero point.
Calculation
The orthogonal projection of a point $\vec p$ onto a straight line $g \colon \vec x = \vec r + t \, \vec u$ that is not a line through the origin is given by
- $\vec p\,' = \vec r + \frac{(\vec p - \vec r) \cdot \vec u}{\vec u \cdot \vec u} \, \vec u$,
which is obtained by inserting the general line equation into the orthogonality condition and solving for the free parameter $t$. The special cases above are recovered from the general case by shifting the support vector $\vec r$ of the line to the zero point and by normalizing the direction vector $\vec u$, that is, dividing it by its magnitude $| \vec u |$. In the example in the figure above, the support vector and direction vector of the line can be read off, and the projected point follows from this formula.
Alternatively, in the two-dimensional case an orthogonal projection can also be calculated by determining the point of intersection of the given line with its perpendicular. If $\vec n$ is a normal vector of the given line, then the two conditions
- $\vec p\,' = \vec p + t \, \vec n$
- $(\vec p\,' - \vec r) \cdot \vec n = 0$
yield, by substituting the first equation into the second and solving for the free parameter $t$, the orthogonal projection
- $\vec p\,' = \vec p - \frac{(\vec p - \vec r) \cdot \vec n}{\vec n \cdot \vec n} \, \vec n$.
A normal vector of the line can be determined by interchanging the two components of its direction vector and reversing the sign of one of them; for $\vec u = (u_1, u_2)$, for example, $\vec n = (-u_2, u_1)$. Since a straight line in three-dimensional space has no distinguished normal direction, this simple approach is only possible in two dimensions.
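The two calculation routes above can be compared numerically (a sketch with illustrative values: a line with support vector $(0, 1)$ and direction $(1, 1)$, projected point $(2, 5)$):

```python
# Two equivalent ways to project a point p onto a 2D line x = r + t*u:
# via the direction vector u and via a normal vector n obtained by
# swapping the components of u and negating one of them.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_via_direction(p, r, u):
    t = dot([p[i] - r[i] for i in range(len(p))], u) / dot(u, u)
    return [r[i] + t * u[i] for i in range(len(p))]

def project_via_normal(p, r, n):
    t = dot([p[i] - r[i] for i in range(len(p))], n) / dot(n, n)
    return [p[i] - t * n[i] for i in range(len(p))]

p, r, u = [2.0, 5.0], [0.0, 1.0], [1.0, 1.0]
n = [-u[1], u[0]]                      # normal vector: swap components, negate one
a = project_via_direction(p, r, u)
b = project_via_normal(p, r, n)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))   # both routes agree
```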
Projection onto a plane
Definition
In three-dimensional space, a point $P$ can also be projected orthogonally onto a plane $E$. An orthogonal projection must then fulfill the two conditions
- $P' \in E$ (projection)
- $\overline{P P'} \perp E$ (orthogonality)
Here, too, one speaks of the perpendicular and the foot of the perpendicular. The orthogonality implies that the perpendicular is perpendicular to all straight lines of the plane through the foot point.
Derivation
A point in Euclidean space is again given by a position vector $\vec p$, and a plane is typically described in parametric form $E \colon \vec x = \vec r + s \, \vec u + t \, \vec v$, where $\vec r$ is the position vector of a point of the plane, $\vec u$ and $\vec v$ are the span vectors of the plane, and $s$ and $t$ are real parameters. Due to the linearity of the scalar product, it suffices to check orthogonality with respect to the two span vectors instead of with respect to all vectors of the plane. If the plane is an origin plane, that is, $\vec r = \vec 0$, the orthogonal projection $\vec p\,'$ of the point onto the plane must fulfill the following three conditions:
- $\vec p\,' = s \, \vec u + t \, \vec v$
- $(\vec p - \vec p\,') \cdot \vec u = 0$
- $(\vec p - \vec p\,') \cdot \vec v = 0$
Substituting the first equation into the other two yields, with
- $(\vec u \cdot \vec u) \, s + (\vec v \cdot \vec u) \, t = \vec p \cdot \vec u$
- $(\vec u \cdot \vec v) \, s + (\vec v \cdot \vec v) \, t = \vec p \cdot \vec v$
a linear system with two equations and the two unknowns $s$ and $t$. If the span vectors are orthogonal to one another, that is, $\vec u \cdot \vec v = 0$, this system decouples into two independent equations and its solution can be given directly. The orthogonal projection of the point onto the plane is then given by
- $\vec p\,' = \frac{\vec p \cdot \vec u}{\vec u \cdot \vec u} \, \vec u + \frac{\vec p \cdot \vec v}{\vec v \cdot \vec v} \, \vec v$.
If the span vectors are even orthonormal, that is, if additionally $\vec u \cdot \vec u = \vec v \cdot \vec v = 1$ holds, one has the simpler representation
- $\vec p\,' = (\vec p \cdot \vec u) \, \vec u + (\vec p \cdot \vec v) \, \vec v$.
The orthogonal projection of a point onto a plane is thus obtained by determining the orthogonal projections of the point onto the two straight lines spanned by the span vectors and summing the results (see figure).
Example
For example, the orthogonal projection of the point $\vec p = (1, 2, 3)$ onto the origin plane spanned by the mutually orthogonal vectors $\vec u = (1, 1, 0)$ and $\vec v = (-1, 1, 0)$ is
- $\vec p\,' = \frac{\vec p \cdot \vec u}{\vec u \cdot \vec u} \, \vec u + \frac{\vec p \cdot \vec v}{\vec v \cdot \vec v} \, \vec v = \frac{3}{2} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \frac{1}{2} \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}$.
Properties
If the point to be projected already lies in the plane, there are numbers $s$ and $t$ with $\vec p = s \, \vec u + t \, \vec v$, and the orthogonal projection
- $\vec p\,' = \frac{\vec p \cdot \vec u}{\vec u \cdot \vec u} \, \vec u + \frac{\vec p \cdot \vec v}{\vec v \cdot \vec v} \, \vec v = s \, \vec u + t \, \vec v = \vec p$
does not change the point. Otherwise the orthogonally projected point minimizes the distance between the starting point and all points of the plane, since for the square of this distance, by the Pythagorean theorem,
- $| \vec p - \vec x |^2 = | \vec p\,' - \vec x |^2 + | \vec p - \vec p\,' |^2 \ge | \vec p - \vec p\,' |^2$
holds for all points $\vec x = s \, \vec u + t \, \vec v$ of the plane. The minimum is attained uniquely at the orthogonally projected point. If $\vec p$ forms a right angle both with $\vec u$ and with $\vec v$, then the projected point is the zero point.
Calculation
If a plane does not pass through the origin, it can be moved into the origin by a translation. If its span vectors are not orthogonal, they can be orthogonalized using the Gram-Schmidt orthogonalization method. For this purpose one determines, for example, a vector $\vec w$ orthogonal to $\vec u$ as the connecting vector from $\vec v$ to the orthogonal projection of $\vec v$ onto the line in direction $\vec u$, that is,
- $\vec w = \vec v - \frac{\vec v \cdot \vec u}{\vec u \cdot \vec u} \, \vec u$,
and the general case of an orthogonal projection of a point onto a plane $E \colon \vec x = \vec r + s \, \vec u + t \, \vec v$ is thus given by
- $\vec p\,' = \vec r + \frac{(\vec p - \vec r) \cdot \vec u}{\vec u \cdot \vec u} \, \vec u + \frac{(\vec p - \vec r) \cdot \vec w}{\vec w \cdot \vec w} \, \vec w$.
Alternatively, an orthogonal projection can also be calculated by determining the intersection of the perpendicular line with the plane. A normal vector $\vec n$ of the plane can, if the plane is not given in normal form, be calculated via the cross product of the (not necessarily orthogonal, but non-collinear) span vectors as $\vec n = \vec u \times \vec v$. As in the two-dimensional case, one then obtains as the orthogonal projection
- $\vec p\,' = \vec p - \frac{(\vec p - \vec r) \cdot \vec n}{\vec n \cdot \vec n} \, \vec n$.
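The normal-vector route can be sketched as follows (illustrative values: the plane $z = 1$ given by support vector $(0, 0, 1)$ and span vectors along the x- and y-axes):

```python
# Projecting a point onto a plane x = r + s*u + t*v by dropping the
# component along the normal n = u x v.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def project_onto_plane(p, r, u, v):
    n = cross(u, v)                                   # plane normal
    t = dot([p[i] - r[i] for i in range(3)], n) / dot(n, n)
    return [p[i] - t * n[i] for i in range(3)]

# project (1, 2, 3) onto the plane z = 1
q = project_onto_plane([1.0, 2.0, 3.0], [0.0, 0.0, 1.0],
                       [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```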
Linear Algebra
In linear algebra, the concept of orthogonal projection is generalized to general vector spaces of finite dimension over the field of real or complex numbers, as well as to general scalar products and the notions of orthogonality derived from them. By definition, two vectors $v$ and $w$ are orthogonal if and only if their scalar product $\langle v, w \rangle = 0$.
Algebraic representation
Definition
An orthogonal projection onto a subspace $U$ of a vector space $V$ is a linear mapping $P \colon V \to V$ that, for all vectors $v \in V$, has the two properties
- $P(v) \in U$ (projection)
- $\langle v - P(v), u \rangle = 0$ for all $u \in U$ (orthogonality)
The difference vector $v - P(v)$ thus lies in the orthogonal complement $U^\perp$ of $U$. The orthogonal complement is itself a subspace, consisting of those vectors in $V$ that are orthogonal to all vectors in $U$.
Representation
If $\{ u_1, \dotsc, u_k \}$ is a basis of the subspace $U$ with dimension $k$, then each vector $P(v)$ has a unique representation as a linear combination $P(v) = c_1 u_1 + \dotsb + c_k u_k$. Due to the sesquilinearity of the scalar product, it suffices to check orthogonality only with respect to the basis vectors instead of with respect to all vectors of the subspace. An orthogonal projection must therefore fulfill the conditions
- $\langle v - P(v), u_i \rangle = 0$ for $i = 1, \dotsc, k$.
Substituting the linear combination into these equations yields, with
- $\sum_{j=1}^k c_j \, \langle u_j, u_i \rangle = \langle v, u_i \rangle$ for $i = 1, \dotsc, k$,
a linear system with $k$ equations and the $k$ unknowns $c_1, \dotsc, c_k$. The underlying Gram matrix is regular due to the linear independence of the basis vectors, and this system can therefore be solved uniquely. If now $\{ u_1, \dotsc, u_k \}$ is an orthogonal basis of $U$, that is, $\langle u_i, u_j \rangle = 0$ for $i \neq j$, then the associated Gram matrix is a diagonal matrix and the system has a directly specifiable solution. The orthogonal projection of the vector $v$ onto the subspace $U$ is then given by
- $P(v) = \sum_{i=1}^k \frac{\langle v, u_i \rangle}{\langle u_i, u_i \rangle} \, u_i$.
If $\{ u_1, \dotsc, u_k \}$ even forms an orthonormal basis, that is, $\langle u_i, u_j \rangle = \delta_{ij}$ with the Kronecker delta $\delta_{ij}$, then the orthogonal projection has the simpler representation
- $P(v) = \sum_{i=1}^k \langle v, u_i \rangle \, u_i$.
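For a non-orthogonal basis, the Gram system above can be solved directly; a sketch for a two-dimensional subspace of $\mathbb{R}^3$ (basis vectors chosen for illustration):

```python
# Orthogonal projection onto span{u1, u2} in R^3 with a non-orthogonal
# basis, by solving the 2x2 Gram system
#   [ <u1,u1> <u2,u1> ] [c1]   [ <v,u1> ]
#   [ <u1,u2> <u2,u2> ] [c2] = [ <v,u2> ]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_span2(v, u1, u2):
    g11, g12, g22 = dot(u1, u1), dot(u1, u2), dot(u2, u2)   # Gram matrix
    b1, b2 = dot(v, u1), dot(v, u2)                          # right-hand side
    det = g11 * g22 - g12 * g12          # nonzero for independent u1, u2
    c1 = (b1 * g22 - b2 * g12) / det     # Cramer's rule
    c2 = (g11 * b2 - g12 * b1) / det
    return [c1 * u1[i] + c2 * u2[i] for i in range(3)]

# the xy-plane via the non-orthogonal basis (1,0,0), (1,1,0):
# the projection just removes the z component
q = project_span2([4.0, 5.0, 6.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0])
```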
Examples
If the coordinate space $\mathbb{R}^n$ is chosen as the vector space and the standard scalar product as the scalar product, then a subspace is a linear manifold (such as a straight line, plane, or hyperplane) through the zero point, and the orthogonal projections of the preceding geometry section are just special cases:
- Projection onto a line through the origin in the plane: $P(v) = \frac{\langle v, u \rangle}{\langle u, u \rangle} \, u$ with $u \in \mathbb{R}^2$
- Projection onto a line through the origin in space: $P(v) = \frac{\langle v, u \rangle}{\langle u, u \rangle} \, u$ with $u \in \mathbb{R}^3$
- Projection onto an origin plane in space: $P(v) = \frac{\langle v, u_1 \rangle}{\langle u_1, u_1 \rangle} \, u_1 + \frac{\langle v, u_2 \rangle}{\langle u_2, u_2 \rangle} \, u_2$ with orthogonal $u_1, u_2 \in \mathbb{R}^3$
The case $U = \{ 0 \}$ corresponds in every dimension to the mapping onto the zero point, and the case $U = V$ always leaves the vector unchanged, since the orthogonal projection is then the identity mapping.
Properties
An orthogonal projection is a projection, that is, an idempotent linear mapping of the vector space into itself (an endomorphism). If the vector $v$ to be projected is already an element of the subspace $U$, then there are scalars $c_1, \dotsc, c_k$ such that $v = c_1 u_1 + \dotsb + c_k u_k$, and the orthogonal projection
- $P(v) = \sum_{i=1}^k \frac{\langle v, u_i \rangle}{\langle u_i, u_i \rangle} \, u_i = v$
does not change the vector, from which the idempotency $P \circ P = P$ follows. The linearity of the mapping follows directly from the sesquilinearity of the scalar product. In addition, the self-adjointness
- $\langle P(v), w \rangle = \langle v, P(w) \rangle$
holds for all vectors $v, w \in V$. The orthogonally projected vector minimizes the distance between the original vector and all vectors of the subspace with respect to the norm induced by the scalar product, because, with the Pythagorean theorem for scalar product spaces,
- $\| v - u \|^2 = \| P(v) - u \|^2 + \| v - P(v) \|^2 \ge \| v - P(v) \|^2$
holds for all $u \in U$. The minimum is attained uniquely at the orthogonally projected vector. If the vector lies in the orthogonal complement of the subspace, then the projected vector is the zero vector.
General case
If the basis of the subspace $U$ is not orthogonal, it can be orthogonalized with the Gram-Schmidt orthogonalization method, and an orthogonal basis of $U$ is thus obtained. Furthermore, a vector can also be projected onto an affine subspace $r + U$ with $r \in V$. The general case of an orthogonal projection of a vector $v$ onto an affine subspace with orthogonal basis $\{ u_1, \dotsc, u_k \}$ of $U$ is then
- $P(v) = r + \sum_{i=1}^k \frac{\langle v - r, u_i \rangle}{\langle u_i, u_i \rangle} \, u_i$.
Complementary representation
If $\{ w_1, \dotsc, w_{n-k} \}$ is an orthogonal complementary basis of $U$, that is, an orthogonal basis of the orthogonal complement $U^\perp$, then, owing to
- $P_U(v) + P_{U^\perp}(v) = v$,
one obtains the complementary representation of an orthogonal projection onto an affine subspace as
- $P(v) = v - \sum_{i=1}^{n-k} \frac{\langle v - r, w_i \rangle}{\langle w_i, w_i \rangle} \, w_i$.
Matrix representation
Coordinates
If one chooses an orthonormal basis $\{ e_1, \dotsc, e_n \}$ with respect to the scalar product for the vector space $V$, then every vector $v \in V$ can be represented as a coordinate vector $x$ via
- $x = ( \langle v, e_1 \rangle, \dotsc, \langle v, e_n \rangle )^T$ with $v = \sum_{i=1}^n \langle v, e_i \rangle \, e_i$.
The coordinates $\langle v, e_i \rangle$ are exactly the lengths of the orthogonal projections of the vector onto the basis vectors. In coordinate representation, the scalar product of two vectors is then the standard scalar product $\langle v, w \rangle = y^H x$ of the associated coordinate vectors $x$ and $y$, where $y^H$ is the adjoint vector (in the real case the transposed vector $y^T$) to $y$.
Representation
If now $x_1, \dotsc, x_k$ are the coordinate vectors of an orthogonal basis of a subspace $U$ and $x$ is the coordinate vector of a vector to be projected, then the coordinate representation of an orthogonal projection is
- $P x = \sum_{i=1}^k \frac{x_i^H x}{x_i^H x_i} \, x_i$.
In coordinate representation, an orthogonal projection is thus simply given by a matrix-vector product with the mapping matrix
- $P = \sum_{i=1}^k \frac{x_i x_i^H}{x_i^H x_i}$.
If the coordinate vectors $x_1, \dotsc, x_k$ form an orthonormal basis of $U$, the orthogonal projection matrix has the simpler representation
- $P = \sum_{i=1}^k x_i x_i^H$.
Each summand $x_i x_i^H$ is the dyadic product of a coordinate vector with itself.
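The dyadic-product construction can be sketched as follows (an illustration with an orthonormal basis of the xy-plane in $\mathbb{R}^3$; the resulting matrix is checked for idempotency):

```python
# Building the projection matrix P = sum_i x_i x_i^T from an orthonormal
# basis of the subspace, and checking the idempotency P P = P.
import math

def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def projection_matrix(basis):
    n = len(basis[0])
    P = [[0.0] * n for _ in range(n)]
    for x in basis:                      # each summand is the dyadic product x x^T
        D = outer(x, x)
        P = [[P[i][j] + D[i][j] for j in range(n)] for i in range(n)]
    return P

s = 1.0 / math.sqrt(2.0)
P = projection_matrix([[s, s, 0.0], [-s, s, 0.0]])   # orthonormal basis of the xy-plane
P2 = matmul(P, P)
assert all(abs(P[i][j] - P2[i][j]) < 1e-12 for i in range(3) for j in range(3))
```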
Examples
For example, in the coordinate space $\mathbb{R}^2$ the orthogonal projection matrix onto the line through the origin with direction $x_1 = (1, 2)^T$ is given by
- $P = \frac{x_1 x_1^T}{x_1^T x_1} = \frac{1}{5} \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$.
The orthogonal projection matrix onto the origin plane in $\mathbb{R}^3$ that is spanned by the orthogonal vectors $x_1 = (1, 1, 0)^T$ and $x_2 = (-1, 1, 0)^T$ is correspondingly
- $P = \frac{1}{2} \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} + \frac{1}{2} \begin{pmatrix} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$.
Properties
An orthogonal projection matrix $P$ is idempotent, that is,
- $P^2 = P$.
Furthermore, it is self-adjoint (symmetric in the real case), since
- $P^H = P$
holds. For the rank and the trace of an orthogonal projection matrix,
- $\operatorname{rank}(P) = \operatorname{tr}(P) = k$,
because rank and trace coincide for idempotent matrices and the individual summand matrices each have rank one. The eigenvalues of an orthogonal projection matrix are $0$ and $1$, where the associated eigenspaces are precisely the subspace $U$ and its orthogonal complement $U^\perp$. The spectral norm of an orthogonal projection matrix is therefore equal to one, unless $U$ is the zero vector space.
General case
If the coordinate vectors $x_1, \dotsc, x_k$ form a basis, but not an orthogonal basis, of the subspace, they can either be orthogonalized to calculate an orthogonal projection, or a corresponding linear system of equations can be solved. If the basis vectors are combined column-wise into a matrix $A = ( x_1 \mid \dotsb \mid x_k )$, then this system has the form of the normal equations
- $A^H A \, c = A^H x$
with the coefficient vector $c = ( c_1, \dotsc, c_k )^T$. The matrix representation of the orthogonal projection is then given by
- $P = A \, ( A^H A )^{-1} A^H$.
This matrix is widely used in statistics (see projection matrix (statistics)). An orthogonal projection onto an affine subspace is then, in matrix representation, the affine mapping
- $x \mapsto P x + ( I - P ) \, r$
with the identity matrix $I$ and the coordinate vector $r$ of the support vector. Using homogeneous coordinates, every orthogonal projection can also be represented as a simple matrix-vector product.
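The normal equations connect orthogonal projection to least-squares fitting: projecting the data vector $y$ onto the column space of $A$ gives the best-fitting line. A sketch with illustrative data (for a two-column $A$ the system can be solved by hand):

```python
# The normal equations A^T A c = A^T y for a two-column matrix A, here
# used to fit the least-squares line y ~ c0 + c1*t; the vector A c is
# exactly the orthogonal projection of y onto the column space of A.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lstsq_line(t, y):
    a0 = [1.0] * len(t)                  # first column of A (intercept)
    a1 = list(t)                         # second column of A (slope)
    g00, g01, g11 = dot(a0, a0), dot(a0, a1), dot(a1, a1)   # A^T A
    b0, b1 = dot(a0, y), dot(a1, y)                          # A^T y
    det = g00 * g11 - g01 * g01
    c0 = (b0 * g11 - b1 * g01) / det     # Cramer's rule for the 2x2 system
    c1 = (g00 * b1 - g01 * b0) / det
    return c0, c1

# exactly collinear data, so the fit recovers y = 1 + 2*t
c0, c1 = lstsq_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```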
Complementary representation
An orthogonal projection onto an affine subspace has the complementary matrix representation
- $x \mapsto x - P^\perp ( x - r )$
with the orthogonal projection matrix onto the complementary space given by
- $P^\perp = I - P = I - A \, ( A^H A )^{-1} A^H$.
If the coordinate vectors $y_1, \dotsc, y_{n-k}$ form an orthogonal basis of the complementary space $U^\perp$, the complementary orthogonal projection matrix has the representation
- $P^\perp = \sum_{i=1}^{n-k} \frac{y_i y_i^H}{y_i^H y_i}$.
Functional analysis
In functional analysis , the concept of orthogonal projection is generalized to infinite-dimensional scalar product spaces over real or complex numbers and is particularly applied to function spaces .
Definition
If $V$ is a scalar product space and $U$ is a subspace of $V$ with orthogonal complement $U^\perp$, then an orthogonal projection is an operator $P \colon V \to V$ (also called an orthogonal projector) with the two properties
- $P \circ P = P$ (projection)
- $\operatorname{ran}(P) = U$ and $\ker(P) = U^\perp$ (orthogonality)
where $\operatorname{ran}(P)$ is the image and $\ker(P)$ the kernel of the operator. The complementary operator $I - P$ then has image $U^\perp$ and kernel $U$.
Existence and uniqueness
In order for such orthogonal projections to exist and be unique, however, the spaces under consideration must be restricted. If $V$ is a Hilbert space, that is, a complete scalar product space, and $U$ is a closed subspace of $V$, then the projection theorem guarantees the existence and uniqueness of orthogonal projections. For each vector $v \in V$ there are then unique vectors $u \in U$ and $w \in U^\perp$ such that this vector has the representation
- $v = u + w$.
Thus $U$ and $U^\perp$ form an orthogonal decomposition of $V$, that is, the entire space can be represented as the orthogonal sum $V = U \oplus U^\perp$. A finite-dimensional subspace is always closed, and the completeness of $V$ can then also be dispensed with.
Representation
Every Hilbert space has an orthonormal basis, which, however, cannot always be stated explicitly. If, however, $V$ is a separable Hilbert space, then such an orthonormal basis $\{ e_1, e_2, \dotsc \}$ is countable as a Schauder basis, so that each vector $v \in V$ can be developed into a series
- $v = \sum_{i=1}^\infty \langle v, e_i \rangle \, e_i$.
Such an orthonormal basis can always be obtained from a linearly independent subset of $V$ with the aid of the Gram-Schmidt orthogonalization method. If now $\{ u_1, u_2, \dotsc \}$ is a (likewise countable) orthonormal basis of a closed subspace $U$, then an orthogonal projection has the series representation
- $P(v) = \sum_{i=1}^\infty \langle v, u_i \rangle \, u_i$.
This representation can also be generalized to non-separable, that is, uncountably-dimensional Hilbert spaces. If $U$ is a closed subspace of a general Hilbert space and $\{ u_i \}_{i \in I}$ is an orthonormal basis of this subspace with an arbitrary index set $I$, then an orthogonal projection has the corresponding representation
- $P(v) = \sum_{i \in I} \langle v, u_i \rangle \, u_i$,
where only countably many terms of this sum are different from zero. These series are unconditionally convergent according to Bessel's inequality, and according to Parseval's equation each element of $U$ is actually mapped onto itself.
Example
Consider the space $L^2([-1, 1])$ of square-integrable real functions on the interval $[-1, 1]$ with the $L^2$ scalar product
- $\langle f, g \rangle = \int_{-1}^1 f(x) \, g(x) \, dx$.
For this space, the Legendre polynomials form a complete orthogonal system. We now look for the orthogonal projection of the exponential function $f(x) = e^x$ onto the subspace $U$ of the linear functions. For this subspace, the two monomials $1$ and $x$ form an orthogonal basis, which after normalization yields the orthonormal basis
- $u_1(x) = \tfrac{1}{\sqrt 2}, \quad u_2(x) = \sqrt{\tfrac{3}{2}} \, x$.
The orthogonal projection of $f$ onto the subspace of the linear functions is then given by
- $P(f)(x) = \langle f, u_1 \rangle \, u_1(x) + \langle f, u_2 \rangle \, u_2(x) = \frac{e - e^{-1}}{2} + \frac{3}{e} \, x \approx 1.1752 + 1.1036 \, x$.
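This projection can be reproduced numerically: the two coefficients are $\langle f, 1 \rangle / \| 1 \|^2 = \sinh(1)$ and $\langle f, x \rangle / \| x \|^2 = 3/e$, which a simple quadrature sketch confirms:

```python
# Numerically reproducing the L^2([-1,1]) projection of exp onto the
# linear functions via the coefficients <f,1>/||1||^2 and <f,x>/||x||^2.
import math

def simpson(f, a, b, n=1000):            # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# ||1||^2 = 2 and ||x||^2 = 2/3 on the interval [-1, 1]
c0 = simpson(lambda x: math.exp(x), -1.0, 1.0) / 2.0
c1 = simpson(lambda x: math.exp(x) * x, -1.0, 1.0) / (2.0 / 3.0)

assert abs(c0 - math.sinh(1.0)) < 1e-9   # constant coefficient sinh(1)
assert abs(c1 - 3.0 / math.e) < 1e-9     # slope coefficient 3/e
```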
Properties
If $V$ is a Hilbert space and $U$ a closed subspace of $V$, then $P$ is a continuous linear operator with the following properties:
- $P$ is a projection, that is, $P \circ P = P$.
- $P$ is self-adjoint, that is, $P^* = P$ with the adjoint operator $P^*$.
- $P$ is normal, that is, $P^* P = P P^*$.
- $P$ is positive, that is, $\langle P(v), v \rangle \ge 0$ for all $v \in V$.
- $P$ is a partial isometry, where the isometric part is the identity.
- $P$ is compact if and only if $U$ is finite-dimensional.
- $P(v)$ is the best approximation to $v$ in the scalar product norm, that is, $\| v - P(v) \| = \min_{u \in U} \| v - u \|$.
- $\| P \| = 1$ if $U \neq \{ 0 \}$, and $\| P \| = 0$ if $U = \{ 0 \}$ (in the operator norm).
Conversely, a continuous linear projection that is self-adjoint or normal or positive or of norm at most one is an orthogonal projection onto its image space.
Applications
Orthogonal projections have a wide variety of applications, only a few of which are highlighted here:
- Geometry
- in analytical geometry for distance calculations and reflections on planes
- in cartography for orthogonal floor plan projections and orthophotos
- in physics, when forces are broken down into their component parts
- in computer graphics when calculating reflections and shadows
- Linear Algebra
- in the construction of orthonormal bases with the Gram-Schmidt orthogonalization method
- in solving linear least squares problems with the method of least squares
- in the iterative solution of linear systems of equations with the conjugate gradient method and general Krylov subspace methods
- Functional analysis
- in approximation theory for the best approximation of functions
- in Fourier analysis and wavelet analysis of signals
- in Sobolev's theory for solving partial differential equations
- in quantum mechanics to describe quantum mechanical states using the Bra-Ket notation
- Statistics and Probability Theory
- in the principal component analysis of multivariate data sets
- with conditional expected values
Literature
- Hans Wilhelm Alt: Linear Functional Analysis: An Application-Oriented Introduction . 5th edition. Springer, 2008, ISBN 3-540-34186-2 .
- Albrecht Beutelspacher : Linear Algebra. An introduction to the science of vectors, maps, and matrices . 7th edition. Vieweg, 2009, ISBN 3-528-66508-4 .
- Gerd Fischer : Linear Algebra: An Introduction for New Students . 17th edition. Vieweg, 2009, ISBN 3-8348-0996-9 .
- Dirk Werner : Functional Analysis . 5th edition. Springer, 2005, ISBN 3-540-21381-3 .
References and comments
- ↑ In the following, the variant of the complex scalar product is used which is linear in the first and semilinear in the second argument.
- ↑ An equivalent condition is for all .
- ↑ ^{a} ^{b} Dirk Werner: Functional Analysis. 2005, pp. 220–221.
- ↑ Dirk Werner: Functional Analysis. 2005, p. 231.
- ↑ Dirk Werner: Functional Analysis. 2005, pp. 229–230.
- ↑ ^{a} ^{b} Dirk Werner: Functional Analysis. 2005, p. 236.
Web links
- VI Sobolev: Orthogonal projector . In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics . Springer-Verlag , Berlin 2002, ISBN 978-1-55608-010-4 (English, online ).
- Eric W. Weisstein : Orthogonal Projection . In: MathWorld (English).