# Orthogonality

Two line segments $[AB]$ and $[CD]$ are orthogonal if they form a right angle with each other. The term orthogonality is used in mathematics with different but related meanings.

In elementary geometry, two straight lines or planes are called orthogonal (or perpendicular) if they enclose a right angle, i.e. an angle of 90°.

In linear algebra, the term is extended to more general vector spaces: two vectors are called orthogonal to one another if their scalar product is zero.

This meaning is also carried over to mappings between vector spaces that leave the scalar product, and thus the orthogonality of two vectors, unchanged.

## Designations

The term orthogonal (Greek ὀρθός orthos "upright, right" and γωνία gonia "corner, angle") means "right-angled". Equivalent to right-angled is normal (Latin norma "measure", here meaning the right angle), although the word "normal" is also used in mathematics with other meanings. Perpendicular derives from the plumb line and originally meant only "orthogonal to the earth's surface". The same idea is expressed by vertical (Latin vertex "crown, highest point").

Two straight lines, planes or vectors $a$ and $b$ that are orthogonal, respectively not orthogonal, to one another are denoted by

$$a \perp b \quad\text{or}\quad a \not\perp b.$$

Based on the English term perpendicular, the orthogonality symbol is coded in HTML with `&perp;` and in LaTeX (within the mathematics environment) with `\perp`. In the Unicode character encoding standard, the symbol ⊥ has the code point U+22A5.

## Orthogonality in Geometry

### Elementary geometry

In elementary geometry, two straight lines or planes are called orthogonal if they form a right angle, i.e. enclose an angle of 90°. The following terms are used:

• A straight line is called orthogonal (normal) to a plane if its direction vector is a normal vector of the plane.
• A plane is called an orthogonal (normal plane) of another plane if its normal vector lies in that plane.
• A straight line or plane is called an orthogonal (normal) to a curve if it is orthogonal to the tangent or tangent plane at the point of intersection.

In an orthogonal polygon (for example a rectangle), any two adjacent sides form a right angle; in an orthogonal polyhedron (for example a cuboid), any two adjacent edges, and thus also any two adjacent faces, do the same.

### Analytical geometry

#### Vectors

The angle between two vectors $\vec{v}$ and $\vec{w}$ in the Cartesian coordinate system can be calculated using the scalar product

$$\vec{v} \cdot \vec{w} = |\vec{v}|\,|\vec{w}|\,\cos \sphericalangle(\vec{v}, \vec{w}).$$

Here $|\vec{v}|$ and $|\vec{w}|$ denote the lengths of the vectors, and $\cos \sphericalangle(\vec{v}, \vec{w})$ is the cosine of the angle enclosed by them. If the two vectors $\vec{v}$ and $\vec{w}$ form a right angle, then

$$\vec{v} \cdot \vec{w} = |\vec{v}|\,|\vec{w}|\,\cos 90^{\circ} = 0.$$

Two vectors are thus orthogonal to one another if their scalar product is zero. The zero vector is orthogonal to all vectors.
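This criterion is straightforward to check numerically. The following is a minimal sketch (the function names and example vectors are illustrative, not from the source):

```python
def dot(v, w):
    """Standard scalar product of two vectors given as lists of numbers."""
    return sum(vi * wi for vi, wi in zip(v, w))

def is_orthogonal(v, w, tol=1e-12):
    """Two vectors are orthogonal iff their scalar product is zero
    (a small tolerance absorbs floating-point rounding)."""
    return abs(dot(v, w)) <= tol

# (2, 1) and (1, -2) enclose a right angle; the zero vector is
# orthogonal to every vector.
print(is_orthogonal([2, 1], [1, -2]))   # True
print(is_orthogonal([0, 0], [3, 5]))    # True
print(is_orthogonal([1, 1], [1, 2]))    # False
```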

A set of vectors $\{\vec{v}_j : j = 1, \dots, N\}$ is said to be pairwise orthogonal if, for all $i \neq j$, the vectors $\vec{v}_i$ and $\vec{v}_j$ are orthogonal to each other.

#### Straight lines and planes

Two straight lines in the plane are orthogonal if their direction vectors are orthogonal. In space or in higher dimensions, no point of intersection is required: two straight lines can also be orthogonal if they are skew to one another. A straight line and a plane in space are orthogonal if the direction vector of the line is orthogonal to every vector in the plane.

Two planes in Euclidean space are orthogonal if there is a straight line that is contained in one of the two planes and is orthogonal to the second.

If two straight lines in the Euclidean plane are given by the equations

$$y = m_1 x + b_1 \quad\text{and}\quad y = m_2 x + b_2,$$

then they are orthogonal if and only if $m_1 m_2 = -1$, or equivalently $m_1 = -\tfrac{1}{m_2}$, because precisely then their direction vectors are orthogonal:

$$\begin{pmatrix} 1 \\ m_1 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ m_2 \end{pmatrix} = 1 + m_1 m_2 = 0.$$
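The slope criterion can be sketched in a few lines of Python (the function names and the example lines are illustrative; vertical lines, which have no slope, are excluded):

```python
def slopes_orthogonal(m1, m2, tol=1e-12):
    """Lines y = m1*x + b1 and y = m2*x + b2 are orthogonal
    iff m1 * m2 == -1 (neither line may be vertical)."""
    return abs(m1 * m2 + 1) <= tol

def direction_dot(m1, m2):
    """Scalar product of the direction vectors (1, m1) and (1, m2)."""
    return 1 * 1 + m1 * m2

# y = 2x + 3 and y = -x/2 - 1 are orthogonal:
print(slopes_orthogonal(2, -0.5))   # True
print(direction_dot(2, -0.5))       # 0.0
```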

### Synthetic geometry

In synthetic geometry, orthogonality can be introduced through an axiomatic description of an orthogonality relation between straight lines on certain affine incidence planes.

## Orthogonality in linear algebra

### Orthogonal and orthonormal vectors

In linear algebra, the concept of Euclidean space is extended to include multidimensional vector spaces over the real or complex numbers on which a scalar product is defined. The scalar product of two vectors $v$ and $w$ is a mapping that must fulfill certain axioms and is typically written in the form $\langle v, w \rangle$. In general, two vectors $v$ and $w$ of such a scalar product space are considered orthogonal to one another if the scalar product of the two vectors is zero, that is, if

$$\langle v, w \rangle = 0$$

holds. For example, in the space $\mathbb{R}^2$ the two vectors $v = (2,1)^T$ and $w = (1,-2)^T$ are orthogonal with respect to the standard scalar product, since

$$\langle v, w \rangle = 2 \cdot 1 + 1 \cdot (-2) = 2 - 2 = 0.$$

A set of vectors is called an orthogonal system if all vectors contained in it are pairwise orthogonal. If, in addition, all of these vectors have norm one, the set is called orthonormal or an orthonormal system. A set of orthogonal vectors that are all different from the zero vector is always linearly independent and therefore forms a basis of the linear span of this set. A basis of a vector space consisting of orthonormal vectors is accordingly called an orthonormal basis. For every two vectors $v_i, v_j$ of an orthonormal basis,

$$\langle v_i, v_j \rangle = \delta_{ij},$$

where $\delta_{ij}$ denotes the Kronecker delta. Finite-dimensional scalar product spaces and Hilbert spaces always have an orthonormal basis. For finite-dimensional vector spaces and for separable Hilbert spaces, such a basis can be found with the help of the Gram–Schmidt orthonormalization method. An example of an orthonormal basis of three-dimensional space $\mathbb{R}^3$ is the standard basis (or canonical basis) $\{e_1, e_2, e_3\} = \{(1,0,0), (0,1,0), (0,0,1)\}$.

### Orthogonal functions

The concept of a vector space can be generalized so that certain function spaces are also treated as vector spaces, with functions viewed as vectors. Two functions $f$ and $g$ of a scalar product space are then called orthogonal to one another if

$$\langle f, g \rangle = 0$$

holds. For example, for continuous real-valued functions on an interval $[a,b]$, the L² scalar product is defined by

$$\langle f, g \rangle = \int_a^b f(x)\,g(x)\,dx.$$

With respect to this scalar product, the two functions $f(x) = x$ and $g(x) = x^2$ are orthogonal to one another on the interval $[-1,1]$, because

$$\langle f, g \rangle = \int_{-1}^{1} x \cdot x^2\,dx = \int_{-1}^{1} x^3\,dx = 0.$$
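This integral can be checked numerically. The sketch below approximates the L² scalar product with a simple midpoint rule (the function name, step count, and tolerance are illustrative choices):

```python
def l2_inner(f, g, a, b, n=10000):
    """Approximate the L2 scalar product of f and g on [a, b]
    with a midpoint rule using n subintervals."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h)
               for k in range(n)) * h

# f(x) = x and g(x) = x^2 are orthogonal on [-1, 1]:
print(abs(l2_inner(lambda x: x, lambda x: x * x, -1.0, 1.0)) < 1e-9)  # True
```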

In this way, orthogonal polynomials and orthogonal bases can be determined in complete scalar product spaces, the so-called Hilbert spaces. However, many interesting spaces, such as the L² spaces, are infinite-dimensional; see Hilbert space basis. In quantum mechanics, the states of a system also form a vector space, and accordingly one speaks there of orthogonal states.
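The Gram–Schmidt orthonormalization mentioned earlier can be sketched for vectors in ℝⁿ as follows (a minimal version for illustration, without the numerical safeguards a production implementation would need):

```python
import math

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors (lists of floats)
    into an orthonormal system with the same linear span."""
    basis = []
    for v in vectors:
        # subtract the projections onto the basis vectors found so far
        w = list(v)
        for b in basis:
            coeff = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - coeff * bi for wi, bi in zip(w, b)]
        # normalize the remainder to length one
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

basis = gram_schmidt([[2.0, 1.0], [1.0, 1.0]])
# the result is orthonormal: <b1, b2> = 0 and |b1| = |b2| = 1
```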

### Orthogonal matrices

A square real matrix $A \in \mathbb{R}^{n \times n}$ is called an orthogonal matrix if it is compatible with the scalar product, that is, if

$$\langle Av, Aw \rangle = \langle v, w \rangle$$

holds for all vectors $v, w \in \mathbb{R}^n$. A matrix $A$ is orthogonal if and only if its columns (or rows), understood as vectors, are mutually orthonormal (not just orthogonal). This is equivalent to the condition $A^T A = I$, or $A^T = A^{-1}$. Orthogonal matrices describe rotations and reflections in the plane or in space. The set of all orthogonal matrices of size $n \times n$ forms the orthogonal group $\mathrm{O}(n)$. The counterpart for matrices with complex entries is called a unitary matrix.

### Orthogonal mappings

If $V$ and $W$ are two real scalar product spaces, then a mapping $f \colon V \to W$ is called orthogonal if

$$\langle f(v), f(w) \rangle = \langle v, w \rangle$$

holds for all vectors $v, w \in V$. An orthogonal mapping thus preserves the scalar product of two vectors and maps orthogonal vectors onto orthogonal vectors. A mapping between finite-dimensional scalar product spaces is orthogonal if and only if its matrix representation with respect to an orthonormal basis is an orthogonal matrix. Furthermore, every orthogonal mapping is an isometry and therefore also preserves lengths of and distances between vectors.

Orthogonal mappings are not to be confused with mappings that are orthogonal to one another, i.e. mappings that are themselves understood as vectors and whose scalar product is zero. Mappings between complex scalar product spaces that preserve the scalar product are called unitary mappings.
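A plane rotation is a typical orthogonal mapping. The sketch below checks that it leaves the scalar product unchanged (the rotation angle and test vectors are illustrative):

```python
import math

def rotate(v, angle):
    """Apply the 2x2 rotation matrix for the given angle to a vector (x, y).
    Rotation matrices are orthogonal: A^T A = I."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = v
    return (c * x - s * y, s * x + c * y)

def dot(v, w):
    return v[0] * w[0] + v[1] * w[1]

v, w = (2.0, 1.0), (1.0, -2.0)
a = math.pi / 5
# <Av, Aw> = <v, w>, so orthogonal vectors stay orthogonal
print(abs(dot(rotate(v, a), rotate(w, a)) - dot(v, w)) < 1e-12)  # True
```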

### Orthogonal projections

If $V$ is a finite-dimensional real or complex vector space with a scalar product, then for every subspace $U$ there is the projection along the orthogonal complement of $U$, called the orthogonal projection onto $U$. It is the uniquely determined linear mapping $P \colon V \to V$ with the property that for all $v \in V$

• $P(v) \in U$ and
• $\langle P(v), u \rangle = \langle v, u \rangle$ for all $u \in U$

hold. If $V$ is an infinite-dimensional Hilbert space, then this statement applies accordingly, by the projection theorem, to closed subspaces $U$. In this case $P$ can be chosen to be continuous.

## Orthogonality in normed spaces

In a scalar product space, $\langle v, w \rangle = 0$ is equivalent to $\|v\| \leq \|v + \lambda w\|$ for all scalars $\lambda$. This motivates the following definition: for $v, w$ from a normed space $(V, \|\cdot\|)$, let

$$v \perp w \;:\Leftrightarrow\; \|v\| \leq \|v + \lambda w\| \text{ for all } \lambda.$$

This orthogonality concept in normed spaces is much weaker than in scalar product spaces: in general it is neither symmetric nor additive, that is, $v \perp w$ does not in general imply $w \perp v$, and $v \perp w_1$ together with $v \perp w_2$ does not in general imply $v \perp (w_1 + w_2)$.

This fact leads to further concepts, since one is interested in those normed spaces in which orthogonality is additive. It turns out that these are exactly the smooth normed spaces.
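This norm-based notion (often called Birkhoff–James orthogonality) can be explored numerically by sampling λ. The sketch below uses the maximum norm on ℝ²; the λ grid and example vectors are illustrative, so this is only an approximate check, not a proof:

```python
def max_norm(v):
    """Maximum norm on R^n."""
    return max(abs(x) for x in v)

def bj_orthogonal(v, w, lams=None):
    """Approximate check of v ⊥ w in the sense ||v|| <= ||v + λw||,
    tested on a finite sample of λ values."""
    if lams is None:
        lams = [k / 100.0 for k in range(-300, 301)]
    nv = max_norm(v)
    return all(nv <= max_norm([vi + lam * wi for vi, wi in zip(v, w)]) + 1e-12
               for lam in lams)

# In the maximum norm this orthogonality is not symmetric:
v, w = [1.0, 0.5], [0.0, 1.0]
print(bj_orthogonal(v, w), bj_orthogonal(w, v))  # True False
```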

## Applications

Orthogonality is used in many applications because it makes calculations easier or more robust. Examples are: