In the mathematical field of linear algebra, a diagonalizable matrix is a square matrix that is similar to a diagonal matrix. It can be transformed into a diagonal matrix by a change of basis (i.e., by conjugation with an invertible matrix). The concept carries over to endomorphisms.
Definition
Diagonal matrix
A square matrix $D = (d_{ij})$ over a field $K$ whose entries $d_{ij} \in K$ with $i \neq j$ are all zero is called a diagonal matrix. One often writes

$$D = \operatorname{diag}(d_{11}, \dotsc, d_{nn}).$$
Diagonalizable matrix
A square $n$-dimensional matrix $A$ is called diagonalizable if there exists a diagonal matrix $D_A$ to which it is similar, that is, if there exists an invertible matrix $S$ such that $D_A = S^{-1}AS$, or equivalently $SD_A = AS$.
An endomorphism $f$ of a finite-dimensional vector space $V$ is called diagonalizable if there is a basis $B$ of $V$ with respect to which the mapping matrix $M_B(f)$ is a diagonal matrix.
Unitarily diagonalizable matrix
A matrix $A \in \mathbb{C}^{n \times n}$ is unitarily diagonalizable if and only if there exists a unitary transformation matrix $S \in \mathbb{C}^{n \times n}$ such that $S^{H}AS = S^{-1}AS$ is a diagonal matrix, where $S^{H}$ is the adjoint (conjugate transpose) of $S$.
For a real-valued matrix $A$, unitary diagonalizability follows if there exists an orthogonal transformation matrix $S \in \mathbb{R}^{n \times n}$ such that $S^{T}AS = S^{-1}AS$ is a diagonal matrix, where $S^{T}$ is the transpose of $S$.
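For real symmetric matrices, this orthogonal diagonalization is available directly in NumPy via `np.linalg.eigh`, which returns the eigenvalues together with an orthogonal matrix of eigenvectors. A minimal sketch (the example matrix is chosen arbitrarily for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, hence orthogonally diagonalizable

eigenvalues, S = np.linalg.eigh(A)  # columns of S form an orthonormal eigenbasis

# S^T A S is diagonal, and S^T = S^{-1} because S is orthogonal
D = S.T @ A @ S
print(np.allclose(D, np.diag(eigenvalues)))          # True
print(np.allclose(S.T, np.linalg.inv(S)))            # True
```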
An endomorphism $f$ of a finite-dimensional Prehilbert space $V$ is unitarily diagonalizable if and only if there is an orthonormal basis $B$ of $V$ such that the mapping matrix $M_B(f)$ is a diagonal matrix.
Further characterizations of diagonalizability
Let $A$ be an $n$-dimensional square matrix with entries from a field $K$. Each of the following six conditions holds if and only if $A$ is diagonalizable.
- The minimal polynomial $m_A(\lambda)$ splits completely into pairwise distinct linear factors: $m_A(\lambda) = \pm(\lambda - \lambda_1) \cdot \dots \cdot (\lambda - \lambda_k)$ with pairwise distinct $\lambda_i \in K$.
- The characteristic polynomial $\chi_A(\lambda)$ splits completely into linear factors, and for each eigenvalue $\lambda_i \in K$ the geometric multiplicity equals the algebraic multiplicity.
- There is a basis of $K^n$ that consists of eigenvectors of $A$.
- The sum of the dimensions of the eigenspaces equals $n$: $\sum_{\lambda \in \sigma(A)} \dim(E_{\lambda}(A)) = n$, where $\sigma(A)$ denotes the spectrum of $A$.
- $K^n$ is the direct sum of the eigenspaces: $K^n = \bigoplus_{\lambda \in \sigma(A)} E_{\lambda}(A)$.
- All Jordan blocks of the Jordan normal form $J_A$ have dimension 1.
If $S$ and $D_A$ with the desired properties have been found, then the diagonal entries $\lambda_i$ of $D_A$ are eigenvalues of $A$: for the unit vectors $e_i$ we have $ASe_i = SD_Ae_i = S\lambda_ie_i = \lambda_iSe_i$, so each $Se_i$ is an eigenvector of $A$ for the eigenvalue $\lambda_i$.
Since $S$ is required to be invertible, the family $(Se_1, \ldots, Se_n)$ is also linearly independent.
In summary, this yields the necessary condition that an $n$-dimensional diagonalizable matrix must have $n$ linearly independent eigenvectors: the space on which it acts has a basis of eigenvectors of the matrix. This condition is also sufficient, because from $n$ linearly independent eigenvectors of $A$ together with their associated eigenvalues, suitable $D_A$ and $S$ can be constructed quite directly.
The problem is thus reduced to finding $n$ linearly independent eigenvectors of $A$.
A necessary but not sufficient condition for diagonalizability is that the characteristic polynomial $\chi_A$ splits completely into linear factors: for example, $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ is not diagonalizable, although $\chi_A(X) = X^2$. A sufficient but not necessary condition for diagonalizability is that $\chi_A$ splits completely into pairwise distinct linear factors: for example, $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is diagonalizable, although $\chi_A(X) = (X - 1)^2$.
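These two examples can be checked numerically with the eigenspace-dimension criterion from the list above. The helper `is_diagonalizable` below is invented for illustration (it is not a NumPy function), and it assumes floating-point tolerances are adequate for the matrices at hand:

```python
import numpy as np

def is_diagonalizable(A, tol=1e-8):
    """Sum the eigenspace dimensions (geometric multiplicities) and compare with n."""
    n = A.shape[0]
    total = 0
    for lam in set(np.round(np.linalg.eigvals(A), 8)):
        # dim E_lam(A) = n - rank(lam*E - A)
        total += n - np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
    return total == n

print(is_diagonalizable(np.array([[0.0, 1.0], [0.0, 0.0]])))  # False: chi_A splits, yet not diagonalizable
print(is_diagonalizable(np.eye(2)))                           # True: repeated linear factor, yet diagonalizable
```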
Properties of a diagonalizable matrix
If a matrix is diagonalizable, the geometric multiplicity of its eigenvalues is equal to the respective algebraic multiplicity . This means that the dimension of the individual eigenspaces corresponds to the algebraic multiplicity of the corresponding eigenvalues in the characteristic polynomial of the matrix.
The matrix power of a diagonalizable matrix $A$ can be calculated by

$$A^n = S \cdot D_A^n \cdot S^{-1}.$$
The power of a diagonal matrix is obtained by raising its diagonal entries to that power.
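This identity can be sketched numerically with NumPy (the example matrix is an arbitrary symmetric choice, so its eigendecomposition is well behaved):

```python
import numpy as np

# A^5 via the diagonalization A = S D_A S^{-1}:
# A^5 = S D_A^5 S^{-1}, where D_A^5 just raises each diagonal entry to the 5th power.
A = np.array([[1.0, 0.0, 1.0], [0.0, 2.0, 0.0], [1.0, 0.0, 1.0]])

eigenvalues, S = np.linalg.eig(A)
D_power = np.diag(eigenvalues ** 5)       # power of a diagonal matrix: elementwise
A_power = S @ D_power @ np.linalg.inv(S)  # A^5 = S D_A^5 S^{-1}

print(np.allclose(A_power, np.linalg.matrix_power(A, 5)))  # True
```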
Diagonalization
If a matrix $A$ can be diagonalized, there is a diagonal matrix $D_A$ for which the similarity condition is fulfilled:

$$D_A = S^{-1}AS.$$
To diagonalize the matrix, one computes the diagonal matrix $D_A$ together with a corresponding basis of eigenvectors. This is done in three steps:
- The eigenvalues $\lambda_i$ of the matrix $A$ are determined. (Individual eigenvalues may occur more than once.)
- The eigenspaces $E\left(\lambda_i\right)$ for all eigenvalues $\lambda_i$ are calculated, i.e. systems of equations of the form $(A - \lambda_i E)\,x = 0$ are solved.
- Because the geometric multiplicity equals the algebraic multiplicity for every eigenvalue, for each maximal set of equal eigenvalues $\lambda_{i_1} = \ldots = \lambda_{i_k}$ we can find a basis $\left\{b_{i_1}, \ldots, b_{i_k}\right\}$ of the common eigenspace $E\left(\lambda_{i_1}\right) = \ldots = E(\lambda_{i_k})$.
- Now the diagonal form of $A$ with respect to the basis $B = \left\{b_1, \ldots, b_n\right\}$ is $D_A = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$, with transformation matrix $S = \left(b_1, \ldots, b_n\right)$.
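The three steps can be sketched in NumPy. This `diagonalize` helper is written for illustration only (it assumes a real spectrum and uses an SVD to read off each eigenspace as a kernel):

```python
import numpy as np

def diagonalize(A, tol=1e-9):
    """Sketch of the three steps: eigenvalues, eigenspace bases, assembly of D_A and S."""
    n = A.shape[0]
    # Step 1: determine the eigenvalues (repeated eigenvalues may occur)
    eigenvalues = np.linalg.eigvals(A).real
    basis, diag = [], []
    for lam in sorted(set(np.round(eigenvalues, 9))):
        # Step 2: eigenspace E(lam) = kernel of (lam*E - A); the rows of Vh that
        # belong to (numerically) zero singular values span this kernel
        _, s, Vh = np.linalg.svd(lam * np.eye(n) - A)
        for sigma, row in zip(s, Vh):
            if sigma < tol:
                basis.append(row)  # Step 3: collect eigenspace basis vectors b_i
                diag.append(lam)
    if len(basis) < n:
        raise ValueError("matrix is not diagonalizable")
    # Assemble D_A = diag(lambda_1, ..., lambda_n) and S = (b_1, ..., b_n)
    return np.diag(diag), np.column_stack(basis)

A = np.array([[1.0, 0.0, 1.0], [0.0, 2.0, 0.0], [1.0, 0.0, 1.0]])
D_A, S = diagonalize(A)
print(np.allclose(np.diag(D_A), [0.0, 2.0, 2.0]))  # True
```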
Simultaneous diagonalization
Occasionally one wants to diagonalize two matrices $A$ and $B$ with the same transformation $S$. If that succeeds, then $S^{-1}AS = D_1$ and $S^{-1}BS = D_2$, and since $D_1$ and $D_2$ are diagonal matrices, they commute, so

$$AB = SD_1S^{-1}SD_2S^{-1} = SD_1D_2S^{-1} = SD_2D_1S^{-1} = BA.$$
So the endomorphisms must commute with one another. In fact, the converse also holds: if two diagonalizable endomorphisms commute, they can be diagonalized simultaneously. In quantum mechanics, this corresponds to a basis of common eigenstates for two such operators.
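For two commuting symmetric matrices, a common numerical trick is to diagonalize a generic linear combination $A + tB$: its eigenvectors then simultaneously diagonalize both matrices. A sketch under that assumption (the matrices and the weight 0.618 are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, -1.0], [-1.0, 3.0]])

# The necessary commutation condition AB = BA holds for this pair
print(np.allclose(A @ B, B @ A))  # True

_, S = np.linalg.eigh(A + 0.618 * B)  # generic combination separates the eigenvalues
D1 = S.T @ A @ S
D2 = S.T @ B @ S
print(np.allclose(D1, np.diag(np.diag(D1))))  # True: S^{-1} A S is diagonal
print(np.allclose(D2, np.diag(np.diag(D2))))  # True: S^{-1} B S is diagonal
```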
Example
Let $A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{pmatrix} \in \mathbb{R}^{3 \times 3}$ be the matrix to be diagonalized. $A$ is (unitarily) diagonalizable, since it is symmetric, i.e. $A = A^T$.
The eigenvalues $\lambda_i$ of $A$ are the zeros of the characteristic polynomial $\chi_A$:

$$\chi_A(\lambda) = \det(\lambda E_3 - A) = \det\begin{pmatrix} \lambda - 1 & 0 & -1 \\ 0 & \lambda - 2 & 0 \\ -1 & 0 & \lambda - 1 \end{pmatrix} = \lambda(\lambda - 2)^2.$$
So $\lambda_1 = 0$ and $\lambda_2 = \lambda_3 = 2$. The eigenvalue 2 has algebraic multiplicity $a_{\lambda_2} = 2$ because it is a double zero of the characteristic polynomial.
To determine the eigenspaces, insert the eigenvalues into $E(\lambda) = \ker(\lambda E_3 - A)$.
To obtain all $v \in \mathbb{R}^3$ with $(\lambda E_3 - A)v = 0$, we solve the homogeneous linear system given by the augmented coefficient matrix $\begin{pmatrix} \lambda E_3 - A \mid 0 \end{pmatrix}$, which has infinitely many solutions.
For $\lambda = 0$ we get $\begin{pmatrix} -A \mid 0 \end{pmatrix} = \left(\begin{array}{ccc|c} -1 & 0 & -1 & 0 \\ 0 & -2 & 0 & 0 \\ -1 & 0 & -1 & 0 \end{array}\right)$; Gaussian elimination yields $\left(\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)$ and thus, as solution set, the eigenspace

$$E(0) = \operatorname{Lin}\left( \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \right),$$

where $\operatorname{Lin}$ denotes the linear span.
For $\lambda = 2$ we get $\begin{pmatrix} 2E_3 - A \mid 0 \end{pmatrix} = \left(\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \end{array}\right)$, from this $\left(\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)$ and thus, as solution set, the eigenspace

$$E(2) = \operatorname{Lin}\left( \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right).$$
From the bases of the eigenspaces we obtain the eigenvectors $v_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, v_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, v_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$; they form a basis of $\mathbb{R}^3$.
Normalizing, with $b_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, b_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, b_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$, we obtain an orthonormal basis $\mathcal{B} = \left\{b_1, b_2, b_3\right\}$: since $A$ is symmetric, eigenvectors of distinct eigenvalues are automatically orthogonal, and the basis vectors of the two-dimensional eigenspace $E(2)$ were chosen orthogonal as well (here $v_2 \perp v_3$).
So $S = (b_1, b_2, b_3)$, and using the properties of orthonormal bases we obtain the inverse

$$S^{-1} = S^T = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & -1 \\ 1 & 0 & 1 \\ 0 & \sqrt{2} & 0 \end{pmatrix}.$$

$D_A$ is determined by

$$D_A = \operatorname{diag}(\lambda_1, \lambda_2, \lambda_3) = \operatorname{diag}(0, 2, 2) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}.$$
So for $D_A = S^{-1}AS$ we get

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 0 & -1 \\ 1 & 0 & 1 \\ 0 & \sqrt{2} & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & \sqrt{2} \\ -1 & 1 & 0 \end{pmatrix}$$

and thus the diagonalization $A = SD_AS^{-1}$.
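The worked example can be verified numerically in NumPy (a check of the computation above, not part of the original derivation):

```python
import numpy as np

# Numerical check: S^T A S reproduces D_A = diag(0, 2, 2)
A = np.array([[1.0, 0.0, 1.0], [0.0, 2.0, 0.0], [1.0, 0.0, 1.0]])
b1 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)  # eigenvalue 0
b2 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # eigenvalue 2
b3 = np.array([0.0, 1.0, 0.0])                # eigenvalue 2
S = np.column_stack([b1, b2, b3])

D_A = S.T @ A @ S  # S^{-1} = S^T, since the b_i form an orthonormal basis
print(np.allclose(D_A, np.diag([0.0, 2.0, 2.0])))  # True
```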