Frobenius normal form


The Frobenius normal form (after Ferdinand Georg Frobenius) or rational canonical form of a square matrix $A$ with entries in an arbitrary field is a transformed matrix $T^{-1}AT$ (with an invertible matrix $T$) that has a special, well-organized shape. "Well-organized" because every matrix can be transformed into exactly one matrix of this shape, and two matrices can therefore be transformed into each other if and only if they have the same Frobenius normal form. In that case the two matrices are also said to be similar, because they represent the same linear map with respect to different bases. For every linear map of a finite-dimensional vector space into itself there is therefore a basis with respect to which it is represented in Frobenius normal form. There can be several such bases, so the transformation matrix $T$ is not uniquely determined.

On the one hand, the Frobenius normal form can be understood as an alternative to the Jordan normal form (which in turn is a generalization of the diagonal form), with the advantage that the characteristic polynomial no longer has to split into linear factors. On the other hand, Frobenius' lemma characterizes similar matrices by the elementary divisors of their characteristic matrices and yields the Frobenius normal form as a normal form of the vector space under the operation of a polynomial ring.

Generalization of diagonalization

If a matrix $A$ can be diagonalized, its characteristic polynomial $\chi_A = (X - \lambda_1)\cdots(X - \lambda_n)$ splits into linear factors with eigenvalues $\lambda_i$. The associated eigenvectors $x_i$ with $Ax_i = \lambda_i x_i$ form a basis of the vector space in which each basis vector is mapped by $A$ to a multiple of itself.
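
For example, the matrix

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

has characteristic polynomial $\chi_A = (X-1)(X-3)$ and eigenvectors $x_1 = (1,-1)^{\mathsf T}$, $x_2 = (1,1)^{\mathsf T}$; with respect to the basis $x_1, x_2$ it is represented by the diagonal matrix $\operatorname{diag}(1,3)$.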

A non-diagonalizable matrix does not have enough eigenvectors for a basis, or the characteristic polynomial splits into irreducible factors that do not all have degree 1. In order to determine the Frobenius normal form of $A$, one then looks, analogously to the previous paragraph, for a basis of vectors that are annihilated by certain products $d_1, d_2, \dots$ of the irreducible factors. It turns out that this is possible in such a way that one finally obtains a representation in which $d_1$ is a divisor of $d_2$, $d_2$ a divisor of $d_3$, and so on. To the factor $d_j = X^m + c_{m-1}X^{m-1} + \dots + c_1X + c_0$ belong basis vectors $x, Ax, \dots, A^{m-1}x$, whose span is mapped into itself by $A$ because $d_j(A)x = 0$, and on these basis vectors $A$ is represented by the matrix

$$\begin{pmatrix} 0 & & & -c_0 \\ 1 & \ddots & & -c_1 \\ & \ddots & 0 & \vdots \\ & & 1 & -c_{m-1} \end{pmatrix}$$

(the entries not specified in this so-called companion matrix of the polynomial $d_j$ are 0). The entire vector space decomposes into such $A$-invariant subspaces, and $A$ can be transformed into the block diagonal matrix

$$\begin{pmatrix} B_{d_1} & & \\ & \ddots & \\ & & B_{d_k} \end{pmatrix},$$

where $B_{d_j}$ denotes the companion matrix of $d_j$. This is the Frobenius normal form of $A$.
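
For instance, the companion matrix of the polynomial $d = X^3 - 2X^2 + 3X - 5$ (so $c_0 = -5$, $c_1 = 3$, $c_2 = -2$) is

$$\begin{pmatrix} 0 & 0 & 5 \\ 1 & 0 & -3 \\ 0 & 1 & 2 \end{pmatrix},$$

and $d$ is both the characteristic and the minimal polynomial of this matrix.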

A disadvantage is that the Frobenius normal form of a diagonal matrix with eigenvalues 1 and 2 does not have diagonal shape, but reads

$$\begin{pmatrix} 0 & -2 \\ 1 & 3 \end{pmatrix}.$$

The Weierstrass normal form provides a remedy here: in the block diagonal matrix the companion matrix of $d_j = p_1^{e_1} p_2^{e_2} \cdots$ (decomposition into powers of distinct irreducible factors $p_i$) is replaced by the companion matrices of the individual prime powers $p_i^{e_i}$, in the example by

$$\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$

A matrix is diagonalizable if and only if all these factors are linear and none occurs in the second or higher power; its Weierstrass normal form is then also a diagonal matrix.
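
The example can be verified directly: the diagonal matrix $\operatorname{diag}(1,2)$ has characteristic polynomial $(X-1)(X-2) = X^2 - 3X + 2$, which is also its minimal polynomial, so there is a single invariant factor $d_1 = X^2 - 3X + 2$ with $c_0 = 2$ and $c_1 = -3$, and its companion matrix is

$$\begin{pmatrix} 0 & -c_0 \\ 1 & -c_1 \end{pmatrix} = \begin{pmatrix} 0 & -2 \\ 1 & 3 \end{pmatrix}.$$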

Frobenius' lemma

The set $K[X]$ of all polynomials, that is, expressions of the form $a_mX^m + \dots + a_1X + a_0$ with coefficients $a_i$ in the field $K$, forms a ring, the so-called polynomial ring. If a matrix $A \in K^{n \times n}$ is given, a product of a polynomial $f = a_mX^m + \dots + a_1X + a_0$ and a vector $v \in K^n$ can be defined by $f \cdot v := a_mA^mv + \dots + a_1Av + a_0v$, for which the expected associative and distributive laws hold. One speaks of an operation of the polynomial ring $K[X]$ on the vector space, through which the vector space $K^n$ becomes a $K[X]$-module $M_A$.
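
For example, for $K = \mathbb{Q}$, $A = \begin{pmatrix} 0 & -2 \\ 1 & 3 \end{pmatrix}$ (the companion matrix from above), $f = X^2 - 3X + 2$ and $v = e_1 = (1,0)^{\mathsf T}$ one obtains

$$f \cdot v = A^2v - 3Av + 2v = \begin{pmatrix} -2 \\ 3 \end{pmatrix} - 3\begin{pmatrix} 0 \\ 1 \end{pmatrix} + 2\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

so the characteristic polynomial of $A$ annihilates $e_1$ under this operation.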

After choosing a basis $e_1, \dots, e_n$ of $K^n$, one can specify a $K[X]$-module isomorphism $\varphi\colon K[X]^n/\langle X \cdot E - A\rangle \to M_A$. Its domain is the factor module of $K[X]^n$ modulo $\langle X \cdot E - A\rangle$, where the expression in angle brackets (in a notation chosen ad hoc) denotes the submodule generated by the columns of the characteristic matrix $X \cdot E - A$ (here $E$ is the identity matrix). This isomorphism carries over the operation of the polynomial ring, i.e. $\varphi(f \cdot m) = f \cdot \varphi(m)$ for $f \in K[X]$ and $m \in K[X]^n/\langle X \cdot E - A\rangle$, and it is defined by

$$\varphi(g_1, \dots, g_n) := g_1(A)e_1 + \dots + g_n(A)e_n.$$
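
For example, for $A = \operatorname{diag}(1,2)$ the columns of $X \cdot E - A$ are $(X-1, 0)^{\mathsf T}$ and $(0, X-2)^{\mathsf T}$, and indeed

$$\varphi\bigl((X-1, 0)\bigr) = (A - E)e_1 = 0, \qquad \varphi\bigl((0, X-2)\bigr) = (A - 2E)e_2 = 0,$$

so $\varphi$ vanishes on the submodule $\langle X \cdot E - A\rangle$ and is well defined on the factor module.
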
The characteristic matrix $X \cdot E - A$, with entries in the polynomial ring $K[X]$, can be transformed by the elementary divisor algorithm into a matrix

$$P\,(X \cdot E - A)\,Q = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & d_1 & & \\ & & & \ddots & \\ & & & & d_k \end{pmatrix}$$

with invertible $P, Q \in K[X]^{n \times n}$, where $d_1$ is a divisor of $d_2$, $d_2$ a divisor of $d_3$, and so on, and the polynomials $d_i$ have leading coefficient 1. These polynomials are called the invariant divisors of the characteristic matrix, the powers of the irreducible factors of the $d_i$ are called the elementary divisors, and $d_1 \cdots d_k$ is the characteristic polynomial of $A$, because the determinant of the characteristic matrix changes at most by a nonzero constant factor under multiplication by the invertible matrices $P$ and $Q$, and both polynomials are monic. Moreover, $d_k$ is the minimal polynomial of $A$.
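
In the example $A = \operatorname{diag}(1,2)$, elementary row and column operations over $\mathbb{Q}[X]$ (using that $\gcd(X-1, X-2) = 1$) give

$$X \cdot E - A = \begin{pmatrix} X-1 & 0 \\ 0 & X-2 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 \\ 0 & (X-1)(X-2) \end{pmatrix},$$

so the invariant divisors are $1$ and $(X-1)(X-2)$, and the elementary divisors are $X-1$ and $X-2$.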

Because of the invertibility of $P$ and $Q$, the module $M_A$ is now not only isomorphic (namely via $\varphi$) to $K[X]^n/\langle X \cdot E - A\rangle$, but also isomorphic to $K[X]^n/\langle \operatorname{diag}(1, \dots, 1, d_1, \dots, d_k)\rangle$. This factor module decomposes as a direct sum $K[X]/(d_1) \oplus \dots \oplus K[X]/(d_k)$; see also the theorem on the invariant factors of finitely generated modules over a principal ideal domain. The operation of the polynomial $X$ on the direct summand $K[X]/(d_j)$ is represented by the companion matrix of $d_j$ if a basis is chosen as in the previous section, and the operation of $X$, that is of $A$, on the entire module is represented by the Frobenius normal form.
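
In the example this reads $M_A \cong \mathbb{Q}[X]/(1) \oplus \mathbb{Q}[X]/\bigl((X-1)(X-2)\bigr) \cong \mathbb{Q}[X]/(X^2 - 3X + 2)$; in the basis $1, X$ of this summand the operation of $X$ is given by $X \cdot 1 = X$ and $X \cdot X = X^2 \equiv 3X - 2$, which is exactly the companion matrix

$$\begin{pmatrix} 0 & -2 \\ 1 & 3 \end{pmatrix}.$$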

If $B \in K^{n \times n}$ is another matrix, it turns $K^n$ into another module $M_B$. An isomorphism $\psi\colon M_A \to M_B$ must carry over the operation of $X$, that is $\psi(Av) = B\psi(v)$, which means that the matrix representing $\psi$ with respect to the chosen basis transforms $A$ into $B$. Similarity of the matrices $A$ and $B$ is therefore equivalent to isomorphism of the associated modules $M_A$ and $M_B$; and the decomposition into invariant factors discussed above has shown that this isomorphism exists if and only if the characteristic matrices $X \cdot E - A$ and $X \cdot E - B$ have the same elementary divisors. This statement is known as Frobenius' lemma.
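
For example, the matrices $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ have the same characteristic polynomial $(X-1)^2$, but their characteristic matrices have different elementary divisors, namely $X-1, X-1$ in the first case and $(X-1)^2$ in the second; by Frobenius' lemma the two matrices are therefore not similar.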

As a further consequence of what has been shown, one obtains the Cayley–Hamilton theorem: the operation of the characteristic polynomial $d_1 \cdots d_k$ annihilates all direct summands, because every $d_j$ is a divisor of $d_1 \cdots d_k$. Therefore, if a matrix is substituted into its characteristic polynomial, the result is the zero map.
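
For the companion matrix $A = \begin{pmatrix} 0 & -2 \\ 1 & 3 \end{pmatrix}$ with $\chi_A = X^2 - 3X + 2$ this can be checked directly:

$$A^2 - 3A + 2E = \begin{pmatrix} -2 & -6 \\ 3 & 7 \end{pmatrix} - \begin{pmatrix} 0 & -6 \\ 3 & 9 \end{pmatrix} + \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$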
