# Transposed matrix

*Animation of the transposition of a matrix*

In mathematics, the transposed matrix, mirrored matrix or transpose of a given matrix is the matrix obtained by swapping the roles of its rows and columns. The first row of the transposed matrix corresponds to the first column of the original matrix, the second row to the second column, and so on. Intuitively, the transposed matrix arises by mirroring the original matrix along its main diagonal. The process of converting a matrix into its transpose is called transposing or transposition of the matrix.

The transposition mapping, which assigns to each matrix its transpose, is always bijective, linear and self-inverse. With respect to matrix addition it is an isomorphism; with respect to matrix multiplication, however, it is an anti-isomorphism, that is, transposition reverses the order in which matrices are multiplied. Many characteristics of matrices, such as the trace, rank, determinant and eigenvalues, are preserved under transposition.

In linear algebra, the transposed matrix is used, among other things, to characterize special classes of matrices. The transposed matrix is also the mapping matrix of the dual mapping of a linear map between two finite-dimensional vector spaces with respect to the respective dual bases, as well as the mapping matrix of the adjoint mapping between two finite-dimensional real inner product spaces with respect to the respective orthonormal bases. The concept of transposing a matrix was introduced in 1858 by the British mathematician Arthur Cayley.

## Definition

If ${\displaystyle K}$ is a field (in practice mostly the field of real or complex numbers), then the transpose of a given matrix

${\displaystyle A=(a_{ij})={\begin{pmatrix}a_{11}&\dots &a_{1n}\\\vdots &&\vdots \\a_{m1}&\dots &a_{mn}\end{pmatrix}}\in K^{m\times n}}$

is defined as

${\displaystyle A^{\mathrm{T}}=(a_{ji})={\begin{pmatrix}a_{11}&\dots &a_{m1}\\\vdots &&\vdots \\a_{1n}&\dots &a_{mn}\end{pmatrix}}\in K^{n\times m}}$.

The transposed matrix ${\displaystyle A^{\mathrm{T}}}$ thus results from reversing the roles of the rows and columns of the original matrix ${\displaystyle A}$. Intuitively, the transposed matrix arises by mirroring the original matrix along its main diagonal ${\displaystyle a_{11},a_{22},\dots,a_{kk}}$ with ${\displaystyle k=\min\{m,n\}}$. Occasionally the transposed matrix is also denoted by ${\displaystyle A^{\top}}$, ${\displaystyle A^{\mathrm{t}}}$ or ${\displaystyle A'}$.

## Examples

Transposing a ${\displaystyle (1\times 3)}$ matrix (a row vector) results in a ${\displaystyle (3\times 1)}$ matrix (a column vector) and vice versa:

${\displaystyle {\begin{pmatrix}2&4&6\end{pmatrix}}^{\mathrm{T}}={\begin{pmatrix}2\\4\\6\end{pmatrix}},\quad {\begin{pmatrix}1\\3\\5\end{pmatrix}}^{\mathrm{T}}={\begin{pmatrix}1&3&5\end{pmatrix}}}$

A square matrix retains its size under transposition, but all entries are mirrored along the main diagonal:

${\displaystyle {\begin{pmatrix}2&3\\4&5\end{pmatrix}}^{\mathrm{T}}={\begin{pmatrix}2&4\\3&5\end{pmatrix}},\quad {\begin{pmatrix}9&8&7\\6&5&4\\3&2&1\end{pmatrix}}^{\mathrm{T}}={\begin{pmatrix}9&6&3\\8&5&2\\7&4&1\end{pmatrix}}}$

Transposing a ${\displaystyle (3\times 2)}$ matrix creates a ${\displaystyle (2\times 3)}$ matrix in which the first row corresponds to the first column of the original matrix and the second row to the second column:

${\displaystyle {\begin{pmatrix}1&4\\8&-2\\-3&5\end{pmatrix}}^{\mathrm{T}}={\begin{pmatrix}1&8&-3\\4&-2&5\end{pmatrix}}}$
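The definition can be sketched directly in code. The following minimal Python snippet is our own illustration (a matrix is represented as a list of rows; the helper name `transpose` is our choice):

```python
def transpose(A):
    """Return the transpose of a matrix given as a list of rows."""
    m, n = len(A), len(A[0])
    # Entry (j, i) of the transpose is entry (i, j) of the original matrix.
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 4], [8, -2], [-3, 5]]   # the 3x2 matrix from the example above
print(transpose(A))              # [[1, 8, -3], [4, -2, 5]]
```

Transposing twice returns the original matrix, in line with the double transposition property below.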

## Properties

### Sum

For the transpose of the sum of two matrices ${\displaystyle A,B\in K^{m\times n}}$ of the same size, the following holds:

${\displaystyle (A+B)^{\mathrm{T}}=(a_{ij}+b_{ij})^{\mathrm{T}}=(a_{ji}+b_{ji})=A^{\mathrm{T}}+B^{\mathrm{T}}}$.

In general, for the sum of ${\displaystyle n}$ matrices ${\displaystyle A_{1},\dotsc,A_{n}\in K^{m\times n}}$ of the same size,

${\displaystyle (A_{1}+A_{2}+\dotsb +A_{n})^{\mathrm{T}}=A_{1}^{\mathrm{T}}+A_{2}^{\mathrm{T}}+\dotsb +A_{n}^{\mathrm{T}}}$.

The transpose of a sum of matrices is therefore equal to the sum of the transposes.
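As a quick numerical check of this rule (a plain-Python sketch with matrices as lists of rows; the helper names are our own):

```python
def transpose(A):
    # zip(*A) yields the columns of A, i.e. the rows of A^T
    return [list(row) for row in zip(*A)]

def mat_add(A, B):
    """Entrywise sum of two matrices of the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [0, 1, 2]]
# (A + B)^T equals A^T + B^T
assert transpose(mat_add(A, B)) == mat_add(transpose(A), transpose(B))
```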

### Scalar multiplication

For the transpose of the product of a matrix ${\displaystyle A\in K^{m\times n}}$ with a scalar ${\displaystyle c\in K}$, the following holds:

${\displaystyle (c\cdot A)^{\mathrm{T}}=(c\cdot a_{ij})^{\mathrm{T}}=(c\cdot a_{ji})=c\cdot A^{\mathrm{T}}}$.

The transpose of the product of a matrix with a scalar is therefore equal to the product of the scalar with the transposed matrix.

### Double transposition

For the transpose of the transpose of a matrix ${\displaystyle A\in K^{m\times n}}$, the following holds:

${\displaystyle \left(A^{\mathrm{T}}\right)^{\mathrm{T}}=(a_{ji})^{\mathrm{T}}=(a_{ij})=A}$.

Double transposition therefore always yields the original matrix again.

### Product

For the transpose of the product of a matrix ${\displaystyle A\in K^{m\times n}}$ with a matrix ${\displaystyle B\in K^{n\times l}}$, the following holds:

${\displaystyle (A\cdot B)^{\mathrm{T}}=\left(\sum _{j=1}^{n}a_{ij}\cdot b_{jk}\right)^{\mathrm{T}}=\left(\sum _{j=1}^{n}a_{kj}\cdot b_{ji}\right)=\left(\sum _{j=1}^{n}b_{ji}\cdot a_{kj}\right)=B^{\mathrm{T}}\cdot A^{\mathrm{T}}}$.

In general, for the product of ${\displaystyle n}$ matrices ${\displaystyle A_{1},\dotsc,A_{n}}$ of compatible sizes,

${\displaystyle (A_{1}\cdot A_{2}\dotsm A_{n})^{\mathrm{T}}=A_{n}^{\mathrm{T}}\dotsm A_{2}^{\mathrm{T}}\cdot A_{1}^{\mathrm{T}}}$.

The transpose of a product of matrices is therefore equal to the product of the transposes, but in reverse order.
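The reversal of the factor order can be verified on a small example (plain-Python sketch; helper names our own):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    """Product of an m x n and an n x l matrix."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]        # 2x3
B = [[7, 8], [9, 10], [11, 12]]   # 3x2
# (A·B)^T = B^T · A^T -- note the reversed order of the factors
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```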

### Inverse

The transpose of a regular (invertible) matrix is also regular. For the transpose of the inverse of a regular matrix ${\displaystyle A\in K^{n\times n}}$, the following holds:

${\displaystyle \left(A^{-1}\right)^{\mathrm{T}}=\left(A^{\mathrm{T}}\right)^{-1}}$,

because with the identity matrix ${\displaystyle I\in K^{n\times n}}$ one obtains

${\displaystyle A^{\mathrm{T}}\cdot \left(A^{-1}\right)^{\mathrm{T}}=\left(A^{-1}\cdot A\right)^{\mathrm{T}}=I^{\mathrm{T}}=I}$

and therefore ${\displaystyle (A^{-1})^{\mathrm{T}}}$ is the inverse of ${\displaystyle A^{\mathrm{T}}}$. The transpose of the inverse matrix is thus equal to the inverse of the transposed matrix. This matrix is sometimes also denoted by ${\displaystyle A^{-\mathrm{T}}}$.
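For a concrete check, the 2×2 case can be worked out by hand (plain-Python sketch; `inv2` is our own helper using the standard 2×2 adjugate formula):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    """Inverse of a regular 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c            # must be nonzero for A to be regular
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [5.0, 3.0]]       # det = 1, so A is regular
# (A^{-1})^T equals (A^T)^{-1}
assert transpose(inv2(A)) == inv2(transpose(A))
```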

### Exponential and logarithm

For the matrix exponential of the transpose of a real or complex square matrix ${\displaystyle A\in \mathbb{K}^{n\times n}}$, the following holds:

${\displaystyle \exp(A^{\mathrm{T}})=(\exp A)^{\mathrm{T}}}$.

Correspondingly, for the matrix logarithm of the transpose of a regular real or complex matrix,

${\displaystyle \ln(A^{\mathrm{T}})=(\ln A)^{\mathrm{T}}}$.

### Transposition mapping

The mapping

${\displaystyle K^{m\times n}\to K^{n\times m},\quad A\mapsto A^{\mathrm{T}}}$,

which assigns to each matrix its transpose, is called the transposition mapping. Due to the above laws, the transposition mapping has the following properties:

• The transposition mapping is always bijective, linear and self-inverse.
• The transposition mapping is an isomorphism between the matrix spaces ${\displaystyle K^{m\times n}}$ and ${\displaystyle K^{n\times m}}$.
• On the general linear group ${\displaystyle \operatorname{GL}(n,K)}$ and on the matrix ring ${\displaystyle K^{n\times n}}$, the transposition mapping (for ${\displaystyle m=n}$) is an anti-automorphism.

### Block matrices

The transpose of a block matrix with ${\displaystyle r}$ row partitions and ${\displaystyle s}$ column partitions is given by

${\displaystyle {\begin{pmatrix}A_{11}&\cdots &A_{1s}\\\vdots &&\vdots \\A_{r1}&\cdots &A_{rs}\end{pmatrix}}^{\mathrm{T}}={\begin{pmatrix}A_{11}^{\mathrm{T}}&\cdots &A_{r1}^{\mathrm{T}}\\\vdots &&\vdots \\A_{1s}^{\mathrm{T}}&\cdots &A_{rs}^{\mathrm{T}}\end{pmatrix}}}$.

It is obtained by mirroring all blocks along the main diagonal and then transposing each individual block.
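The block rule can be illustrated in plain Python (our own sketch; `assemble` glues a 2×2 grid of equally sized blocks into one matrix):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def assemble(blocks):
    """Glue a 2D grid of equally sized blocks into one matrix."""
    rows = []
    for block_row in blocks:
        for r in range(len(block_row[0])):
            rows.append([x for blk in block_row for x in blk[r]])
    return rows

# 2x2 block partition of a 4x4 matrix, each block 2x2
A11, A12 = [[1, 2], [5, 6]], [[3, 4], [7, 8]]
A21, A22 = [[9, 10], [13, 14]], [[11, 12], [15, 16]]

A = assemble([[A11, A12], [A21, A22]])
# transpose of the block matrix = blocks mirrored, each block transposed
B = assemble([[transpose(A11), transpose(A21)],
              [transpose(A12), transpose(A22)]])
assert transpose(A) == B
```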

## Characteristics

### Rank

For a matrix ${\displaystyle A\in K^{m\times n}}$, the rank of the transposed matrix is equal to that of the original matrix, that is,

${\displaystyle \operatorname{rank}(A^{\mathrm{T}})=\operatorname{rank}(A)}$.

The image of the mapping ${\displaystyle x\mapsto Ax}$ is spanned by the column vectors of ${\displaystyle A}$, while the image of the mapping ${\displaystyle x\mapsto A^{\mathrm{T}}x}$ is spanned by the row vectors of ${\displaystyle A}$. The dimensions of these two images always agree.

### Trace

For a square matrix ${\displaystyle A\in K^{n\times n}}$, the trace (the sum of the main diagonal entries) of the transposed matrix is equal to the trace of the original matrix, that is,

${\displaystyle \operatorname{tr}(A^{\mathrm{T}})=\operatorname{tr}(A)}$,

because the diagonal entries of the transposed matrix agree with those of the original matrix.
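As a one-line check (plain Python, matrix as a list of rows; the helper name is our own):

```python
def trace(M):
    """Sum of the main diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

A = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
AT = [list(row) for row in zip(*A)]   # transpose of A
assert trace(A) == trace(AT) == 15    # 9 + 5 + 1
```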

### Determinant

For a square matrix ${\displaystyle A\in K^{n\times n}}$, the determinant of the transposed matrix is equal to the determinant of the original matrix, that is,

${\displaystyle \det(A^{\mathrm{T}})=\det(A)}$.

This follows from the Leibniz formula for determinants via

${\displaystyle \det(A)=\sum _{\sigma \in S_{n}}\left(\operatorname{sgn}(\sigma)a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}\right)=\sum _{\sigma \in S_{n}}\left(\operatorname{sgn}(\sigma)a_{\sigma(1),1}\cdots a_{\sigma(n),n}\right)=\det(A^{\mathrm{T}})}$,

where the sum runs over all permutations ${\displaystyle \sigma}$ of the symmetric group ${\displaystyle S_{n}}$ and ${\displaystyle \operatorname{sgn}(\sigma)}$ denotes the sign of the permutation ${\displaystyle \sigma}$.
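For small matrices, the Leibniz formula can be evaluated directly. The following plain-Python sketch (helper names our own) sums over all permutations and confirms that the determinant is invariant under transposition:

```python
from itertools import permutations

def det_leibniz(A):
    """Determinant via the Leibniz formula (feasible only for small n)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        # sign of sigma, computed via its number of inversions
        sgn = (-1) ** sum(sigma[i] > sigma[j]
                          for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += sgn * prod
    return total

A = [[2, 3, 1], [4, 5, 6], [7, 8, 9]]
AT = [list(row) for row in zip(*A)]
assert det_leibniz(A) == det_leibniz(AT) == 9
```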

### Spectrum

For a square matrix ${\displaystyle A\in K^{n\times n}}$, due to the invariance of the determinant under transposition, the characteristic polynomial of the transposed matrix is also identical to that of the original matrix, because

${\displaystyle \chi _{A^{\mathrm{T}}}(\lambda)=\det(\lambda I-A^{\mathrm{T}})=\det((\lambda I-A^{\mathrm{T}})^{\mathrm{T}})=\det(\lambda I-A)=\chi _{A}(\lambda)}$.

Therefore the eigenvalues of the transposed matrix agree with those of the original matrix, that is, for the respective spectra we have

${\displaystyle \sigma(A^{\mathrm{T}})=\sigma(A)}$.

However, the eigenvectors and eigenspaces do not have to match.

### Similarity

Every square matrix ${\displaystyle A\in K^{n\times n}}$ is similar to its transpose, that is, there exists a regular matrix ${\displaystyle S\in K^{n\times n}}$ such that

${\displaystyle A^{\mathrm{T}}=S^{-1}AS}$

holds. The matrix ${\displaystyle S}$ can even be chosen to be symmetric. From this it follows, among other things, that a square matrix and its transpose have the same minimal polynomial and, provided its characteristic polynomial splits completely into linear factors, also the same Jordan normal form.

### Norms

The Euclidean norm of a real vector ${\displaystyle x\in \mathbb{R}^{n}}$ is given by

${\displaystyle \|x\|_{2}={\sqrt{x^{\mathrm{T}}x}}}$.

For the Frobenius norm and the spectral norm of the transpose of a real or complex matrix ${\displaystyle A\in \mathbb{K}^{m\times n}}$, the following holds:

${\displaystyle \|A^{\mathrm{T}}\|_{F}=\|A\|_{F}}$   and   ${\displaystyle \|A^{\mathrm{T}}\|_{2}=\|A\|_{2}}$.

The row sum norm and the column sum norm of the transpose and of the original matrix are related as follows:

${\displaystyle \|A^{\mathrm{T}}\|_{\infty}=\|A\|_{1}}$   and   ${\displaystyle \|A^{\mathrm{T}}\|_{1}=\|A\|_{\infty}}$.
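These relations are easy to verify numerically (plain-Python sketch; the helper names simply reflect the definitions of the maximum absolute row sum and column sum):

```python
def row_sum_norm(A):
    """Maximum absolute row sum, i.e. ||A||_inf."""
    return max(sum(abs(x) for x in row) for row in A)

def col_sum_norm(A):
    """Maximum absolute column sum, i.e. ||A||_1."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1, -2, 3], [4, 5, -6]]
AT = [list(row) for row in zip(*A)]   # transpose of A
assert row_sum_norm(AT) == col_sum_norm(A) == 9
assert col_sum_norm(AT) == row_sum_norm(A) == 15
```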

### Scalar products

The standard scalar product ${\displaystyle \langle \cdot,\cdot \rangle}$ of two real vectors ${\displaystyle x,y\in \mathbb{R}^{n}}$ is given by

${\displaystyle \langle x,y\rangle =x^{\mathrm{T}}y}$.

With respect to the standard scalar product, a real matrix ${\displaystyle A\in \mathbb{R}^{m\times n}}$ and its transpose satisfy the shift property

${\displaystyle \langle Ax,y\rangle =\langle x,A^{\mathrm{T}}y\rangle}$

for all vectors ${\displaystyle x\in \mathbb{R}^{n}}$ and ${\displaystyle y\in \mathbb{R}^{m}}$, where the standard scalar product in ${\displaystyle \mathbb{R}^{m}}$ appears on the left and the standard scalar product in ${\displaystyle \mathbb{R}^{n}}$ on the right. For the Frobenius scalar product of two matrices ${\displaystyle A,B\in \mathbb{R}^{m\times n}}$ we have

${\displaystyle \langle A,B\rangle _{F}=\operatorname{tr}(A^{\mathrm{T}}B)=\operatorname{tr}(BA^{\mathrm{T}})=\operatorname{tr}(AB^{\mathrm{T}})=\langle A^{\mathrm{T}},B^{\mathrm{T}}\rangle _{F}}$,

because matrices under the trace may be cyclically interchanged and the trace is invariant under transposition.
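The shift property can be checked numerically (plain-Python sketch; helper names our own):

```python
def matvec(A, x):
    """Matrix-vector product A·x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    """Standard scalar product of two vectors."""
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2, 3], [4, 5, 6]]               # real 2x3 matrix
AT = [list(row) for row in zip(*A)]      # its transpose, 3x2
x = [1, 0, -2]                           # vector in R^3
y = [3, -1]                              # vector in R^2
# <Ax, y> in R^2 equals <x, A^T y> in R^3
assert dot(matvec(A, x), y) == dot(x, matvec(AT, y)) == -7
```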

## Use

### Special matrices

The transposed matrix is used in a number of definitions in linear algebra:

• A symmetric matrix is a square matrix that is equal to its transpose, that is ${\displaystyle A^{\mathrm{T}}=A}$.
• A skew-symmetric matrix is a square matrix that is equal to the negative of its transpose, that is ${\displaystyle A^{\mathrm{T}}=-A}$.
• A Hermitian matrix is a complex square matrix whose transpose is equal to its conjugate, that is ${\displaystyle A^{\mathrm{T}}={\bar{A}}}$.
• A skew-Hermitian matrix is a complex square matrix whose transpose is equal to the negative of its conjugate, that is ${\displaystyle A^{\mathrm{T}}=-{\bar{A}}}$.
• An orthogonal matrix is a square matrix whose transpose is equal to its inverse, that is ${\displaystyle A^{\mathrm{T}}=A^{-1}}$.
• A (real) normal matrix is a real square matrix that commutes with its transpose, that is ${\displaystyle A^{\mathrm{T}}A=AA^{\mathrm{T}}}$.
• For any real matrix ${\displaystyle A}$, the two Gram matrices ${\displaystyle A^{\mathrm{T}}A}$ and ${\displaystyle AA^{\mathrm{T}}}$ are always symmetric and positive semidefinite.
• The dyadic product of two vectors ${\displaystyle x}$ and ${\displaystyle y}$ gives the matrix ${\displaystyle xy^{\mathrm{T}}}$.
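The last two points can be illustrated with a small sketch (plain Python; helper names our own):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4], [5, 6]]     # an arbitrary real 3x2 matrix
G = mat_mul(transpose(A), A)     # Gram matrix A^T A
assert G == transpose(G)         # the Gram matrix is symmetric

x, y = [1, 2], [3, 4, 5]
dyad = [[xi * yj for yj in y] for xi in x]   # dyadic product x y^T, 2x3
assert dyad == [[3, 4, 5], [6, 8, 10]]
```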

### Bilinear forms

If ${\displaystyle V}$ and ${\displaystyle W}$ are finite-dimensional vector spaces over the field ${\displaystyle K}$, then every bilinear form ${\displaystyle b\colon V\times W\to K}$ can, after choosing a basis ${\displaystyle \{v_{1},\ldots,v_{m}\}}$ for ${\displaystyle V}$ and a basis ${\displaystyle \{w_{1},\ldots,w_{n}\}}$ for ${\displaystyle W}$, be described by the representation matrix

${\displaystyle A_{b}=(b(v_{i},w_{j}))_{ij}\in K^{m\times n}}$.

If ${\displaystyle x=(x_{1},\ldots,x_{m})^{\mathrm{T}}}$ and ${\displaystyle y=(y_{1},\ldots,y_{n})^{\mathrm{T}}}$ are the coordinate vectors of two vectors ${\displaystyle v\in V}$ and ${\displaystyle w\in W}$, then the value of the bilinear form is given by

${\displaystyle b(v,w)=x^{\mathrm{T}}A_{b}y}$.

If now ${\displaystyle \{v'_{1},\ldots,v'_{m}\}}$ and ${\displaystyle \{w'_{1},\ldots,w'_{n}\}}$ are further bases of ${\displaystyle V}$ and ${\displaystyle W}$, then the corresponding representation matrix satisfies

${\displaystyle A_{b'}=S^{\mathrm{T}}A_{b}T}$,

where ${\displaystyle S\in K^{m\times m}}$ is the change-of-basis matrix in ${\displaystyle V}$ and ${\displaystyle T\in K^{n\times n}}$ the change-of-basis matrix in ${\displaystyle W}$. Two square matrices ${\displaystyle A,B\in K^{n\times n}}$ are therefore congruent to one another, that is,

${\displaystyle A=S^{\mathrm{T}}BS}$

with a regular matrix ${\displaystyle S\in K^{n\times n}}$, precisely when they represent the same bilinear form ${\displaystyle b\colon V\times V\to K}$ with respect to possibly different bases.

### Dual maps

If again ${\displaystyle V}$ and ${\displaystyle W}$ are finite-dimensional vector spaces over the field ${\displaystyle K}$ with associated dual spaces ${\displaystyle V^{\ast}}$ and ${\displaystyle W^{\ast}}$, then the dual map ${\displaystyle f^{\ast}\colon W^{\ast}\to V^{\ast}}$ associated with a given linear map ${\displaystyle f\colon V\to W}$ is characterized by

${\displaystyle f^{\ast}(\varphi)=\varphi \circ f}$

for all ${\displaystyle \varphi \in W^{\ast}}$. If ${\displaystyle \{v_{1},\ldots,v_{m}\}}$ is a basis for ${\displaystyle V}$ and ${\displaystyle \{w_{1},\ldots,w_{n}\}}$ a basis for ${\displaystyle W}$ with associated dual bases ${\displaystyle \{v_{1}^{\ast},\ldots,v_{m}^{\ast}\}}$ and ${\displaystyle \{w_{1}^{\ast},\ldots,w_{n}^{\ast}\}}$, then the mapping matrices ${\displaystyle A_{f}\in K^{n\times m}}$ of ${\displaystyle f}$ and ${\displaystyle A_{f^{\ast}}\in K^{m\times n}}$ of ${\displaystyle f^{\ast}}$ satisfy

${\displaystyle A_{f^{\ast}}=A_{f}^{\mathrm{T}}}$.

The mapping matrix of the dual mapping with respect to the dual bases is accordingly precisely the transpose of the mapping matrix of the primal mapping with respect to the primal bases. In physics , this concept is used for covariant and contravariant vector quantities.

If now ${\displaystyle V}$ and ${\displaystyle W}$ are finite-dimensional real inner product spaces, then the adjoint map ${\displaystyle f^{\ast}\colon W\to V}$ belonging to a given linear map ${\displaystyle f\colon V\to W}$ is characterized by the relation

${\displaystyle \langle f(v),w\rangle =\langle v,f^{\ast}(w)\rangle}$

for all ${\displaystyle v\in V}$ and ${\displaystyle w\in W}$. If, furthermore, ${\displaystyle \{v_{1},\ldots,v_{m}\}}$ is an orthonormal basis of ${\displaystyle V}$, ${\displaystyle \{w_{1},\ldots,w_{n}\}}$ an orthonormal basis of ${\displaystyle W}$ and ${\displaystyle A_{f}\in \mathbb{R}^{n\times m}}$ the mapping matrix of ${\displaystyle f}$ with respect to these bases, then the mapping matrix ${\displaystyle A_{f^{\ast}}\in \mathbb{R}^{m\times n}}$ of ${\displaystyle f^{\ast}}$ with respect to these bases is likewise

${\displaystyle A_{f^{\ast}}=A_{f}^{\mathrm{T}}}$.

For real matrices, the adjoint matrix of a given matrix ${\displaystyle A}$ is thus precisely the transposed matrix, that is ${\displaystyle A^{\ast}=A^{\mathrm{T}}}$. In functional analysis, this concept is generalized to adjoint operators between infinite-dimensional Hilbert spaces.

### Permutations

The transposed matrix also defines special permutations. If the numbers from ${\displaystyle 1}$ to ${\displaystyle m\cdot n}$ are written row by row into an ${\displaystyle (m\times n)}$ matrix and then read off again column by column (which corresponds exactly to transposing the matrix), the result is a permutation ${\displaystyle \pi}$ of these numbers, which is given by

${\displaystyle \pi(n(i-1)+j)=i+m(j-1)}$

for ${\displaystyle i=1,\ldots,m}$ and ${\displaystyle j=1,\ldots,n}$. The number of inversions of ${\displaystyle \pi}$, and thus also its sign, can be determined explicitly by

${\displaystyle |\operatorname{inv}(\pi)|={\binom{m}{2}}{\binom{n}{2}}}$   and   ${\displaystyle \operatorname{sgn}(\pi)=(-1)^{{\binom{m}{2}}{\binom{n}{2}}}}$.

In number theory, these permutations are used, for example in Zolotarev's lemma, to prove the law of quadratic reciprocity.
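This permutation and the inversion-count formula can be checked directly (plain-Python sketch; `math.comb` computes the binomial coefficients, the function name is our own):

```python
from math import comb

def transposition_permutation(m, n):
    """Write 1..m*n row by row into an m x n matrix, read it off
    column by column; returns the resulting permutation as a dict."""
    return {n * (i - 1) + j: i + m * (j - 1)
            for i in range(1, m + 1) for j in range(1, n + 1)}

m, n = 3, 4
pi = transposition_permutation(m, n)
# count the inversions of pi explicitly
inv = sum(pi[a] > pi[b] for a in range(1, m * n + 1)
          for b in range(a + 1, m * n + 1))
assert inv == comb(m, 2) * comb(n, 2) == 18
assert (-1) ** inv == 1   # hence sgn(pi) = +1 in this case
```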

## Generalizations

More generally, one can also consider matrices with entries from a ring (possibly with identity), and a large part of the properties of transposed matrices is then retained. Over arbitrary rings, however, the column rank of a matrix need not match its row rank. The product formula and the determinant representation are only valid over commutative rings.

## Literature

Original work

• Arthur Cayley: A memoir on the theory of matrices. In: Philosophical Transactions of the Royal Society of London. Volume 148, 1858, pp. 17–37.
