The adjugate, classical adjoint (not to be confused with the adjoint matrix, i.e. the conjugate transpose) or complementary matrix of a square matrix is a term from the mathematical branch of linear algebra. It denotes the transpose of the cofactor matrix, i.e. the transpose of the matrix whose entries are the signed minors (subdeterminants).
With the help of the adjugate one can calculate the inverse of a regular (invertible) square matrix.
Definition
The adjugate $\operatorname{adj}(A)$ of a square matrix $A \in K^{n \times n}$ with entries from a field $K$ (or more generally from a commutative ring) is defined as

$$\operatorname{adj}(A) = \operatorname{Cof}(A)^T = \tilde{A}^T = \begin{pmatrix} \tilde{a}_{11} & \tilde{a}_{12} & \cdots & \tilde{a}_{1n} \\ \tilde{a}_{21} & \tilde{a}_{22} & & \tilde{a}_{2n} \\ \vdots & & \ddots & \vdots \\ \tilde{a}_{n1} & \tilde{a}_{n2} & \cdots & \tilde{a}_{nn} \end{pmatrix}^T = \begin{pmatrix} \tilde{a}_{11} & \tilde{a}_{21} & \cdots & \tilde{a}_{n1} \\ \tilde{a}_{12} & \tilde{a}_{22} & & \tilde{a}_{n2} \\ \vdots & & \ddots & \vdots \\ \tilde{a}_{1n} & \tilde{a}_{2n} & \cdots & \tilde{a}_{nn} \end{pmatrix}.$$
Note that the entry at position $(j,i)$ is the cofactor $\tilde{a}_{ij}$. The cofactors are computed as

$$\tilde{a}_{ij} = (-1)^{i+j} \cdot M_{ij} = (-1)^{i+j} \cdot \det \begin{pmatrix} a_{1,1} & \cdots & a_{1,j-1} & a_{1,j+1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{i-1,1} & \cdots & a_{i-1,j-1} & a_{i-1,j+1} & \cdots & a_{i-1,n} \\ a_{i+1,1} & \cdots & a_{i+1,j-1} & a_{i+1,j+1} & \cdots & a_{i+1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & \cdots & a_{n,j-1} & a_{n,j+1} & \cdots & a_{n,n} \end{pmatrix}.$$
The minors $M_{ij}$ are therefore the values of the subdeterminants of the matrix $A$ that result from deleting the $i$-th row and the $j$-th column.
Since the adjugate rarely appears in today's textbooks and the notation is not always consistent in older works, caution is advised: often the same notation is used for the adjugate and for the adjoint matrix (i.e. the transpose for real matrices, the conjugate transpose for complex matrices).
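The definition translates directly into code. Below is a minimal sketch in Python with NumPy; the function name `adjugate` and the explicit cofactor loop are illustrative choices, not a library API:

```python
import numpy as np

def adjugate(A):
    """Compute adj(A) as the transpose of the cofactor matrix.

    The cofactor a~_ij = (-1)^(i+j) * M_ij, where the minor M_ij is
    the determinant of A with row i and column j deleted.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T  # adj(A) = Cof(A)^T

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(adjugate(A))  # [[ 4. -2.]
                    #  [-3.  1.]]
```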
Examples
(2 × 2) matrix
Every $2 \times 2$ matrix has the form

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
The adjugate of this matrix is

$$\operatorname{adj}(A) = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}^T = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
(3 × 3) matrix
Every $3 \times 3$ matrix has the form

$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}.$$
The adjugate of this matrix is

$$\begin{aligned} \operatorname{adj}(A) &= \begin{pmatrix} \det\begin{pmatrix} e & f \\ h & i \end{pmatrix} & -\det\begin{pmatrix} d & f \\ g & i \end{pmatrix} & \det\begin{pmatrix} d & e \\ g & h \end{pmatrix} \\ -\det\begin{pmatrix} b & c \\ h & i \end{pmatrix} & \det\begin{pmatrix} a & c \\ g & i \end{pmatrix} & -\det\begin{pmatrix} a & b \\ g & h \end{pmatrix} \\ \det\begin{pmatrix} b & c \\ e & f \end{pmatrix} & -\det\begin{pmatrix} a & c \\ d & f \end{pmatrix} & \det\begin{pmatrix} a & b \\ d & e \end{pmatrix} \end{pmatrix}^T \\[.7em] &= \begin{pmatrix} ei-fh & fg-di & dh-eg \\ ch-bi & ai-cg & bg-ah \\ bf-ce & cd-af & ae-bd \end{pmatrix}^T \\[.7em] &= \begin{pmatrix} ei-fh & ch-bi & bf-ce \\ fg-di & ai-cg & cd-af \\ dh-eg & bg-ah & ae-bd \end{pmatrix}. \end{aligned}$$
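The closed form above can be spot-checked against the cofactor definition for a concrete matrix; the numerical values below are arbitrary test values chosen for the check:

```python
import numpy as np

# Arbitrary test entries for the 3x3 matrix A = [[a,b,c],[d,e,f],[g,h,i]].
a, b, c, d, e, f, g, h, i = 2.0, 1.0, 0.0, 3.0, 5.0, 4.0, 1.0, 2.0, 6.0
A = np.array([[a, b, c], [d, e, f], [g, h, i]])

# The closed-form adjugate (already transposed).
closed_form = np.array([
    [e*i - f*h, c*h - b*i, b*f - c*e],
    [f*g - d*i, a*i - c*g, c*d - a*f],
    [d*h - e*g, b*g - a*h, a*e - b*d],
])

# adj(A) from the definition: transpose of the cofactor matrix.
cof = np.empty((3, 3))
for r in range(3):
    for s in range(3):
        minor = np.delete(np.delete(A, r, axis=0), s, axis=1)
        cof[r, s] = (-1) ** (r + s) * np.linalg.det(minor)

print(np.allclose(closed_form, cof.T))  # True
```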
Properties
The following relationships hold for all matrices $A \in K^{n \times n}$:

$\operatorname{adj}(E) = E$, where $E$ is the identity matrix.

$\operatorname{adj}(0) = 0$ for $n > 1$, where $0$ is the zero matrix. For $1 \times 1$ matrices $A = [a_{11}]$, on the other hand, $\operatorname{adj}([a_{11}]) = [1]$ always holds, even for the zero matrix.

$\operatorname{adj}(AB) = \operatorname{adj}(B) \cdot \operatorname{adj}(A)$

$\operatorname{adj}(A^T) = \operatorname{adj}(A)^T$

$A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A) \cdot E$

$\operatorname{adj}(\lambda A) = \lambda^{n-1} \operatorname{adj}(A)$, where $\lambda \in K$

$\det(\operatorname{adj}(A)) = (\det A)^{n-1}$

$\operatorname{adj}(\operatorname{adj}(A)) = (\det A)^{n-2} \, A$; in particular, $\operatorname{adj}(\operatorname{adj}(A)) = A$ for $2 \times 2$ matrices.

For invertible matrices the following also applies:

$(\operatorname{adj}(A))^{-1} = \frac{1}{\det(A)} \, A = \operatorname{adj}(A^{-1})$
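Several of these identities are easy to check numerically. A minimal sketch with NumPy follows; the `adjugate` helper is an illustrative cofactor-based implementation, not a library function:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    cof = np.empty((n, n))
    for r in range(n):
        for s in range(n):
            minor = np.delete(np.delete(A, r, axis=0), s, axis=1)
            cof[r, s] = (-1) ** (r + s) * np.linalg.det(minor)
    return cof.T

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
E = np.eye(n)

# A . adj(A) = adj(A) . A = det(A) . E
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * E)
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * E)
# adj(AB) = adj(B) . adj(A)
assert np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A))
# det(adj(A)) = det(A)^(n-1)
assert np.allclose(np.linalg.det(adjugate(A)), np.linalg.det(A) ** (n - 1))
print("all identities verified")
```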
Computing the inverse of a matrix
The individual columns of the inverse of a matrix $A$ are each obtained as the solution of the system of equations $Ax = e_j$ with the $j$-th unit vector as the right-hand side. Computing these with Cramer's rule yields the formula

$$A^{-1} = \frac{1}{\det(A)} \, \operatorname{adj}(A).$$
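The column-by-column view translates directly into code; a sketch assuming NumPy, building each column of the inverse as the solution of $Ax = e_j$:

```python
import numpy as np

# Column j of A^-1 solves the linear system A x = e_j.
A = np.array([[2.0, 1.0], [5.0, 3.0]])
n = A.shape[0]
inv_cols = [np.linalg.solve(A, np.eye(n)[:, j]) for j in range(n)]
A_inv = np.column_stack(inv_cols)

print(np.allclose(A @ A_inv, np.eye(n)))  # True
```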
An invertible $2 \times 2$ matrix can thus be inverted very simply:

$$A^{-1} = \frac{1}{\det(A)} \, \operatorname{adj}(A) = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
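Put together, the adjugate formula gives a simple (pedagogical, not numerically recommended) way to invert a matrix; `inverse_via_adjugate` is an illustrative name, not a library function:

```python
import numpy as np

def inverse_via_adjugate(A):
    """Invert A using A^-1 = adj(A) / det(A).

    Illustrative only: this costs n^2 determinant evaluations,
    so for numerical work prefer np.linalg.inv or np.linalg.solve.
    """
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("matrix is singular")
    cof = np.empty((n, n))
    for r in range(n):
        for s in range(n):
            minor = np.delete(np.delete(A, r, axis=0), s, axis=1)
            cof[r, s] = (-1) ** (r + s) * np.linalg.det(minor)
    return cof.T / det_A  # adj(A) / det(A)

A = np.array([[2.0, 1.0], [5.0, 3.0]])  # det = 2*3 - 1*5 = 1
print(inverse_via_adjugate(A))  # [[ 3. -1.]
                                #  [-5.  2.]]
```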