The Samuelson–Berkowitz algorithm (after Paul A. Samuelson and Stuart J. Berkowitz) is a method that determines the coefficients of the characteristic polynomial of an arbitrary square input matrix $A \in R^{n \times n}$, and in particular the determinant of $A$. In contrast to, for example, the Faddeev–LeVerrier algorithm, its requirements are less restrictive: matrices whose entries are elements of an arbitrary commutative ring $R$ with identity are also permitted as input, since the method works entirely without divisions.
Notation and idea of the procedure
We denote by

- $I_r$ the $r \times r$ identity matrix,
- $A_r$ the submatrix of $A$ consisting of its first $r$ rows and columns,
- $p_r(\lambda) = \det(\lambda I_r - A_r)$ the characteristic polynomial of $A_r$,
- $R_r$ the row vector with components $a_{r+1,j}$ with $1 \leq j \leq r$,
- $S_r$ the column vector with components $a_{i,r+1}$ with $1 \leq i \leq r$,

and consider the following partitioning of $A_{r+1}$:

$$A_{r+1} = \left[\begin{array}{c|c} A_r & S_r \\ \hline R_r & a_{r+1,r+1} \end{array}\right]$$
The basic idea of the method is to compute the characteristic polynomial of $A_{r+1}$ recursively. With the above notation, we first have

$$\det(A_{r+1}) = a_{r+1,r+1}\,\det(A_r) - R_r\,\mathrm{Adj}(A_r)\,S_r$$

where $\mathrm{Adj}(A_r)$ denotes the adjugate of $A_r$ (this follows from Laplace expansion along the last row).
For the characteristic polynomial of $A_{r+1}$ this means in particular:

$$\begin{aligned} p_{r+1}(\lambda) &= \det(\lambda I_{r+1} - A_{r+1}) \\ &= (\lambda - a_{r+1,r+1})\,p_r(\lambda) - R_r\,\mathrm{Adj}(\lambda I_r - A_r)\,S_r \qquad (*) \end{aligned}$$

In addition, one can show that the adjugate in $(*)$ can be written as follows:

$$\begin{aligned} \mathrm{Adj}(\lambda I_r - A_r) &= \sum_{k=1}^{r} \left( \sum_{j=1}^{k} A_r^{j-1} c_{k-j} \right) \lambda^{r-k} \\ &= \sum_{k=1}^{r} \left( A_r^{k-1} c_0 + \ldots + A_r^{1} c_{k-2} + I_r c_{k-1} \right) \lambda^{r-k} \qquad (**) \end{aligned}$$

Here $c_0, \ldots, c_{r-1}$ are the coefficients of the characteristic polynomial of $A_r$.
Samuelson's formula
We now obtain the desired recursive representation of the characteristic polynomial $p_{r+1}(\lambda)$ of $A_{r+1}$ (called Samuelson's formula in the literature) by combining the two relations $(*)$ and $(**)$ above:

$$\begin{aligned} p_{r+1}(\lambda) &= (\lambda - a_{r+1,r+1})\,p_r(\lambda) - \sum_{k=1}^{r} \left[ \sum_{j=1}^{k} (R_r A_r^{j-1} S_r)\,c_{k-j} \right] \lambda^{r-k} \\ &= (\lambda - a_{r+1,r+1})\,p_r(\lambda) - \sum_{k=1}^{r} \left[ (R_r A_r^{k-1} S_r)\,c_0 + \ldots + (R_r A_r S_r)\,c_{k-2} + (R_r S_r)\,c_{k-1} \right] \lambda^{r-k} \end{aligned}$$
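For $r = 1$ Samuelson's formula reduces to the familiar $2 \times 2$ characteristic polynomial, which allows a quick sanity check. The following Python sketch (illustrative only; the helper name is ours, not from the source) computes $p_2$ from $p_1$ via the formula and compares with $\lambda^2 - \operatorname{tr}(A)\lambda + \det(A)$:

```python
def samuelson_step_2x2(a11, a12, a21, a22):
    """Coefficients [1, c1, c2] of p2(λ) = det(λI - A) for A = [[a11,a12],[a21,a22]],
    computed via Samuelson's formula: p2 = (λ - a22) p1 - (R1 S1) c0."""
    p1 = [1, -a11]               # characteristic polynomial of A1 = [a11]
    c0 = p1[0]                   # leading coefficient of p1
    r1s1 = a21 * a12             # R1 S1 (both vectors are 1x1 here)
    # multiply p1 by (λ - a22), then subtract (R1 S1)·c0 from the constant term
    return [p1[0], p1[1] - a22 * p1[0], -a22 * p1[1] - r1s1 * c0]

# compare with λ² - tr(A)·λ + det(A) for the upper-left 2x2 block of the
# example matrix used later in this article
a11, a12, a21, a22 = 5, 5, 2, 1
assert samuelson_step_2x2(a11, a12, a21, a22) == [1, -(a11 + a22), a11 * a22 - a12 * a21]
```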
Samuelson-Berkowitz method
Matrix-vector notation
In order to formulate an efficient and more readable algorithm, we now translate Samuelson's formula into matrix notation. To do this, we assign to a polynomial $\omega$ of degree $d$,

$$\omega(\lambda) = \sum_{k=0}^{d} \alpha_k \lambda^{d-k},$$

the coefficient vector

$$\overrightarrow{\omega} = \left[\begin{array}{c} \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_d \end{array}\right] \in R^{d+1}$$
as well as the following Toeplitz matrix (which is also a lower triangular matrix):

$$\mathrm{Toep}(\omega) = \left[\begin{array}{ccccc} \alpha_0 & 0 & 0 & \ldots & 0 \\ \alpha_1 & \alpha_0 & 0 & \ldots & 0 \\ \alpha_2 & \alpha_1 & \alpha_0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cdot & \cdot & \cdot & \ldots & \alpha_0 \\ \alpha_d & \alpha_{d-1} & \alpha_{d-2} & \ldots & \alpha_1 \end{array}\right] \in R^{(d+1) \times d}$$

More precisely, the entry at position $(i,j)$ of $\mathrm{Toep}(\omega)$ is given by

$$\left[\mathrm{Toep}(\omega)\right]_{i,j} = \left\{\begin{array}{cl} \alpha_k & \textrm{if } i \geq j \textrm{ and } i - j = k \\ 0 & \textrm{if } i < j \end{array}\right.$$
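The entry formula above translates directly into code. As a small illustrative sketch (the function name `toep` is our own, not from the source), the matrix can be built as a nested list from the coefficient vector $[\alpha_0, \ldots, \alpha_d]$:

```python
def toep(alpha):
    """Toep(ω) for a degree-d polynomial ω given by its coefficient list
    alpha = [α0, ..., αd]: a (d+1) x d lower triangular Toeplitz matrix
    whose (i, j) entry is α_{i-j} if i >= j and 0 otherwise (0-based here)."""
    d = len(alpha) - 1
    return [[alpha[i - j] if i >= j else 0 for j in range(d)]
            for i in range(d + 1)]

# Toep of q(λ) = λ² - λ - 10, i.e. alpha = [1, -1, -10]:
print(toep([1, -1, -10]))   # [[1, 0], [-1, 1], [-10, -1]]
```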
If we now define the polynomial $q_{r+1}$ by

$$\begin{aligned} q_{r+1}(\lambda) &= \lambda^{r+1} - a_{r+1,r+1}\lambda^{r} - \sum_{k=1}^{r} (R_r A_r^{k-1} S_r)\,\lambda^{r-k} \\ &= \lambda^{r+1} - a_{r+1,r+1}\lambda^{r} - (R_r S_r)\,\lambda^{r-1} - \ldots - (R_r A_r^{r-1} S_r), \end{aligned}$$

then Samuelson's formula can be written in the following compact form:

$$\overrightarrow{p_{r+1}} = \mathrm{Toep}(q_{r+1})\,\overrightarrow{p_r}$$
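In an implementation, the product $\mathrm{Toep}(q_{r+1}) \cdot \overrightarrow{p_r}$ need not materialize the matrix at all: because of the Toeplitz structure it is simply a truncated convolution of the two coefficient vectors. A minimal sketch (function name is ours, under the conventions above):

```python
def toep_mul(q, p):
    """Compute Toep(q) @ p without forming the matrix: q holds the r+2
    coefficients of q_{r+1}, p the r+1 coefficients of p_r; the result
    holds the r+2 coefficients of p_{r+1}."""
    return [sum(q[i - j] * p[j] for j in range(min(i, len(p) - 1) + 1))
            for i in range(len(q))]

# one recursion step with q = λ² - λ - 10 and p = λ - 5:
print(toep_mul([1, -1, -10], [1, -5]))   # [1, -6, -5]
```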
Algebraic top-level formulation
By applying this relation repeatedly, one obtains the following central statement: with $p_1(\lambda) = \lambda - a_{11}$, i.e.

$$\overrightarrow{p_1} = \left[\begin{array}{c} 1 \\ -a_{11} \end{array}\right] = \mathrm{Toep}(q_1),$$

the following holds:

- The coefficients of the characteristic polynomial $p_r(\lambda)$ of $A_r$ for $1 \leq r \leq n$ are given by:
$$\overrightarrow{p_r} = \mathrm{Toep}(q_r) \cdot \mathrm{Toep}(q_{r-1}) \cdots \mathrm{Toep}(q_1)$$
- In particular, for $r = n$ we obtain the coefficients of the characteristic polynomial $p_n(\lambda)$ of $A$:
$$\overrightarrow{p_n} = \mathrm{Toep}(q_n) \cdot \mathrm{Toep}(q_{n-1}) \cdots \mathrm{Toep}(q_1)$$
Algorithm

With this, one can now formulate the following algorithm:

- Initialize $\overrightarrow{\textrm{Vect}} := \left[\begin{array}{c} 1 \\ -a_{11} \end{array}\right]$, the coefficient vector of $p_1$.
- For $r = 1, \ldots, n-1$: compute the products $R_r S_r,\; R_r A_r S_r,\; \ldots,\; R_r A_r^{r-1} S_r$ (via the iterated matrix-vector products $S_r,\, A_r S_r,\, A_r^2 S_r, \ldots$), which yield the coefficients of $q_{r+1}$; then set $\overrightarrow{\textrm{Vect}} := \mathrm{Toep}(q_{r+1}) \cdot \overrightarrow{\textrm{Vect}}$.
- Output $\overrightarrow{\textrm{Vect}}$, the coefficient vector of the characteristic polynomial of $A$.

The algorithm thus calculates not only the coefficients of the characteristic polynomial of $A$, but in each loop iteration also those of the submatrix $A_{r+1}$.
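As a concrete, division-free reference implementation, the steps above can be sketched in Python as follows (function and variable names are our own; only the ring operations $+$, $-$, $\cdot$ are used, so the code works verbatim for integer matrices):

```python
def berkowitz(A):
    """Samuelson-Berkowitz algorithm: returns the coefficients
    [1, c1, ..., cn] of the characteristic polynomial det(λI - A),
    using additions, subtractions and multiplications only."""
    n = len(A)
    vect = [1, -A[0][0]]                       # coefficient vector of p_1
    for r in range(1, n):                      # extend A_r to A_{r+1}
        R = A[r][:r]                           # row vector R_r
        S = [A[i][r] for i in range(r)]        # column vector S_r
        # moments R_r A_r^{k-1} S_r for k = 1..r via iterated products A_r^k S_r
        w, m = S[:], []
        for _ in range(r):
            m.append(sum(R[i] * w[i] for i in range(r)))
            w = [sum(A[i][j] * w[j] for j in range(r)) for i in range(r)]
        # coefficients of q_{r+1}(λ) = λ^{r+1} - a_{r+1,r+1} λ^r - Σ m_k λ^{r-k}
        q = [1, -A[r][r]] + [-mk for mk in m]
        # vect := Toep(q_{r+1}) · vect (lower triangular Toeplitz product)
        vect = [sum(q[i - j] * vect[j] for j in range(min(i, len(vect) - 1) + 1))
                for i in range(len(q))]
    return vect

A = [[5, 5, -3, -7], [2, 1, 9, 6], [4, 2, -6, -5], [5, -8, -9, 2]]
print(berkowitz(A))   # [1, -2, -9, -374, -867]
```

Since $p_n(0) = \det(-A) = (-1)^n \det(A)$, the determinant is recovered from the last coefficient as `(-1) ** n * berkowitz(A)[-1]`.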
History

The basic idea of the method was first described and published by Paul A. Samuelson in 1942. The algorithm in the form presented above and in use today goes back to Berkowitz (parallel version) and Abdeljaoued (formulation as a serial procedure), which is why the name Samuelson-Berkowitz-Abdeljaoued algorithm (SBA algorithm) is sometimes found in the literature.
Correctness of the algorithm
Since only finite loops occur in the method formulated above, it is clear that the algorithm terminates. Partial correctness follows from Samuelson's formula and the algebraic top-level formulation in matrix-vector form derived from it (see above). More precisely, correctness rests on the following loop invariant: at the end of the $r$-th loop pass, the vector $\overrightarrow{\textrm{Vect}}$ contains the coefficients of the characteristic polynomial of $A_{r+1}$ (formulated as a postcondition).
Effort, efficiency and parallelization
It can be shown that the running time (time complexity) of the algorithm is of order $\mathcal{O}(n^4)$. A more precise bound is given by the number of arithmetic operations, $\tfrac{1}{2}n^4 - n^3 + \tfrac{5}{2}n^2$. When implementing the method, one can also take advantage of the fact that there are efficient methods for multiplying Toeplitz matrices. Moreover, the algorithm parallelizes very well; details can be found in the literature cited below.
Numerical example
We consider the matrix

$$A = \left[\begin{array}{rrrr} 5 & 5 & -3 & -7 \\ 2 & 1 & 9 & 6 \\ 4 & 2 & -6 & -5 \\ 5 & -8 & -9 & 2 \end{array}\right]$$

- We start the recursion with the characteristic polynomial of the matrix $A_1 = [5]$, for which $p_1 = \lambda - 5$ holds, i.e.
$$\overrightarrow{\textrm{Vect}} = \left[\begin{array}{r} 1 \\ -5 \end{array}\right]$$
- We now calculate $p_2$. For this we first need the coefficients of $q_2$:
  - $-R_1 S_1 = -2 \cdot 5 = -10$
  - So
$$\begin{aligned} q_2 &= \lambda^2 - a_{22}\lambda - R_1 S_1 \\ &= \lambda^2 - 1\lambda - 10 \end{aligned}$$
  - From this we obtain the Toeplitz matrix
$$\mathrm{Toep}(q_2) = \left[\begin{array}{rr} 1 & 0 \\ -1 & 1 \\ -10 & -1 \end{array}\right]$$
  - and thus
$$\mathrm{Toep}(q_2) \cdot \overrightarrow{p_1} = \left[\begin{array}{rr} 1 & 0 \\ -1 & 1 \\ -10 & -1 \end{array}\right] \cdot \left[\begin{array}{r} 1 \\ -5 \end{array}\right] = \left[\begin{array}{r} 1 \\ -6 \\ -5 \end{array}\right] = \overrightarrow{\textrm{Vect}}$$
  - So the characteristic polynomial of $A_2$ is $p_2 = \lambda^2 - 6\lambda - 5$.
- Next we calculate $p_3$. We first determine the coefficients of $q_3$:
  - $-R_2 S_2 = -\left[4 \;\; 2\right] \cdot \left[\begin{array}{r} -3 \\ 9 \end{array}\right] = -6$
  - $-R_2 A_2 S_2 = -\left[4 \;\; 2\right] \cdot \left[\begin{array}{rr} 5 & 5 \\ 2 & 1 \end{array}\right] \cdot \left[\begin{array}{r} -3 \\ 9 \end{array}\right] = -126$
  - So
$$\begin{aligned} q_3(\lambda) &= \lambda^3 - a_{3,3}\lambda^2 - (R_2 S_2)\,\lambda - (R_2 A_2 S_2) \\ &= \lambda^3 + 6\lambda^2 - 6\lambda - 126 \end{aligned}$$
  - and
$$\mathrm{Toep}(q_3) = \left[\begin{array}{rrr} 1 & 0 & 0 \\ 6 & 1 & 0 \\ -6 & 6 & 1 \\ -126 & -6 & 6 \end{array}\right]$$
  - With that we get
$$\mathrm{Toep}(q_3) \cdot \overrightarrow{p_2} = \left[\begin{array}{rrr} 1 & 0 & 0 \\ 6 & 1 & 0 \\ -6 & 6 & 1 \\ -126 & -6 & 6 \end{array}\right] \cdot \left[\begin{array}{r} 1 \\ -6 \\ -5 \end{array}\right] = \left[\begin{array}{r} 1 \\ 0 \\ -47 \\ -120 \end{array}\right] = \overrightarrow{\textrm{Vect}}$$
  - The characteristic polynomial of $A_3$ is therefore $p_3 = \lambda^3 - 47\lambda - 120$.
- Finally we calculate $p_4$. We first determine the coefficients of $q_4$:
  - $-R_3 S_3 = -\left[5 \;\; -8 \;\; -9\right] \cdot \left[\begin{array}{r} -7 \\ 6 \\ -5 \end{array}\right] = 38$
  - $-R_3 A_3 S_3 = -\left[5 \;\; -8 \;\; -9\right] \cdot \left[\begin{array}{rrr} 5 & 5 & -3 \\ 2 & 1 & 9 \\ 4 & 2 & -6 \end{array}\right] \cdot \left[\begin{array}{r} -7 \\ 6 \\ -5 \end{array}\right] = -348$
  - $-R_3 A_3^2 S_3 = -\left[5 \;\; -8 \;\; -9\right] \cdot \left[\begin{array}{rrr} 5 & 5 & -3 \\ 2 & 1 & 9 \\ 4 & 2 & -6 \end{array}\right]^2 \cdot \left[\begin{array}{r} -7 \\ 6 \\ -5 \end{array}\right] = 679$
  - So
$$\begin{aligned} q_4(\lambda) &= \lambda^4 - a_{4,4}\lambda^3 - (R_3 S_3)\,\lambda^2 - (R_3 A_3 S_3)\,\lambda - (R_3 A_3^2 S_3) \\ &= \lambda^4 - 2\lambda^3 + 38\lambda^2 - 348\lambda + 679 \end{aligned}$$
  - and
$$\mathrm{Toep}(q_4) = \left[\begin{array}{rrrr} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 38 & -2 & 1 & 0 \\ -348 & 38 & -2 & 1 \\ 679 & -348 & 38 & -2 \end{array}\right]$$
  - The final matrix-vector multiplication now provides the coefficients of the characteristic polynomial sought for the entire matrix $A = A_4$:
$$\mathrm{Toep}(q_4) \cdot \overrightarrow{p_3} = \left[\begin{array}{rrrr} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 38 & -2 & 1 & 0 \\ -348 & 38 & -2 & 1 \\ 679 & -348 & 38 & -2 \end{array}\right] \cdot \left[\begin{array}{r} 1 \\ 0 \\ -47 \\ -120 \end{array}\right] = \left[\begin{array}{r} 1 \\ -2 \\ -9 \\ -374 \\ -867 \end{array}\right] = \overrightarrow{\textrm{Vect}}$$
  - From this one reads off the desired end result:
$$p_4 = \lambda^4 - 2\lambda^3 - 9\lambda^2 - 374\lambda - 867$$
  - In particular, one obtains for the determinant of $A$:
$$p_4(0) = \det(-A) = (-1)^4 \cdot \det(A) = -867 \;\Rightarrow\; \det(A) = -867$$
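The end result can also be cross-checked independently of the method, e.g. by comparing $p_4(\lambda)$ with $\det(\lambda I - A)$ at a few sample points, where the determinant is computed by ordinary Gaussian elimination over the rationals. This is a check we add for illustration (helper names are ours), not part of the original example:

```python
from fractions import Fraction

def det(M):
    """Determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)          # singular: zero column below the diagonal
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

A = [[5, 5, -3, -7], [2, 1, 9, 6], [4, 2, -6, -5], [5, -8, -9, 2]]
p4 = [1, -2, -9, -374, -867]            # coefficients found above
for lam in (0, 1, 2, 3):                # p4(λ) should equal det(λI - A)
    horner = 0
    for c in p4:
        horner = horner * lam + c       # evaluate p4 at λ by Horner's rule
    B = [[lam * (i == j) - A[i][j] for j in range(4)] for i in range(4)]
    assert det(B) == horner
```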
Literature

- J. Abdeljaoued: The Berkowitz algorithm, Maple and computing the characteristic polynomial in an arbitrary commutative ring, MapleTech Vol. 4, No. 3, pp. 21-32, Birkhäuser, Boston Basel Berlin, 1997
- Stuart J. Berkowitz: On computing the determinant in small parallel time using a small number of processors, Information Processing Letters, 18, pp. 147-150, 1984, doi:10.1016/0020-0190(84)90018-8
- G. Nakos, Robert M. Williams: A Fast Computation of the Characteristic Polynomial, Mathematica in Education and Research, Vol. 9, No. 1, 2000
- Paul A. Samuelson: A method for determining explicitly the characteristic equation, Annals of Mathematical Statistics, 13, pp. 424-429, 1942, doi:10.1214/aoms/1177731540
- Günter Rote: Division-free algorithms for the determinant and the Pfaffian: algebraic and combinatorial approaches, in: Computational Discrete Mathematics, ed. Helmut Alt, Lecture Notes in Computer Science 2122, Springer-Verlag, 2001, pp. 119-135
- Michael Soltys: Berkowitz's Algorithm and Clow Sequences, The Electronic Journal of Linear Algebra (ELA), ISSN 1081-3810, Volume 9, pp. 42-54, April 2002
References

1. Michael Soltys: Berkowitz's Algorithm and Clow Sequences, The Electronic Journal of Linear Algebra (ELA), ISSN 1081-3810, Volume 9, pp. 42-54, April 2002
2. J. Abdeljaoued: The Berkowitz algorithm, Maple and computing the characteristic polynomial in an arbitrary commutative ring, MapleTech Vol. 4, No. 3, pp. 21-32, Birkhäuser, Boston Basel Berlin, 1997
3. Stuart J. Berkowitz: On computing the determinant in small parallel time using a small number of processors, Information Processing Letters, 18, pp. 147-150, 1984, doi:10.1016/0020-0190(84)90018-8
4. G. Nakos, Robert M. Williams: A Fast Computation of the Characteristic Polynomial, Mathematica in Education and Research, Vol. 9, No. 1, 2000
5. Paul A. Samuelson: A method for determining explicitly the characteristic equation, Annals of Mathematical Statistics, 13, pp. 424-429, 1942, doi:10.1214/aoms/1177731540