# Bordered Hessian matrix

The bordered Hessian matrix (German: *geränderte Hesse-Matrix*) is used to classify stationary points in multidimensional optimization problems with constraints. It is related to the ordinary Hessian matrix. In contrast to the Hessian matrix, which is examined for positive or negative definiteness, for the bordered Hessian matrix the signs of certain determinants are decisive.

The decisive factor is the sequence of signs of the leading principal minors, where only those leading principal minors of order $k$ with $k > 2m$ are examined ($m$ being the number of constraints). For example, if you examine a function of two variables with one constraint, then $k > 2 \cdot 1$, i.e. $k > 2$, so only the signs from the 3rd leading principal minor onward have to be considered (see also the following example).
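This selection rule can be sketched in a few lines of code. The matrix below is a hypothetical $3 \times 3$ bordered Hessian for $n = 2$ variables and $m = 1$ constraint; its entries are illustrative, not taken from any particular function:

```python
# Leading principal minors of a matrix are the determinants of its
# top-left k-by-k blocks. Only those with k > 2*m are examined
# (m = number of constraints).

def det(mat):
    """Determinant via Laplace expansion along the first row (fine for tiny matrices)."""
    n = len(mat)
    if n == 1:
        return mat[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in mat[1:]]
        total += (-1) ** j * mat[0][j] * det(minor)
    return total

def examined_minors(H, m):
    """Return {k: leading principal minor of order k} for all k > 2*m."""
    n = len(H)
    return {k: det([row[:k] for row in H[:k]]) for k in range(2 * m + 1, n + 1)}

# Hypothetical bordered Hessian with n = 2 variables, m = 1 constraint:
H = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
print(examined_minors(H, m=1))  # only k = 3 qualifies, since k > 2
```

For $n = 2$, $m = 1$ only the full $3 \times 3$ determinant remains to be checked, which is why the two-dimensional criterion below reduces to a single sign test.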

Let $U \subset \mathbb{R}^n$ be open and let the function $f \colon U \rightarrow \mathbb{R}$ be twice continuously differentiable. Suppose $f$ has a local extremum at $a \in U$ under the constraints $F = (F_1, \ldots, F_m) \colon U \rightarrow \mathbb{R}^m$, $F(x) = 0$, with $m < n$. Let now

$$L(\lambda_1, \ldots, \lambda_m, x) := f(x) - \sum_{i=1}^{m} \lambda_i F_i(x)$$

be the Lagrange function, where $x$ is short for $x_1, \ldots, x_n$. The bordered Hessian matrix is then understood to be the $(n+m) \times (n+m)$ matrix

$$\overline{H}(\bar{\lambda}, a) := \begin{pmatrix}
\frac{\partial^2 L}{\partial \lambda_1^2} & \cdots & \frac{\partial^2 L}{\partial \lambda_1 \partial \lambda_m} & \frac{\partial^2 L}{\partial \lambda_1 \partial x_1} & \cdots & \frac{\partial^2 L}{\partial \lambda_1 \partial x_n} \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
\frac{\partial^2 L}{\partial \lambda_m \partial \lambda_1} & \cdots & \frac{\partial^2 L}{\partial \lambda_m^2} & \frac{\partial^2 L}{\partial \lambda_m \partial x_1} & \cdots & \frac{\partial^2 L}{\partial \lambda_m \partial x_n} \\
\frac{\partial^2 L}{\partial x_1 \partial \lambda_1} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial \lambda_m} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 L}{\partial x_n \partial \lambda_1} & \cdots & \frac{\partial^2 L}{\partial x_n \partial \lambda_m} & \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix} (\bar{\lambda}, a)$$

which, since $L$ is linear in the $\lambda_i$ and $\partial L / \partial x_j$ contains $-\lambda_i \, \partial F_i / \partial x_j$, equals

$$\overline{H}(\bar{\lambda}, a) = \begin{pmatrix}
0 & \cdots & 0 & -\frac{\partial F_1}{\partial x_1} & \cdots & -\frac{\partial F_1}{\partial x_n} \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & -\frac{\partial F_m}{\partial x_1} & \cdots & -\frac{\partial F_m}{\partial x_n} \\
-\frac{\partial F_1}{\partial x_1} & \cdots & -\frac{\partial F_m}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
-\frac{\partial F_1}{\partial x_n} & \cdots & -\frac{\partial F_m}{\partial x_n} & \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix} (\bar{\lambda}, a)$$

where $\bar{\lambda} = (\bar{\lambda}_1, \ldots, \bar{\lambda}_m)$ are the associated solutions for the auxiliary variables (the Lagrange multipliers).
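The definition above can be checked numerically. The following sketch assumes a hypothetical example, $f(x_1, x_2) = x_1 x_2$ with the single constraint $F_1(x_1, x_2) = x_1 + x_2 - 2 = 0$ (so $n = 2$, $m = 1$; the stationary point is $a = (1, 1)$ with $\bar{\lambda}_1 = 1$), and builds the $(n+m) \times (n+m)$ matrix of second partials of $L$ by central differences:

```python
# L(lam, x1, x2) = f(x) - lam * F1(x) for the hypothetical example
# f(x1, x2) = x1*x2, F1(x1, x2) = x1 + x2 - 2.
def L(v):
    lam, x1, x2 = v
    return x1 * x2 - lam * (x1 + x2 - 2)

def bordered_hessian(L, v, h=1e-4):
    """(m+n) x (m+n) matrix of second partials of L at v, via central differences."""
    n = len(v)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shift(di, dj):
                w = list(v)
                w[i] += di
                w[j] += dj
                return L(w)
            # Standard central-difference formula for d^2 L / dv_i dv_j.
            H[i][j] = (shift(h, h) - shift(h, -h) - shift(-h, h) + shift(-h, -h)) / (4 * h * h)
    return H

# Evaluate at (lambda_bar, a) = (1, 1, 1).
H = bordered_hessian(L, [1.0, 1.0, 1.0])
print([[round(e, 4) for e in row] for row in H])
# close to [[0, -1, -1], [-1, 0, 1], [-1, 1, 0]]:
# zero block, border -dF1/dx_j, and the x-Hessian of L in the lower-right block
```

The printed matrix reproduces the block structure of the second formula above: a zero $m \times m$ corner, borders $-\partial F_i / \partial x_j$, and the Hessian of $L$ in the $x$ variables.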

## Form (two-dimensional case)

For a two-dimensional function with one constraint, the bordered Hessian matrix has the following form.

Let $L(x_1, x_2) = f(x_1, x_2) + \lambda g(x_1, x_2)$ be the Lagrangian, where $f \colon \mathbb{R}^2 \rightarrow \mathbb{R}$, $(x_1, x_2) \mapsto f(x_1, x_2)$ is any two-dimensional function and $g(x_1, x_2) = 0$ is the constraint under which the optimization is performed.

$$\bar{H}(x) = \begin{pmatrix}
0 & g_{x_1} & g_{x_2} \\
g_{x_1} & L_{x_1 x_1} & L_{x_1 x_2} \\
g_{x_2} & L_{x_2 x_1} & L_{x_2 x_2}
\end{pmatrix} = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \\[1.5ex]
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} \\[1.5ex]
\frac{\partial g}{\partial x_2} & \frac{\partial^2 L}{\partial x_2 \partial x_1} & \frac{\partial^2 L}{\partial x_2^2}
\end{pmatrix}$$
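Expanding this determinant by cofactors along the first row (and using $L_{x_1 x_2} = L_{x_2 x_1}$) gives a closed form that is often convenient in hand calculations:

$$\det \bar{H} = 2\,\frac{\partial g}{\partial x_1}\,\frac{\partial g}{\partial x_2}\,\frac{\partial^2 L}{\partial x_1 \partial x_2} - \left(\frac{\partial g}{\partial x_1}\right)^{2} \frac{\partial^2 L}{\partial x_2^2} - \left(\frac{\partial g}{\partial x_2}\right)^{2} \frac{\partial^2 L}{\partial x_1^2}$$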

The $0$ at the top left of the matrix arises because $\bar{H}_{11} = \frac{\partial^2 L}{\partial \lambda^2} = 0$: the Lagrangian is linear in $\lambda$, so its second derivative with respect to $\lambda$ vanishes.

A stationary point $x_0$ of $f$ under the constraint $g$ is then a

• local maximum if $\det \bar{H}(x_0) > 0$
• local minimum if $\det \bar{H}(x_0) < 0$
• undecidable by this test if $\det \bar{H}(x_0) = 0$
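A worked check of these rules, using a hypothetical example: maximize $f(x_1, x_2) = x_1 x_2$ subject to $g(x_1, x_2) = x_1 + x_2 - 2 = 0$. With the convention $L = f + \lambda g$ of this section, the stationary point is $(1, 1)$ with $\lambda = -1$, and the second derivatives are constant: $L_{x_1 x_1} = L_{x_2 x_2} = 0$, $L_{x_1 x_2} = 1$, $g_{x_1} = g_{x_2} = 1$.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows (cofactor expansion)."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def classify(gx1, gx2, Lx1x1, Lx1x2, Lx2x2):
    """Classify a stationary point by the sign of the bordered Hessian determinant."""
    H = [[0,   gx1,   gx2],
         [gx1, Lx1x1, Lx1x2],
         [gx2, Lx1x2, Lx2x2]]
    d = det3(H)
    if d > 0:
        return d, "local maximum"
    if d < 0:
        return d, "local minimum"
    return d, "undecidable"

# f = x1*x2 on x1 + x2 = 2 at the stationary point (1, 1):
d, kind = classify(gx1=1, gx2=1, Lx1x1=0, Lx1x2=1, Lx2x2=0)
print(d, kind)  # det = 2 > 0, so (1, 1) is a local maximum
```

This agrees with substituting the constraint directly: on $x_1 + x_2 = 2$ the objective becomes $x_1(2 - x_1)$, which indeed has a maximum at $x_1 = 1$.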