# Root system

Root systems are used in mathematics as an aid to classifying the finite reflection groups and the finite-dimensional semisimple complex Lie algebras.

## Definitions

A subset $R$ of a vector space $V$ over a field $K$ of characteristic 0 is called a root system if it fulfills the following conditions:

1. $R$ is finite.
2. $R$ is a linear generating system of $V$.
3. For each $\alpha \in R$ there is a linear form $\alpha^\vee \in V^*$ with the following properties:
• For $\beta \in R$, $\alpha^\vee(\beta) \in \mathbb{Z}$.
• $\alpha^\vee(\alpha) = 2$.
• The linear map $s_\alpha \colon V \to V$ with $s_\alpha(x) = x - \alpha^\vee(x) \cdot \alpha$ maps $R$ onto $R$.

The elements of a root system are called roots.

The root system is called reduced if, in addition, the following holds:

4. If two roots $\alpha, \beta$ are linearly dependent, then $\alpha = \pm\beta$.

One can show that the linear form $\alpha^\vee$ from condition 3 is uniquely determined for each $\alpha \in R$. It is called the coroot of $\alpha$; the name is justified by the fact that the coroots form a root system in the dual space $V^*$. The map $s_\alpha$ is a reflection and hence likewise uniquely determined.

If $\alpha$ and $\beta$ are two roots with $\alpha^\vee(\beta) = 0$, then one can show that $\beta^\vee(\alpha) = 0$ also holds, and one calls $\alpha$ and $\beta$ orthogonal to each other. If the root system can be written as a union $R = R_1 \cup R_2$ of two non-empty subsets in such a way that every root in $R_1$ is orthogonal to every root in $R_2$, then the root system is called reducible. In this case $V$ can also be decomposed into a direct sum $V_1 \oplus V_2$ such that $R_1 \subseteq V_1$ and $R_2 \subseteq V_2$ are root systems. If, on the other hand, a non-empty root system is not reducible, it is called irreducible.

The dimension of the vector space $V$ is called the rank of the root system. A subset $\Pi$ of a root system $R$ is called a base if $\Pi$ is a basis of $V$ and every element of $R$ can be represented as an integral linear combination of elements of $\Pi$ with exclusively non-negative or exclusively non-positive coefficients.

Two root systems $R \subset V$ and $R' \subset V'$ are isomorphic if and only if there is a vector space isomorphism $\varphi \colon V \to V'$ with $\varphi(R) = R'$.

## Scalar product

One can define a scalar product on $V$ with respect to which the maps $s_\alpha$ are reflections. In the reducible case, this can be assembled from scalar products on the components. If $R$ is irreducible, however, this scalar product is unique up to a scalar factor. One can normalize it so that the shortest roots have length 1.

In principle, one can therefore assume that a root system "lives" in some $K^n$ (mostly $\mathbb{R}^n$) with its standard scalar product. The integrality of $\alpha^\vee(\beta)$ and $\beta^\vee(\alpha)$ then imposes a considerable restriction on the possible angles between two roots $\alpha$ and $\beta$: from

$$\langle \alpha, \beta \rangle = \sqrt{\langle \alpha, \alpha \rangle \langle \beta, \beta \rangle} \cdot \cos \measuredangle(\alpha, \beta)$$

it follows that $4 \cos^2 \measuredangle(\alpha, \beta) = \alpha^\vee(\beta)\,\beta^\vee(\alpha)$ must be an integer. This is the case only for the angles 0°, 30°, 45°, 60°, 90°, 120°, 135°, 150°, 180°. Between two different roots of a base, only the angles 90°, 120°, 135°, 150° are possible. All these angles actually occur, cf. the examples of rank 2. It also follows that only a few values are possible for the ratio of the lengths of two roots in the same irreducible component.
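The allowed angles can be recovered by direct enumeration. The following minimal Python sketch (an illustration, not part of the classification argument) checks which whole-degree angles make $4\cos^2\measuredangle(\alpha,\beta)$ an integer:

```python
import math

# For roots a, b with angle t, a^vee(b) * b^vee(a) = 4*cos(t)^2 must be a
# non-negative integer. Enumerate the whole-degree angles for which
# 4*cos^2(t) lies in {0, 1, 2, 3, 4}.
allowed = []
for deg in range(0, 181):
    value = 4 * math.cos(math.radians(deg)) ** 2
    if abs(value - round(value)) < 1e-9:
        allowed.append(deg)

print(allowed)  # [0, 30, 45, 60, 90, 120, 135, 150, 180]
```

Only the nine angles listed in the text survive the integrality test.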

## Weyl group

The subgroup of the automorphism group of $V$ generated by the set of reflections $\{ s_\alpha \mid \alpha \in R \}$ is called the Weyl group (after Hermann Weyl) and is usually denoted by $W$. With respect to the scalar product defined above, all elements of the Weyl group are orthogonal; the $s_\alpha$ are reflections.

The group $W$ operates faithfully on $R$ and is therefore always finite. Furthermore, $W$ operates transitively on the set of bases of $R$.
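As a small illustration of this finiteness, one can generate the Weyl group of the rank-2 root system $A_2$ from its two simple reflections. The sketch below (an assumed concrete model: simple roots at an angle of 120° in $\mathbb{R}^2$) closes the set of reflection matrices under multiplication:

```python
import itertools
import math

def reflection(alpha):
    """Matrix of the reflection s_alpha in the hyperplane orthogonal to alpha."""
    ax, ay = alpha
    n = ax * ax + ay * ay
    # s_alpha(x) = x - 2 <x, alpha> / <alpha, alpha> * alpha
    return ((1 - 2 * ax * ax / n, -2 * ax * ay / n),
            (-2 * ax * ay / n, 1 - 2 * ay * ay / n))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def key(m):
    # Round entries so floating-point duplicates collapse to one group element.
    return tuple(round(x, 9) for row in m for x in row)

# Simple roots of A2, meeting at an angle of 120 degrees.
simple = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2)]
gens = [reflection(a) for a in simple]
group = {key(g): g for g in gens}
frontier = list(gens)
while frontier:
    new = []
    for g, h in itertools.product(frontier, gens):
        p = mul(g, h)
        if key(p) not in group:
            group[key(p)] = p
            new.append(p)
    frontier = new

print(len(group))  # 6: the Weyl group of A2 is the symmetric group S3
```

The closure terminates after a few rounds, confirming that the group generated by the two reflections has exactly 6 elements.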

In the case $K = \mathbb{R}$, the reflection hyperplanes of the $s_\alpha$ divide the space into finitely many open convex subsets, the so-called Weyl chambers. On these, too, $W$ operates transitively.

## Positive roots, simple roots

After choosing a Weyl chamber $\mathfrak{a}^+$, the set of positive roots can be defined by

$$R^+ := \left\{ \alpha \in R : \alpha^\vee(x) > 0\ \forall x \in \mathfrak{a}^+ \right\}.$$

This defines an ordering on $R$ through

$$\alpha > \beta \Longleftrightarrow \alpha - \beta \in \mathbb{Z}_{\geq 0} R^+.$$

The positive and negative roots are then the roots with $\alpha > 0$ and $\alpha < 0$, respectively. (Note that this definition depends on the choice of the Weyl chamber; each Weyl chamber yields an ordering.)

A simple root is a positive root that cannot be written as a sum of several positive roots.

The simple roots form a basis of $V$. Every positive (negative) root can be decomposed as a linear combination of simple roots with non-negative (non-positive) coefficients.

## Examples

The empty set is the only root system of rank 0 and is also the only root system that is neither reducible nor irreducible.

Up to isomorphism, there is only one reduced root system of rank 1. It consists of two roots different from 0, $\{\alpha, -\alpha\}$, and is denoted by $A_1$. If one also considers non-reduced root systems, the only further example of rank 1 is $\{-2\alpha, -\alpha, \alpha, 2\alpha\}$.

Up to isomorphism, every reduced root system of rank 2 has one of the following forms, where $(\alpha, \beta)$ is a basis of the root system.

(Figures: the root systems $A_1 \times A_1$, $A_2$, $B_2$ and $G_2$.)

In the first example, $A_1 \times A_1$, the ratio of the lengths of $\alpha$ and $\beta$ is arbitrary; in the other cases it is uniquely determined by the geometric conditions.
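These geometric constraints can be checked concretely. The sketch below builds standard coordinate models of $B_2$ and $G_2$ in $\mathbb{R}^2$ (one possible choice of coordinates, used only for illustration) and computes root counts and long-to-short length ratios:

```python
import math

# Long-to-short length ratio of a finite set of roots in R^2.
def length_ratio(roots):
    lengths = sorted({round(math.hypot(x, y), 9) for x, y in roots})
    return lengths[-1] / lengths[0]

# B2: four short roots (+-1, 0), (0, +-1) and four long roots (+-1, +-1).
B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

# G2: six short unit roots at multiples of 60 degrees, six long roots of
# length sqrt(3) rotated by 30 degrees.
G2 = ([(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
       for k in range(6)] +
      [(math.sqrt(3) * math.cos(k * math.pi / 3 + math.pi / 6),
        math.sqrt(3) * math.sin(k * math.pi / 3 + math.pi / 6))
       for k in range(6)])

print(len(B2), round(length_ratio(B2), 6))  # 8 1.414214
print(len(G2), round(length_ratio(G2), 6))  # 12 1.732051
```

The ratios $\sqrt{2}$ for $B_2$ and $\sqrt{3}$ for $G_2$ are exactly the values forced by the angle constraints above.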

## Classification

Up to isomorphism, all information about a reduced root system $R$ is contained in its Cartan matrix

$$C(R) = (\beta^\vee(\alpha))_{\alpha, \beta \in \Pi}.$$

It can also be represented in the form of a Dynkin diagram. To do this, one draws a point for each element of a base and connects the points $\alpha$ and $\beta$ by a number of lines determined by

$$\beta^\vee(\alpha)\,\alpha^\vee(\beta).$$

If there is more than one line, a relation symbol > or < is additionally placed between the two points, i.e. an "arrow" pointing toward the shorter root. The connected components of the Dynkin diagram correspond exactly to the irreducible components of the root system. The diagrams that can occur for an irreducible root system are exactly those of the types $A_n$, $B_n$, $C_n$, $D_n$, $E_6$, $E_7$, $E_8$, $F_4$ and $G_2$.

The index $n$ indicates the rank and thus the number of points in the diagram. From the Dynkin diagrams one can read off several identities between the cases of low rank, namely:

• $A_1 = B_1 = C_1$
• $B_2 = C_2$
• $A_3 = D_3$

That is why, for example, one lets the series $B$ begin only at $n = 2$ and the series $D$ only at $n = 4$, so that each root system forms a class of its own. The root systems belonging to the series $A_n$ to $D_n$ are also referred to as classical root systems, the remaining five as exceptional root systems. All the root systems mentioned also occur, for example, as root systems of semisimple complex Lie algebras.
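For a concrete instance, both the Cartan matrix and the edge count of the Dynkin diagram can be computed from a choice of simple roots. The sketch below uses the standard simple roots of $B_2$ (the signs and placement of the off-diagonal entries depend on the indexing convention chosen):

```python
# Compute the Cartan matrix C = (beta^vee(alpha)) for a choice of simple
# roots, using beta^vee(alpha) = 2 <alpha, beta> / <beta, beta>.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cartan_entry(alpha, beta):
    # Exact integer arithmetic: all entries of a Cartan matrix are integers.
    return 2 * dot(alpha, beta) // dot(beta, beta)

# Standard simple roots of B2: a long root (1, -1) and a short root (0, 1).
simple = [(1, -1), (0, 1)]
C = [[cartan_entry(alpha, beta) for beta in simple] for alpha in simple]

print(C)                  # [[2, -2], [-1, 2]]
print(C[0][1] * C[1][0])  # 2 lines between the two points of the Dynkin diagram
```

The product of the two off-diagonal entries is 2, matching the double edge of the $B_2$ diagram.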

### Non-reduced root systems

For irreducible non-reduced root systems there are only a few possibilities: they can be thought of either as the union of a $B_n$ with a $C_n$ (with $n \geq 1$), or as a $B_n$ to which, for every short root, its double has been added.

## Other uses

### Lie algebras

Let $\mathfrak{g}$ be a finite-dimensional semisimple Lie algebra and $\mathfrak{a} \subset \mathfrak{g}$ a Cartan subalgebra. Then $\alpha \in \mathfrak{a}$ is called a root if

$$\mathfrak{g}_\alpha := \left\{ Y \in \mathfrak{g} : [X, Y] = \alpha^\vee(X)\, Y\ \forall X \in \mathfrak{a} \right\} \neq \{0\}.$$

Here $\alpha^\vee \in \mathfrak{a}^*$ denotes the linear map defined by means of the Killing form $B$ through

$$\alpha^\vee(X) = 2\,\frac{B(\alpha, X)}{B(\alpha, \alpha)} \quad \forall X \in \mathfrak{a}.$$

Let $R \subset \mathfrak{a}$ be the set of roots; then it can be shown that $(\mathfrak{a}, R)$ is a root system.

#### Properties

This root system has the following properties:

1. $\mathfrak{a}(\mathbb{R}) := \left\{ a \in \mathfrak{a} : \alpha^\vee(a) \in \mathbb{R}\ \forall \alpha \in R \right\}$ is a real form of $\mathfrak{a}$.
2. For $\alpha \in R$, $n\alpha \in R$ holds if and only if $n = \pm 1$.
3. For every $\alpha \in R$, $\dim(\mathfrak{g}_\alpha) = 1$.
4. For all $\alpha, \beta \in R$, $\mathfrak{g}_{\alpha+\beta} = \left[\mathfrak{g}_\alpha, \mathfrak{g}_\beta\right]$, in particular $\left[\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}\right] \subset \mathfrak{a}$.
5. $\mathfrak{g}_\alpha$, $\mathfrak{g}_{-\alpha}$ and $\left[\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}\right]$ span a Lie algebra isomorphic to the Lie algebra $sl(2, \mathbb{C})$.
6. For $\alpha \neq \pm\beta$, $B(\mathfrak{g}_\alpha, \mathfrak{g}_\beta) = 0$, i.e. the root spaces are orthogonal with respect to the Killing form. The restriction of the Killing form to $\mathfrak{a}$ and to $\mathfrak{g}_\alpha \oplus \mathfrak{g}_{-\alpha}$ is non-degenerate. The restriction of the Killing form to $\mathfrak{a}(\mathbb{R})$ is real and positive definite.

Finite-dimensional semi-simple complex Lie algebras are classified by their root systems, i.e. by their Dynkin diagrams.

#### Example

Let $\mathfrak{g} = sl(n, \mathbb{R})$. The Killing form is $B(X, Y) = 2n \operatorname{Tr}(XY)$; a Cartan subalgebra $\mathfrak{a}$ is the algebra of diagonal matrices with trace 0, i.e. $\mathfrak{a} = \left\{ \operatorname{diag}(\lambda_1, \ldots, \lambda_n) : \lambda_1 + \ldots + \lambda_n = 0 \right\}$. We denote by $e_i$ the diagonal matrix whose $i$-th diagonal entry is $\lambda_i = 1$ and whose other diagonal entries equal 0.

The root system of $\mathfrak{a}$ is $R = \left\{ e_i - e_j : 1 \leq i \neq j \leq n \right\}$. The form $\alpha^\vee \in \mathfrak{a}^*$ dual to $\alpha = e_i - e_j$ is

$$\alpha^\vee(\operatorname{diag}(\lambda_1, \ldots, \lambda_n)) = \lambda_i - \lambda_j.$$

As positive Weyl chamber one can choose

$$\mathfrak{a}^+ = \left\{ \operatorname{diag}(\lambda_1, \ldots, \lambda_n) : \lambda_1 > \ldots > \lambda_n,\ \lambda_1 + \ldots + \lambda_n = 0 \right\}.$$

The positive roots are then

$$R^+ = \left\{ e_i - e_j : 1 \leq i < j \leq n \right\}.$$

The simple roots are

$$\left\{ e_i - e_{i+1} : 1 \leq i \leq n-1 \right\}.$$
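The root space decomposition in this example can be verified numerically. The sketch below checks, for $sl(3, \mathbb{R})$ and a randomly chosen trace-zero diagonal matrix $X$, that each elementary matrix $E_{ij}$ ($i \neq j$) satisfies $[X, E_{ij}] = (\lambda_i - \lambda_j) E_{ij}$, i.e. spans the root space for the root $e_i - e_j$:

```python
import numpy as np

# Check the root space relation [X, E_ij] = (l_i - l_j) * E_ij in sl(3, R)
# for every off-diagonal elementary matrix E_ij.
n = 3
rng = np.random.default_rng(0)
lam = rng.standard_normal(n)
lam -= lam.mean()          # enforce trace 0, so X lies in sl(3, R)
X = np.diag(lam)

for i in range(n):
    for j in range(n):
        if i == j:
            continue
        E = np.zeros((n, n))
        E[i, j] = 1.0
        bracket = X @ E - E @ X
        assert np.allclose(bracket, (lam[i] - lam[j]) * E)

print("all root space relations verified")
```

Since $[X, E_{ij}]_{kl} = (\lambda_k - \lambda_l) (E_{ij})_{kl}$, the assertion holds exactly; numerically it holds up to floating-point tolerance.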

### Reflection groups

A Coxeter group is abstractly defined as a group with a presentation

$$\left\langle r_1, r_2, \ldots, r_n \mid (r_i r_j)^{m_{ij}} = 1 \right\rangle$$

with $m_{ii} = 1$ and $m_{ij} \geq 2$ for $i \neq j$, as well as the convention $m_{ij} = \infty$ if $r_i r_j$ has infinite order, i.e. there is no relation of the form $(r_i r_j)^m = 1$.
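The relation $(r_i r_j)^{m_{ij}} = 1$ has a concrete geometric model: two reflections in lines meeting at an angle $\pi/m$ compose to a rotation by $2\pi/m$, whose $m$-th power is the identity. A minimal sketch (here with $m = 6$, the value occurring in the Weyl group of $G_2$):

```python
import math

def line_reflection(theta):
    """Matrix reflecting R^2 across the line through the origin at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return ((c, s), (s, -c))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

m = 6
r1 = line_reflection(0.0)
r2 = line_reflection(math.pi / m)

# (r1 * r2) is a rotation by 2*pi/m; its m-th power is the identity.
p = mul(r1, r2)
power = ((1.0, 0.0), (0.0, 1.0))
for _ in range(m):
    power = mul(power, p)

assert all(abs(power[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
print("(r1 r2)^%d = identity" % m)
```

The same check works for any finite $m_{ij} \geq 2$ by changing the angle between the two lines.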

Coxeter groups are an abstraction of the concept of a reflection group.

Each Coxeter group corresponds to an undirected Dynkin diagram. The points of the diagram correspond to the generators $r_1, r_2, \ldots, r_n$; the points corresponding to $r_i$ and $r_j$ are connected by $m_{ij}$ edges.

### Singularities

According to Wladimir Arnold , elementary catastrophes can be classified using Dynkin diagrams of the ADE type:

• $A_0$ - a non-singular point: $V = x$.
• $A_1$ - a local extremum, either a stable minimum or an unstable maximum: $V = \pm x^2 + ax$.
• $A_2$ - the fold
• $A_3$ - the cusp
• $A_4$ - the swallowtail
• $A_5$ - the butterfly
• $A_k$ - an infinite sequence of forms in one variable: $V = x^{k+1} + \cdots$
• $D_4^-$ - the elliptic umbilic catastrophe
• $D_4^+$ - the hyperbolic umbilic catastrophe
• $D_5$ - the parabolic umbilic catastrophe
• $D_k$ - an infinite sequence of further umbilic catastrophes
• $E_6$ - the umbilic catastrophe $V = x^3 + y^4 + axy^2 + bxy + cx + dy$
• ${\ displaystyle E_ {7}}$
• ${\ displaystyle E_ {8}}$

## Web links

Wikibooks: Full Proof of Classification  - Learning and Teaching Materials

## Literature

• Jean-Pierre Serre: Complex Semisimple Lie Algebras , Springer, Berlin, 2001.