Lyapunov exponent

The Lyapunov exponent of a dynamical system (named after Aleksandr Mikhailovich Lyapunov) describes the rate at which two (closely neighboring) points in phase space move away from or approach each other (depending on its sign). There is one Lyapunov exponent per dimension of the phase space; together they form the so-called Lyapunov spectrum. Often, however, only the largest Lyapunov exponent is considered, since it usually determines the behavior of the system as a whole.

If one considers general trajectories in phase space, the exponents provide a measure of the rate of separation from an original trajectory. For a time-continuous dynamical system, this relationship can be formally expressed in general as $|\delta Z(t)| \approx e^{\lambda t}\, |\delta Z_0|$, where $\delta Z(t)$ represents the linearized separation from the trajectory at time $t$ and $\delta Z_0$ the initial separation.
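This exponential separation can be illustrated numerically. The following is a minimal sketch in Python, assuming the Lorenz system with its standard parameters ($\sigma = 10$, $\rho = 28$, $\beta = 8/3$) as the test case; the step size, iteration counts, and initial conditions are illustrative choices. Note that this crude two-trajectory estimate only works as long as the separation remains small compared to the attractor; the Benettin method described below is preferred in practice.

```python
import numpy as np

# Lorenz system with standard parameters, a common chaotic test case.
def lorenz(z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, w = z
    return np.array([sigma * (y - x), x * (rho - w) - y, x * y - beta * w])

def rk4_step(f, z, dt):
    """One classical Runge-Kutta step."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 2000
d0 = 1e-8                                  # initial separation |delta Z_0|
z1 = np.array([1.0, 1.0, 1.0])
z2 = z1 + np.array([d0, 0.0, 0.0])         # perturbed copy of the trajectory

for _ in range(steps):
    z1 = rk4_step(lorenz, z1, dt)
    z2 = rk4_step(lorenz, z2, dt)

dT = np.linalg.norm(z2 - z1)
# Crude estimate of the largest exponent from the separation growth.
print("lambda_max approx:", np.log(dT / d0) / (steps * dt))
```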

Definition

Illustration of the Lyapunov exponent. The two starting points move away from each other exponentially.

In one dimension, the Lyapunov exponent of an iterated map $x_{n+1} = f(x_n)$ is defined as follows:

$$\lambda(x_0) = \lim_{N \to \infty} \lim_{\varepsilon \to 0} \frac{1}{N} \ln \left| \frac{f^N(x_0 + \varepsilon) - f^N(x_0)}{\varepsilon} \right| = \lim_{N \to \infty} \frac{1}{N} \ln \left| \frac{\mathrm{d} f^N(x_0)}{\mathrm{d} x_0} \right|.$$

With the help of the chain rule of differential calculus, the following is obtained for the differential quotient in this definition:

$$\frac{\mathrm{d} f^N(x_0)}{\mathrm{d} x_0} = \prod_{n=0}^{N-1} f'(x_n).$$

This means that the Lyapunov exponent can equivalently be defined as:

$$\lambda(x_0) = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \ln |f'(x_n)|.$$

The derivation of the exponent follows from the illustration above: after $N$ iterations, $\varepsilon\, e^{N \lambda(x_0)} = |f^N(x_0 + \varepsilon) - f^N(x_0)|$ holds. The definition is obtained by solving this expression for $\lambda$ and taking the limits $\varepsilon \to 0$ and $N \to \infty$.
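As a numerical illustration of the last definition, the following minimal Python sketch, assuming the logistic map $f(x) = r x (1 - x)$ as the iterated map, approximates the exponent by the chain-rule sum; for $r = 4$ the exact value is known analytically to be $\ln 2 \approx 0.693$.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100_000):
    """Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
    via the chain-rule sum (1/N) * sum(ln |f'(x_n)|)."""
    x = x0
    for _ in range(n_transient):               # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n_iter):
        s += np.log(abs(r * (1 - 2 * x)))      # ln |f'(x_n)|
        x = r * x * (1 - x)
    return s / n_iter

print(lyapunov_logistic(4.0))   # approximately ln 2 = 0.693 for r = 4
```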

Properties

  • If the largest Lyapunov exponent is positive, the system is usually divergent; exponential separation of neighboring trajectories is a common indicator of chaos.
  • If it is negative, this corresponds to a phase-space contraction, i.e. the system is dissipative and behaves in a stationary or periodically stable manner.
  • If it is zero, points that were originally close together in phase space no longer diverge exponentially, as in the case of a positive largest exponent, but at most polynomially.
  • If the sum of the Lyapunov exponents is zero, the system is conservative. In addition, the important peculiarity then applies that considering the respective process in reverse (inverse time evolution or inverse mapping sequence) also leads to an inverted spectrum of the exponents, i.e. a certain symmetry is preserved with respect to them as well: $\lambda_i \to -\lambda_{n+1-i}$, where $n$ is the total number of Lyapunov exponents of the system. This applies provided that the corresponding system is actually reversible.
  • The spectrum of the exponents can be degenerate. In this case, exponents occur with individual multiplicities, at least one of which is greater than 1.
  • The total number of Lyapunov exponents of a system, without regard to their respective multiplicities, corresponds to the number of degrees of freedom of the system under consideration.
  • The exponents are always metrically invariant.
  • For sufficiently long times or sufficiently many iterations, the exponents are independent of the choice of the starting point $x_0$ or the initial perturbation $\varepsilon$. This aspect plays a special role for the validity of exponents obtained from simulations.

Theorem of Oseledets

An important milestone in the theory of nonlinear dynamical systems was the theorem of Valery Oseledets, published in 1965, which was proven by Oseledets in the same year and, a year later and in a different context, by M. S. Raghunathan. The theorem, which actually consists of several statements, makes, among other things, important assertions about the existence of Lyapunov exponents for a large class of nonlinear dynamical systems. Until the theorem was published, this connection was only a matter of speculation, and the determination of the exponents was only possible for simple (iterated) maps.

With regard to the Lyapunov exponents, the following relationship from Oseledets' theorem plays a major role (differential-geometric version):

Let an ergodic dynamical system be given, defined on a Riemannian manifold with a specified metric. Furthermore, let $Y(t)$ be the evolution operator of the dynamical system in the corresponding tangent space, which in a discrete-time setting can usually be represented by a fundamental matrix.

Then, in a time-continuous representation, for almost every non-zero vector $v \in \mathbb{R}^n$, where $n$ is the dimension of the Riemannian manifold, the following limit exists:

$$\lambda(v) = \lim_{t \to \infty} \frac{1}{t} \ln \frac{\| Y(t)\, v \|}{\| v \|}.$$

Lyapunov vectors and the calculation of exponents

The scalars $\lambda$ obtainable from Oseledets' theorem correspond by definition to the Lyapunov exponents, but it is not immediately apparent how the spectrum of the (characteristic) Lyapunov exponents of a system can be determined from this statement, since the choice of the vectors $v$ to be evolved is not specified by it in more detail. In other parts of his theorem, Oseledets was able to show that, with respect to the system's evolution, the totality of evolvable vectors spans a nested subspace structure:

$$\mathbb{R}^n = V_1 \supset V_2 \supset \dots \supset V_m \supset \{0\},$$

where the Lyapunov exponents represent the growth or shrinkage rates (again depending on the sign) of the volumes of these subspaces. For what follows, it is important that these subspaces can be uniquely described by an orthogonal system.

Method by Benettin et al.

A tried and tested method of calculating the exponents on this basis was proposed early on by Benettin et al. They formulated an algorithm which, in a discrete-time setting, determines the exponents statistically by evolving so-called Gram-Schmidt vectors. In the discrete-time case, the fundamental matrix satisfies:

$$Y(t + \Delta t) = J(t)\, Y(t),$$

where $\Delta t$ is a discrete time interval, $t$ a discrete global time or iteration step, and $J(t)$ the corresponding Jacobian matrix. If one evolves an orthogonal system of vectors in this way, then, according to the premise of Benettin et al., it must by Oseledets' theorem contain information about the Lyapunov exponents, provided the time evolution is sufficiently long. Standing in the way, however, is the problematic fact that orthogonal systems evolved in this way lose their orthogonality quickly in almost every case once the step size is sufficiently large. This has not only purely numerical reasons in computer calculations, but also reasons rooted in Oseledets' theorem itself, since the subspaces converge to their respective embedding spaces at an exponential rate. As a result, the direction of the strongest change of subspace volume in the tangent space (determined by the largest Lyapunov exponent) becomes dominant. To prevent this, the respective fundamental matrix must be re-orthogonalized after a fixed or dynamically determined number of steps. The name Gram-Schmidt vectors derives precisely from the fact that a modified or iterative Gram-Schmidt method can be used for this purpose. There are, however, also other, numerically more stable methods such as explicit QR decomposition or Givens rotations. In the field of scientific computing, optimized iterative Gram-Schmidt methods have nevertheless become firmly established, since they can be parallelized particularly well and work with sufficient accuracy.

Applying this re-orthogonalization to $Y$ yields the QR decomposition $Y = QR$, with the matrix $Q$ containing the re-orthogonalized Gram-Schmidt vectors and the upper triangular matrix $R$ holding the local growth factors of the respective subspaces on its main diagonal. Benettin et al. now use $R$ to identify the set of so-called finite-time Lyapunov exponents $\lambda_i^{\mathrm{FT}}$ with respect to the time span considered up to the re-orthogonalization at time step $t$. These are calculated from the natural logarithm of the local growth factors:

$$\lambda_i^{\mathrm{FT}} = \frac{1}{\Delta t} \ln |R_{ii}|.$$
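A single step of this kind could look as follows in Python (a minimal sketch assuming numpy; the matrix and the time span are placeholder values, not taken from the original text):

```python
import numpy as np

# One re-orthogonalization step of the Benettin scheme (illustrative values).
Y = np.array([[2.0, 1.0],
              [1.0, 1.0]])                   # evolved fundamental matrix (placeholder)
dt = 1.0                                     # time span since the last orthogonalization
Q, R = np.linalg.qr(Y)                       # Q: new Gram-Schmidt vectors
lam_ft = np.log(np.abs(np.diag(R))) / dt     # finite-time Lyapunov exponents
print(lam_ft)
```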

Via the long-time average

$$\lambda_i = \lim_{m \to \infty} \frac{1}{m} \sum_{j=1}^{m} \lambda_i^{\mathrm{FT}}(j),$$

where $j$ indexes the successive re-orthogonalizations, one can now build a bridge to the “real” Lyapunov exponents of the system.

In summary, this means concretely for the algorithm (a code sketch follows the list):

  1. Initialize the system to be analyzed with the respective start parameters.
  2. Create an identity matrix of suitable size according to the number of degrees of freedom of the system. This forms the initial state of the Gram-Schmidt vectors.
  3. Choose a fixed rate of time steps that determines the frequency of the re-orthogonalizations, or use an adaptive method. Orthogonalizations that are too frequent increase the computation time (all QR methods, apart from special-case algorithms, have cubic runtime); time intervals that are too long, on the other hand, impair the numerical accuracy.
  4. Choose a sampling rate for capturing the finite-time Lyapunov exponents. This should always be an integer multiple of the orthogonalization rate.
  5. Iterate the system according to the respective time-evolution equations, both in phase space and in tangent space (fundamental matrix or Gram-Schmidt vectors). If necessary, linearize these equations (usually nonlinear differential equations of higher order). Re-orthogonalize after the specified number of time steps and sample the finite-time Lyapunov exponents.
  6. After reaching individual termination conditions, determine the actual Lyapunov exponents from the finite-time Lyapunov exponents. A common and recommended approach is to begin the actual calculation of the exponents only once the phase-space trajectory has converged to the ergodic attractor and the Gram-Schmidt vectors have reached a so-called statistically stationary state, about whose possible existence Oseledets' theorem also makes statements.
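The following is a minimal sketch of this procedure in Python, assuming the Hénon map with its standard parameters $a = 1.4$, $b = 0.3$ as a discrete-time test system; the iteration counts are illustrative choices, and re-orthogonalization is performed after every iteration for simplicity. For these parameter values the spectrum is known to be approximately $(0.42, -1.62)$.

```python
import numpy as np

def henon(v, a=1.4, b=0.3):
    """One iteration of the Henon map."""
    x, y = v
    return np.array([1.0 - a * x * x + y, b * x])

def henon_jacobian(v, a=1.4, b=0.3):
    """Jacobian matrix of the Henon map at the point v."""
    x, _ = v
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def lyapunov_spectrum(n_iter=100_000, n_transient=1_000):
    v = np.array([0.1, 0.1])
    for _ in range(n_transient):           # discard transient: reach the attractor (cf. step 6)
        v = henon(v)
    Q = np.eye(2)                          # step 2: initial Gram-Schmidt vectors
    sums = np.zeros(2)
    for _ in range(n_iter):
        Q = henon_jacobian(v) @ Q          # step 5: evolve the tangent space ...
        v = henon(v)                       # ... and the phase space
        Q, R = np.linalg.qr(Q)             # re-orthogonalization (here: every step)
        sums += np.log(np.abs(np.diag(R))) # accumulate finite-time exponents
    return sums / n_iter                   # average over all re-orthogonalizations

print(lyapunov_spectrum())  # approximately [0.42, -1.62]
```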

It should be emphasized that the method of Benettin et al. is only one among many, and other methods may be more suitable for particular problems and systems. Furthermore, depending on the system and its boundary conditions, it must be specified at which point of the time evolution the calculation of the exponents should actually begin and how many total time steps are necessary to obtain statistically sufficiently good data.

Lyapunov vectors

With regard to this general approach, several types of vectors can be classified, which in the literature are generally, though not always consistently, subsumed under the term Lyapunov vectors and which have a more or less strong relation to the exponents. What all these vectors have in common is that they are tangent vectors of the respective system. They are the following:

  • Gram-Schmidt vectors: These are the time-local orthogonal tangent vectors that occur, for example, in the Benettin algorithm. They are not covariant with the tangent-space dynamics and generally show no symmetries when the inverse process is considered in reversible systems. It is currently (as of April 2016) not known whether their (physical) meaning goes beyond that of a methodological aid.
  • Backward Lyapunov vectors: Oseledets' theorem proves the existence of a so-called stationary state of the Gram-Schmidt vectors after sufficiently long time evolution, as soon as the ergodic attractor has been reached by the phase-space evolution. The (Gram-Schmidt) vectors that represent this state are the actual starting point for the calculation of the exponents according to Benettin et al. Note that this stationary state does not imply that the corresponding Gram-Schmidt vectors cannot continue to fluctuate.
  • Forward Lyapunov vectors: These are the counterpart of the backward Lyapunov vectors under the inverse time-evolution direction of the system and play a role in the calculation of the covariant Lyapunov vectors.
  • Covariant Lyapunov vectors: The existence of these vectors has been known for a relatively long time, but until 2007 there was no algorithm capable of computing them. Since they can reveal or carry significantly more information about the stability of dynamical systems, and about tangent-space dynamics in general, than for example the already widely used Gram-Schmidt vectors, intensified research into and use of them can currently (as of April 2016) be observed in the field of dynamical systems. Their relationship to the Lyapunov exponents is that the exponents directly describe the growth or shrinkage behavior of these vectors: $\| Y(t)\, v_i \| \sim e^{\lambda_i t}\, \| v_i \|$, where $Y(t)$ is again the fundamental matrix and $v_i$ a covariant Lyapunov vector.

Significance of the Lyapunov exponent

Kaplan-Yorke conjecture

The Kaplan-Yorke conjecture provides an estimate of the upper bound of the information dimension with the help of the Lyapunov spectrum. This so-called Kaplan-Yorke dimension is defined as follows:

$$D_{KY} = k + \frac{\sum_{i=1}^{k} \lambda_i}{|\lambda_{k+1}|},$$

where the exponents are ordered in descending order and $k$ is the largest natural number for which the sum $\sum_{i=1}^{k} \lambda_i$ remains non-negative.
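A minimal sketch of this computation in Python, assuming numpy; the example spectrum consists of approximate, commonly quoted values for the Lorenz system with standard parameters:

```python
import numpy as np

def kaplan_yorke_dimension(spectrum):
    """Kaplan-Yorke dimension from a Lyapunov spectrum."""
    lam = np.sort(np.asarray(spectrum))[::-1]   # descending order
    csum = np.cumsum(lam)
    nonneg = np.nonzero(csum >= 0)[0]
    if len(nonneg) == 0:
        return 0.0                      # no non-negative partial sum
    k = nonneg[-1] + 1                  # largest k with non-negative sum
    if k == len(lam):
        return float(k)                 # partial sums never turn negative
    return k + csum[k - 1] / abs(lam[k])

# Approximate Lorenz spectrum; yields roughly 2.06.
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))
```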

Lyapunov time

The inverse of the largest Lyapunov exponent, $T_\lambda = 1/\lambda_{\max}$, the so-called Lyapunov time or mean prediction time, is the time scale over which meaningful predictions about the system's behavior can be made.
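As a worked illustration, assuming the commonly quoted approximate value $\lambda_{\max} \approx 0.906$ for the Lorenz system with standard parameters:

$$T_\lambda = \frac{1}{\lambda_{\max}} \approx \frac{1}{0.906} \approx 1.1,$$

i.e. after roughly 1.1 time units an initial uncertainty has grown by a factor of $e$.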

Literature

  • Heinz Georg Schuster: Deterministic Chaos. An Introduction. 3rd edition. VCH Verlagsgesellschaft, Weinheim 1994, ISBN 3-527-29089-3
  • H. Kantz, T. Schreiber: Nonlinear Time Series Analysis. Cambridge University Press, Cambridge 2004, ISBN 0-521-52902-6
  • B. Hasselblatt, A. Katok: Introduction to the Modern Theory of Dynamical Systems. Part of Encyclopedia of Mathematics and its Applications. Cambridge 1997, ISBN 978-0-521-57557-7
  • David Ruelle: Ergodic theory of differentiable dynamical systems. In: Publications Mathématiques de l'IHÉS, 50, 1979, pp. 275-320
  • Francesco Ginelli, Hugues Chaté, Roberto Livi, Antonio Politi: Covariant Lyapunov vectors. In: Journal of Physics A: Mathematical and Theoretical, Volume 46, Number 25, 2013
  • G. Benettin, L. Galgani, J. M. Strelcyn: Kolmogorov entropy and numerical experiments. In: Phys. Rev. A, 14, 1976, p. 2338
  • Govindan Rangarajan, Salman Habib, Robert Ryne: Lyapunov Exponents without Rescaling and Reorthogonalization. 1998, doi:10.1103/PhysRevLett.80.3747
  • Günter Radons, Gudula Rünger, Michael Schwind, Hong-liu Yang: Parallel Algorithms for the Determination of Lyapunov Characteristics of Large Nonlinear Dynamical Systems. In: PARA, 2004, pp. 1131-1140
  • Harald A. Posch: Symmetry properties of orthogonal and covariant Lyapunov vectors and their exponents. 2011, arXiv:1107.4032
  • Harald A. Posch, R. Hirschl: Simulation of billiards and of hard body fluids. In: Hard Ball Systems and the Lorentz Gas. Springer, Berlin 2000

Web links

  • Amie Wilkinson: What are Lyapunov exponents and why are they interesting? In: Bulletin of the American Mathematical Society, 2017, arXiv:1608.02843

References

  1. ^ Heinz Georg Schuster: Deterministic Chaos. An Introduction. 3rd edition. VCH Verlagsgesellschaft, Weinheim 1994, ISBN 3-527-29089-3, p. 24 f.