Absorbing state

From Wikipedia, the free encyclopedia

Absorbing state is a term from the theory of Markov chains, which in turn are special stochastic processes in probability theory. A state is called absorbing if, once entered, it can never be left again.

Definition

Given a homogeneous Markov chain (X_t)_{t ∈ ℕ} in discrete time with an at most countable state space S, a subset A ⊆ S is called absorbing if and only if

P(X_{t+1} ∈ A | X_t ∈ A) = 1

applies. If A = {z} consists of exactly one element, so that

p_{zz} = 1,

then the state z is called an absorbing state. Furthermore,

τ_A := inf{ t ∈ ℕ : X_t ∈ A }

is called the time of first entry into the absorbing set, and

P(τ_A < ∞ | X_0 = i)

the absorption probability.
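For a finite chain the defining condition is easy to check directly from the transition matrix: a state z is absorbing exactly when p_{zz} = 1. A minimal sketch in Python (the matrix is a hypothetical example, not taken from this article):

```python
# Identify absorbing states of a finite Markov chain: state z is
# absorbing exactly when p_zz = 1, i.e. the chain can never leave z.

def absorbing_states(P):
    """Return the indices i with P[i][i] == 1 for a row-stochastic matrix P."""
    return [i for i, row in enumerate(P) if row[i] == 1.0]

# Hypothetical 3-state chain: only state 2 is absorbing.
P = [
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.0, 0.0, 1.0],
]

print(absorbing_states(P))  # [2]
```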

Calculation of the absorption probabilities

Let a homogeneous Markov chain with state space S and several absorbing sets A_1, …, A_n be given. To calculate the probability of absorption in one of the absorbing sets A := A_j, the following approach is suitable: combine all absorbing sets except A into the set B and consider, for all i ∈ S,

h_i = Σ_{k ∈ S} p_{ik} h_k.

In the case of a finite state space, this corresponds exactly to solving the right-eigenvector problem for the eigenvalue 1 of the transition matrix P, that is, Ph = h. In general, this solution is not unique. Uniqueness is achieved by adding the boundary conditions h_i = 1 if i ∈ A (when starting in the absorbing set, absorption has already occurred) and h_i = 0 if i ∈ B (when starting in a different absorbing set, A can never be reached).

If one now also wants to know in which state z ∈ A the absorption occurs, considerations analogous to the above lead to the additional boundary conditions h_z = 1 and h_i = 0 for the other states i ∈ A \ {z}.
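The boundary-value problem above can be solved numerically without forming eigenvectors explicitly, for example by fixed-point iteration of h_i = Σ_k p_{ik} h_k with the boundary components held fixed; this converges for absorbing chains. A sketch, using a hypothetical four-state chain in which states 0 and 3 are absorbing:

```python
# Absorption probabilities as the solution of h_i = sum_k p_ik * h_k
# with boundary conditions h_i = 1 on the target set A and h_i = 0 on
# the union B of all other absorbing sets.

def absorption_probabilities(P, A, B, tol=1e-12, max_iter=100_000):
    n = len(P)
    h = [1.0 if i in A else 0.0 for i in range(n)]
    for _ in range(max_iter):
        h_new = [
            h[i] if (i in A or i in B)                  # boundary values stay fixed
            else sum(P[i][k] * h[k] for k in range(n))  # h_i = sum_k p_ik * h_k
            for i in range(n)
        ]
        if max(abs(x - y) for x, y in zip(h, h_new)) < tol:
            return h_new
        h = h_new
    return h

# Hypothetical chain: from states 1 and 2 move left or right with
# probability 1/2 each; states 0 and 3 are absorbing.
P = [
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]

h = absorption_probabilities(P, A={0}, B={3})
print([round(x, 4) for x in h])  # [1.0, 0.6667, 0.3333, 0.0]
```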

Examples

Consider a transition matrix P representing a homogeneous Markov chain on the state space {1, …, 5}. There are two absorbing sets: the state {1} and the set {4, 5}. State 1 is reached only from state 2 (with probability 0.1); states 4 and 5 are reached only from state 2 (with probability 0.1) or from state 3 (with probability 0.5).

Since there are two absorbing sets, the eigenspace of P for the eigenvalue 1 is two-dimensional. We are interested in the absorption probability in state 1, so we look for a vector h in this eigenspace satisfying h_1 = 1 and h_4 = h_5 = 0. Its i-th component h_i is then the probability of absorption in state 1 when starting in state i. If one instead wants the absorption probability in {4, 5}, one demands h_1 = 0 and h_4 = h_5 = 1. It is nice to see that the two absorption probabilities add up to one in every component: no matter where the chain starts, it is eventually absorbed either in state 1 or in the set {4, 5}.

If one is interested in the probability that the absorption in {4, 5} occurs in state 4, directly applying the same system of equations with the boundary conditions h_4 = 1 and h_1 = h_5 = 0 yields the solution.
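The transition matrix itself is not reproduced above, but the example can be recreated with any matrix matching the probabilities stated in the text. The following sketch uses one such hypothetical matrix (only p_{21} = 0.1 and the totals 0.1 and 0.5 into {4, 5} from states 2 and 3 are fixed by the text; the remaining entries are filled in arbitrarily) and verifies that the two absorption probabilities add up to one:

```python
# Hypothetical transition matrix on {1, ..., 5} (0-indexed below):
# state 1 and the set {4, 5} are absorbing; from state 2 the chain
# enters state 1 with probability 0.1 and {4, 5} with probability 0.1;
# from state 3 it enters {4, 5} with probability 0.5.
P = [
    [1.0, 0.0, 0.0, 0.0, 0.0],   # state 1: absorbing
    [0.1, 0.4, 0.4, 0.1, 0.0],   # state 2
    [0.0, 0.3, 0.2, 0.3, 0.2],   # state 3
    [0.0, 0.0, 0.0, 0.5, 0.5],   # state 4: part of the closed set {4, 5}
    [0.0, 0.0, 0.0, 0.7, 0.3],   # state 5: part of the closed set {4, 5}
]

def absorb(P, A, B, sweeps=10_000):
    # Fixed-point iteration of h_i = sum_k p_ik * h_k, with h = 1 on A, 0 on B.
    h = [1.0 if i in A else 0.0 for i in range(len(P))]
    for _ in range(sweeps):
        h = [h[i] if i in A | B else sum(p * x for p, x in zip(P[i], h))
             for i in range(len(P))]
    return h

h = absorb(P, {0}, {3, 4})   # absorption in state 1
g = absorb(P, {3, 4}, {0})   # absorption in the set {4, 5}

print([round(x, 4) for x in h])                 # here: [1.0, 0.2222, 0.0833, 0.0, 0.0]
print([round(x + y, 4) for x, y in zip(h, g)])  # componentwise one
```

With this particular choice of matrix, the chain started in state 2 is absorbed in state 1 with probability 2/9 and in {4, 5} with probability 7/9.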

Use

Absorbing states are particularly interesting in population dynamics and in the modeling of games. In population dynamics, the time steps of a Markov chain can be understood as generations; the state space is then the number of living individuals. Here an absorbing state would be 0, i.e. the extinction of the species, and the probability of absorption in 0 is exactly the probability of extinction. A typical example is the Galton–Watson process. Some games of chance can also be studied well with the help of Markov chains: the time steps correspond to game rounds and the state space to a player's money. An absorbing state would be 0 (bankruptcy of the player) or the total stake (bankruptcy of the opponent), so the absorption probabilities are exactly the probabilities of winning or losing.
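In the Galton–Watson case, the extinction probability (absorption in 0) is the smallest fixed point q = f(q) of the offspring probability generating function f, and iterating q ← f(q) from q = 0 converges to it. A sketch with an arbitrarily chosen offspring distribution (not from the article):

```python
# Extinction probability of a Galton-Watson branching process, i.e. the
# probability of absorption in the state 0. It is the smallest fixed
# point q = f(q) of the offspring probability generating function f,
# found by iterating q <- f(q) starting from 0.

OFFSPRING = {0: 0.25, 1: 0.25, 2: 0.5}  # hypothetical distribution, mean 1.25 > 1

def pgf(q):
    """Probability generating function f(q) = sum_k p_k * q^k."""
    return sum(p * q**k for k, p in OFFSPRING.items())

q = 0.0
for _ in range(1000):
    q = pgf(q)

print(round(q, 6))  # 0.5: this process dies out with probability 1/2
```

Since the mean number of offspring exceeds 1, extinction is not certain; here q solves 0.5q² + 0.25q + 0.25 = q, whose smallest root is 1/2.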

Literature

  • Ulrich Krengel: Introduction to Probability Theory and Statistics. 8th edition, Vieweg, 2005, ISBN 978-3-834-80063-3.
  • Hans-Otto Georgii: Stochastics: Introduction to Probability Theory and Statistics. 4th edition, de Gruyter, 2009, ISBN 978-3-110-21526-7.
  • Christian Hesse: Applied Probability Theory: A Well-Founded Introduction with over 500 Realistic Examples and Exercises. Vieweg, Braunschweig/Wiesbaden 2003, ISBN 978-3-528-03183-1.