Recurrent Markov chain
Recurrence and the closely related concept of transience form the basis for investigating the long-term behavior of Markov chains. In particular, one examines how often a Markov chain returns to its starting state. If it does so infinitely often, the chain is called recurrent, otherwise transient. Recurrence is important for the existence of a stationary distribution and for convergence to it. Sometimes it is also of independent interest, e.g. for random walks.
The Green function is an important tool for investigating recurrence.
Definition
Let a homogeneous Markov chain $(X_n)_{n \in \mathbb{N}_0}$ in discrete time with an at most countable state space be given. Then
- $T_i := \inf\{ n \geq 1 : X_n = i \}$
is called the return time to the state $i$ when starting in $i$. Now let
- $f_{ii}^{(n)} := P(X_n = i,\; X_k \neq i \text{ for } 1 \leq k \leq n-1 \mid X_0 = i)$
be the first-passage probability into $i$, i.e. the probability of arriving in the state $i$ for the first time after exactly $n$ steps when starting in $i$. A state $i$ is called recurrent if
- $P(T_i < \infty \mid X_0 = i) = \sum_{n=1}^{\infty} f_{ii}^{(n)} = 1$
holds, i.e. the state is almost surely visited again. If this probability is strictly smaller than one, the state is called transient. If the state is recurrent and the expected return time satisfies
- $E[T_i \mid X_0 = i] < \infty$,
the state is called positive recurrent. If this expected value is infinite, the state is called null recurrent. A Markov chain is called recurrent (transient) if every state is recurrent (transient).
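As an illustration, the first-passage probabilities from the definition can be computed numerically. The following Python sketch (the function name and the toy chain are my own, and NumPy is assumed) obtains $f_{ii}^{(n)}$ from the $n$-step return probabilities via the renewal equation $p_{ii}^{(n)} = \sum_{k=1}^{n} f_{ii}^{(k)} p_{ii}^{(n-k)}$:

```python
import numpy as np

def first_return_probs(P, i, n_max):
    """First-return probabilities f^(n)_ii, computed from the n-step
    return probabilities p^(n)_ii via the renewal equation
    p^(n)_ii = sum_{k=1}^n f^(k)_ii * p^(n-k)_ii."""
    P = np.asarray(P, dtype=float)
    p = [1.0]                        # p^(0)_ii = 1
    Pn = np.eye(len(P))
    for _ in range(n_max):
        Pn = Pn @ P                  # n-step transition matrix
        p.append(Pn[i, i])
    f = [0.0]                        # placeholder so f[n] matches f^(n)
    for n in range(1, n_max + 1):
        f.append(p[n] - sum(f[k] * p[n - k] for k in range(1, n)))
    return f[1:]                     # f^(1), ..., f^(n_max)

# Toy two-state chain: every state is recurrent, so the
# first-return probabilities must sum to 1.
P = [[0.5, 0.5],
     [0.5, 0.5]]
f = first_return_probs(P, 0, 50)
print(sum(f))  # close to 1: state 0 is recurrent
```

For this chain the recursion reproduces the geometric distribution $f_{00}^{(n)} = 2^{-n}$, whose sum is 1 in agreement with the definition of recurrence.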
Properties
- If $i$ and $j$ are states that communicate with one another, the following holds: if $i$ is transient, then $j$ is also transient. The same applies to null-recurrent and positive-recurrent states.
- Transient states are almost surely revisited only finitely many times; recurrent states are almost surely revisited infinitely often.
- For the expected number of visits to the state $i$ when starting in $i$, the following holds:
- $E\bigl[\#\{ n \geq 1 : X_n = i \} \mid X_0 = i\bigr] = \sum_{n=1}^{\infty} p_{ii}^{(n)}$.
- Here $p_{ii}^{(n)}$ denotes the $n$-step transition probability, i.e. the probability of being in the state $i$ at the $n$-th step when starting in $i$. It follows that the transience of a state is equivalent to $\sum_{n=1}^{\infty} p_{ii}^{(n)} < \infty$, and its recurrence is equivalent to $\sum_{n=1}^{\infty} p_{ii}^{(n)} = \infty$. This often simplifies checking for recurrence and transience, since only estimates are needed and the $n$-step transition probabilities are usually easier to determine than the first-passage probabilities.
- For an irreducible Markov chain, the existence of a stationary distribution and positive recurrence are equivalent. Furthermore, $E[T_i \mid X_0 = i] = 1/\pi_i$ then holds, where $\pi$ is the stationary distribution.
- For a finite state space, irreducibility already implies positive recurrence. In a slightly weaker form: in a finite state space, every essential state is positive recurrent.
- If the state space is at most countable, every recurrent state is essential.
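The visit-count criterion above lends itself to a numerical illustration. A minimal sketch (function name and example chains are my own, NumPy assumed) accumulates the partial sums $\sum_{n=0}^{N} p_{ii}^{(n)}$ by repeated matrix multiplication; for a transient state they stabilize, while for a recurrent state they grow without bound as the cutoff increases:

```python
import numpy as np

def expected_visits(P, i, n_max):
    """Partial sum of the Green function sum_{n=0}^{n_max} p^(n)_ii,
    i.e. the expected number of visits to i (including time 0)
    during the first n_max steps when starting in i."""
    P = np.asarray(P, dtype=float)
    total, Pn = 0.0, np.eye(len(P))
    for _ in range(n_max + 1):
        total += Pn[i, i]            # add p^(n)_ii
        Pn = Pn @ P
    return total

# State 0 is transient: each step it escapes to the absorbing
# state 1 with probability 0.5, so the sum converges (here to 2).
P_transient = [[0.5, 0.5], [0.0, 1.0]]
# State 0 is recurrent: the two states swap deterministically,
# so the partial sums grow without bound.
P_recurrent = [[0.0, 1.0], [1.0, 0.0]]
print(expected_visits(P_transient, 0, 200))   # close to 2
print(expected_visits(P_recurrent, 0, 200))   # about 101, grows with n_max
```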
Examples
Finite state space
Consider a homogeneous Markov chain on the state space $\{1, 2, 3, 4\}$ with the transition matrix
- .
The states 1, 2 and 3 form a cycle that can only be left from state 3, with probability 0.2. Once state 4 has been reached, it cannot be left again, so 4 is an absorbing state. Let us now examine state 1 for recurrence or transience. Since 1 has no loop, the return time is at least 2; the possible paths of length 2 are 1-2-1 and 1-3-1, and $f_{11}^{(2)}$ is the sum of their path probabilities. For even return times the following consideration helps: when starting in 1, one time step is needed to reach 2 or 3. In order not to return to 1 too early and not to get caught in 4, laps between 2 and 3 must then be run until one time step before the return time, and only then does the chain return to state 1. This yields
- .
Similar considerations (with the difference that half laps are run here) yield, for odd return times,
- .
Using the geometric series, it then follows that
- $\sum_{n=1}^{\infty} f_{11}^{(n)} < 1$
holds, so state 1 is transient. It follows that states 2 and 3 are also transient. State 4 is recurrent, since it can never be left again.
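Since the concrete transition matrix is not reproduced above, the computation can only be sketched for a hypothetical matrix that fits the verbal description (states 1, 2 and 3 communicate; state 3 leads to the absorbing state 4 with probability 0.2). The entries below are my assumption, not the article's original values; the sketch computes the return probability to state 1 by solving the standard hitting-probability equations:

```python
import numpy as np

# Hypothetical transition matrix matching the verbal description
# (the exact entries are an assumption): states 1, 2, 3 communicate,
# state 3 leads to the absorbing state 4 with probability 0.2.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.5, 0.3, 0.0, 0.2],
              [0.0, 0.0, 0.0, 1.0]])

# h[j] = probability of ever reaching state 1 from state j.
# On states 2 and 3 it satisfies h = P h, with h(1) = 1 and h(4) = 0.
A = np.eye(2) - P[1:3, 1:3]      # unknowns: h for states 2 and 3
b = P[1:3, 0]                    # one-step jump straight to state 1
h23 = np.linalg.solve(A, b)

# Return probability to state 1: first step leaves 1, then hit 1 again.
f1 = P[0, 1] * h23[0] + P[0, 2] * h23[1]
print(f1)  # about 0.8235 < 1, so state 1 is transient
```

Any matrix of this shape gives a return probability strictly below 1, since a positive amount of probability escapes to state 4 on every pass through state 3.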
Countably infinite state space
As an example, consider a homogeneous Markov chain on the state space $\mathbb{Z}$ with transition probabilities
- $p_{i,i+1} = p$ and $p_{i,i-1} = 1-p$
with $p \in (0,1)$. This is the random walk on $\mathbb{Z}$. Consider the state 0. It holds that $f_{00}^{(2k+1)} = 0$ for all $k \geq 0$, because the Markov chain has period two and is therefore always in an odd state at odd times when started in zero. Since the number of steps to the right is binomially distributed (see random walk), the following applies
- $p_{00}^{(2n)} = \binom{2n}{n}\, p^n (1-p)^n \sim \frac{\bigl(4p(1-p)\bigr)^n}{\sqrt{\pi n}}$
according to Stirling's formula. Hence
- $\sum_{n=1}^{\infty} p_{00}^{(2n)} = \infty$ if and only if $4p(1-p) = 1$, i.e. $p = \tfrac{1}{2}$.
If the random walk is symmetric, the Markov chain is therefore recurrent; otherwise it is transient.
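This dichotomy can be checked numerically. The following sketch (function name is my own) accumulates partial sums of $\sum_n p_{00}^{(2n)}$, building each term from the previous one via the ratio $\binom{2n}{n}/\binom{2n-2}{n-1} = \frac{(2n)(2n-1)}{n^2}$ so that no huge binomial coefficients or underflowing powers appear:

```python
def green_partial_sum(p, n_max):
    """Partial sum of sum_{n=1}^{n_max} p^(2n)_00 for the random walk
    on Z, where p^(2n)_00 = C(2n, n) * (p*(1-p))**n (odd-step return
    probabilities vanish). Each term is built from the previous one
    via C(2n,n)/C(2n-2,n-1) = (2n)(2n-1)/n^2 for numerical stability."""
    q = 1.0 - p
    term, total = 1.0, 0.0
    for n in range(1, n_max + 1):
        term *= (2 * n) * (2 * n - 1) / (n * n) * (p * q)
        total += term
    return total

# Asymmetric walk (p = 0.6): the series converges, so 0 is transient.
print(green_partial_sum(0.6, 2000))     # close to 4
# Symmetric walk (p = 0.5): terms decay only like 1/sqrt(pi*n),
# so the partial sums keep growing -> recurrence.
print(green_partial_sum(0.5, 10000))
print(green_partial_sum(0.5, 40000))    # noticeably larger
```

For $p \neq \tfrac12$ the limit can even be computed in closed form from the generating function $\sum_{n \geq 0} \binom{2n}{n} x^n = (1-4x)^{-1/2}$; with $p = 0.6$ this gives $(1 - 4 \cdot 0.24)^{-1/2} - 1 = 4$, matching the numerical value.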
Literature
- Ulrich Krengel: Introduction to Probability Theory and Statistics. 8th edition, Vieweg, 2005, ISBN 3-8348-0063-5.
- Hans-Otto Georgii: Stochastics: Introduction to Probability Theory and Statistics. 4th edition, de Gruyter, 2009, ISBN 978-3-110-21526-7.
- Christian Hesse: Applied Probability Theory. A well-founded introduction with over 500 realistic examples and exercises. Vieweg, Braunschweig/Wiesbaden 2003, ISBN 3-528-03183-2.