Markov model

In probability theory, a Markov model is a stochastic model used to describe randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it (that is, the process possesses the Markov property). In general, this assumption enables inferences and computational techniques that would otherwise be intractable. For this reason, in predictive modeling and probabilistic forecasting it is desirable that a given model exhibit the Markov property.
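
For a discrete-time process with states X_0, X_1, X_2, ..., the Markov property can be written as

    P(X_{n+1} = x | X_n = x_n, ..., X_0 = x_0) = P(X_{n+1} = x | X_n = x_n),

that is, conditioning on the full history yields the same distribution for the next state as conditioning on the current state alone.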

Markov chains

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes over time. In this context, the Markov property means that the distribution of this variable depends only on the state at the previous time step. A well-known application of Markov chains is the Markov chain Monte Carlo (MCMC) method.
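
As an illustrative sketch, a Markov chain over a finite state space can be simulated by repeatedly sampling the next state from the transition probabilities of the current state. The two states and the probabilities below are invented for the example and do not describe any real system:

    import random

    # Hypothetical two-state chain; states and transition probabilities
    # are chosen purely for illustration.
    transition = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def simulate(start, steps):
        """Sample a trajectory; each step depends only on the current state."""
        state = start
        path = [state]
        for _ in range(steps):
            probs = transition[state]
            state = random.choices(list(probs), weights=list(probs.values()))[0]
            path.append(state)
        return path

    print(simulate("sunny", 10))

Because each draw uses only the current state, the simulation never needs to remember the earlier path, which is precisely the Markov property.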

Hidden Markov Model

A hidden Markov model (HMM) is a Markov chain in which the state is only partially observable. In other words, the observations are related to the state of the system, but they are usually not sufficient to determine the state exactly. Several well-known algorithms exist for hidden Markov models. For a given observation sequence, the Viterbi algorithm computes the most likely corresponding state sequence, the forward algorithm computes the probability of the observation sequence, and the Baum-Welch algorithm estimates the start probabilities, the transition function, and the observation function of a hidden Markov model.
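
As an illustration of the Viterbi algorithm, the following sketch computes the most likely state sequence for a toy model. The two hidden states, the three observation symbols, and all probabilities are assumptions made purely for the example:

    # Minimal Viterbi sketch for a toy hidden Markov model.
    # All states, symbols, and probabilities are illustrative assumptions.
    states = ("Rainy", "Sunny")
    start_p = {"Rainy": 0.6, "Sunny": 0.4}
    trans_p = {
        "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
        "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
    }
    emit_p = {
        "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
    }

    def viterbi(obs):
        """Return the most likely state sequence for the observation sequence."""
        # V[t][s] = probability of the best path ending in state s at time t
        V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        path = {s: [s] for s in states}
        for t in range(1, len(obs)):
            V.append({})
            new_path = {}
            for s in states:
                prob, prev = max(
                    (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                    for p in states
                )
                V[t][s] = prob
                new_path[s] = path[prev] + [s]
            path = new_path
        best = max(states, key=lambda s: V[-1][s])
        return path[best]

    print(viterbi(["walk", "shop", "clean"]))  # e.g. ['Sunny', 'Rainy', 'Rainy']

The forward algorithm has the same dynamic-programming structure, except that it sums over the previous states instead of taking the maximum.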

A common application is speech recognition, where the observed data are the recorded audio signal (the waveform, possibly after data compression) and the hidden state is the spoken text. In this setting, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.