
19.3: Introduction to Markov Models


    # Download the answercheck helper used to verify answers in this notebook
    from urllib.request import urlretrieve
    
    urlretrieve('https://raw.githubusercontent.com/colbrydi/jupytercheck/master/answercheck.py', 
                'answercheck.py');

    In probability theory, a Markov model is a stochastic model of a randomly changing system in which future states are assumed to depend only on the current state, not on the events that occurred before it.

    Figure: A diagram representing a two-state Markov process, with the states labelled A and E (via Wikipedia).

    Each number represents the probability of the Markov process changing from one state to another state, with the direction indicated by the arrow. For example, if the Markov process is in state A, then the probability it changes to state E is 0.4, while the probability it remains in state A is 0.6.
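
    To make these state changes concrete, here is a minimal sketch (not part of the original notebook) that simulates the two-state process by sampling the next state using only the current state. The probabilities out of A (0.6 stay, 0.4 to E) come from the description above; the probabilities out of E (0.7 to A, 0.3 stay) are read from the update equations below.

    import random

    # Transition probabilities: from A stay with 0.6 / move to E with 0.4,
    # from E move to A with 0.7 / stay with 0.3 (see the equations below).
    transitions = {'A': {'A': 0.6, 'E': 0.4},
                   'E': {'A': 0.7, 'E': 0.3}}

    def step(state):
        """Sample the next state using only the current state (the Markov property)."""
        return 'A' if random.random() < transitions[state]['A'] else 'E'

    # Simulate a short trajectory starting in state A
    state = 'A'
    trajectory = [state]
    for _ in range(10):
        state = step(state)
        trajectory.append(state)
    print(trajectory)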

    The above state model can be represented by a transition matrix.

    At each time step \(t\), the probability of being in each state depends on the state probabilities at the previous time step \(t-1\):

    \[A_{t} = 0.6A_{(t-1)}+0.7E_{(t-1)} \nonumber \]

    \[E_{t} = 0.4A_{(t-1)}+0.3E_{(t-1)} \nonumber \]

    The above state model (\(S_t = [A_t, E_t]^T\)) can be represented in the following matrix notation:

    \[S_t = PS_{(t-1)} \nonumber \]
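
    As a minimal sketch of this matrix notation (using arbitrary placeholder probabilities, not the transition matrix asked for below): each column of \(P\) holds the probabilities of leaving one state, so every column sums to 1, and multiplying \(P\) by a state vector gives the state probabilities one step later.

    import numpy as np

    # Placeholder column-stochastic matrix (arbitrary values, NOT the answer to the
    # exercise below): column j holds the probabilities of moving out of state j,
    # so each column sums to 1.
    P_example = np.array([[0.9, 0.2],
                          [0.1, 0.8]])

    # State vector S_{t-1} = [A_{t-1}, E_{t-1}]^T: probabilities of being in each state.
    S_prev = np.array([[1.0],
                       [0.0]])

    # One step of S_t = P S_{t-1}
    S_next = P_example @ S_prev
    print(S_next)        # still a probability vector
    print(S_next.sum())  # sums to 1.0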

    Do This

    Create a \(2 \times 2\) matrix \(P\) representing the transition matrix for the Markov process above.

    # Put your answer to the above question here
    from answercheck import checkanswer
    
    checkanswer.matrix(P,'de1c99f4b4a8d7ea541a084d836ba7e4');

    This page titled 19.3: Introduction to Markov Models is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Dirk Colbry via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.