
2.9E: An Application to Markov Chains Exercises


    Exercises

    Which of the following stochastic matrices is regular?

    a. \(\left[ \def\arraystretch{1.5}\begin{array}{ccc} 0 & 0 & \frac{1}{2} \\ 1 & 0 & \frac{1}{2} \\ 0 & 1 & 0 \end{array} \right]\)

    b. \(\left[ \def\arraystretch{1.5}\begin{array}{ccc} \frac{1}{2} & 0 & \frac{1}{3} \\ \frac{1}{4} & 1 & \frac{1}{3} \\ \frac{1}{4} & 0 & \frac{1}{3} \end{array} \right]\)

    b. Not regular
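
    This kind of question can also be checked by machine: a stochastic matrix is regular exactly when some power of it has every entry strictly positive. The sketch below is an illustration only, assuming NumPy and a hypothetical helper is_regular that is not part of the text; it applies the test to matrices a. and b. above.

    import numpy as np

    def is_regular(P, max_power=50):
        # Regular means some power of P has all entries strictly positive.
        P = np.array(P, dtype=float)
        Q = P.copy()
        for _ in range(max_power):
            if np.all(Q > 0):
                return True
            Q = Q @ P
        return False

    A = [[0, 0, 1/2],
         [1, 0, 1/2],
         [0, 1, 0]]

    B = [[1/2, 0, 1/3],
         [1/4, 1, 1/3],
         [1/4, 0, 1/3]]

    print(is_regular(A))   # True: A^5 already has all positive entries
    print(is_regular(B))   # False: state 2 is absorbing, so column 2 never fills in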

    In each case find the steady-state vector and, assuming that it starts in state 1, find the probability that it is in state 2 after \(3\) transitions.

    a. \(\left[ \begin{array}{cc} 0.5 & 0.3 \\ 0.5 & 0.7 \end{array} \right]\)

    b. \(\left[ \def\arraystretch{1.5}\begin{array}{cc} \frac{1}{2} & 1 \\ \frac{1}{2} & 0 \end{array} \right]\)

    c. \(\left[ \def\arraystretch{1.5}\begin{array}{rrr} 0 & \frac{1}{2} & \frac{1}{4} \\ 1 & 0 & \frac{1}{4} \\ 0 & \frac{1}{2} & \frac{1}{2} \end{array} \right]\)

    d. \(\left[ \begin{array}{rrr} 0.4 & 0.1 & 0.5 \\ 0.2 & 0.6 & 0.2 \\ 0.4 & 0.3 & 0.3 \end{array} \right]\)

    e. \(\left[ \begin{array}{rrr} 0.8 & 0.0 & 0.2 \\ 0.1 & 0.6 & 0.1 \\ 0.1 & 0.4 & 0.7 \end{array} \right]\)

    f. \(\left[ \begin{array}{rrr} 0.1 & 0.3 & 0.3 \\ 0.3 & 0.1 & 0.6 \\ 0.6 & 0.6 & 0.1 \end{array} \right]\)

    b. \(\frac{1}{3} \left[ \begin{array}{r} 2 \\ 1 \end{array} \right]\), \(\frac{3}{8}\)
    d. \(\frac{1}{3} \left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right]\), \(0.312\)
    f. \(\frac{1}{20} \left[ \begin{array}{r} 5 \\ 7 \\ 8 \end{array} \right]\), \(0.306\)
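
    Answers of this kind can be verified numerically: the steady-state vector is the eigenvector of the transition matrix for eigenvalue \(1\), scaled so its entries sum to \(1\), and the three-transition probability comes from applying the matrix three times to the initial state. A minimal sketch, assuming NumPy and the column convention of this section (\(\mathbf{s}_{k+1} = P\mathbf{s}_{k}\)), run on matrix b.:

    import numpy as np

    def steady_state(P):
        # Eigenvector of the column-stochastic matrix P for eigenvalue 1,
        # scaled so its entries sum to 1.
        vals, vecs = np.linalg.eig(np.array(P, dtype=float))
        v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
        return v / v.sum()

    P = np.array([[1/2, 1.0],      # matrix b. above
                  [1/2, 0.0]])

    s = np.array([1.0, 0.0])       # start in state 1
    for _ in range(3):             # three transitions
        s = P @ s

    print(steady_state(P))         # approximately [2/3, 1/3]
    print(s[1])                    # 0.375 = 3/8, the chance of being in state 2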

    A fox hunts in three territories \(A\), \(B\), and \(C\). He never hunts in the same territory on two successive days. If he hunts in \(A\), then he hunts in \(C\) the next day. If he hunts in \(B\) or \(C\), he is twice as likely to hunt in \(A\) the next day as in the other territory.

    1. What proportion of his time does he spend in \(A\), in \(B\), and in \(C\)?
    2. If he hunts in \(A\) on Monday (respectively, in \(C\) on Monday), what is the probability that he will hunt in \(B\) on Thursday?
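
    Both parts can be checked with a short computation. The sketch below assumes NumPy and reads the hypotheses directly into a column-stochastic matrix (columns index today's territory \(A\), \(B\), \(C\); rows index tomorrow's): from \(A\) he moves to \(C\) with probability \(1\), and from \(B\) or \(C\) he moves to \(A\) with probability \(\frac{2}{3}\) and to the remaining territory with probability \(\frac{1}{3}\).

    import numpy as np

    # Columns: today in A, B, C; rows: tomorrow in A, B, C.
    P = np.array([[0.0, 2/3, 2/3],
                  [0.0, 0.0, 1/3],
                  [1.0, 1/3, 0.0]])

    # (1) Long-run proportions: eigenvector for eigenvalue 1, scaled to sum to 1.
    vals, vecs = np.linalg.eig(P)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    print(v / v.sum())

    # (2) Monday to Thursday is three transitions; column j of P^3 is the
    # Thursday distribution given Monday's territory j.
    P3 = np.linalg.matrix_power(P, 3)
    print(P3[1, 0], P3[1, 2])   # P(B Thursday | A Monday), P(B Thursday | C Monday)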

    Assume that there are three social classes—upper, middle, and lower—and that social mobility behaves as follows:

    1. Of the children of upper-class parents, \(70\%\) remain upper-class, whereas \(10\%\) become middle-class and \(20\%\) become lower-class.
    2. Of the children of middle-class parents, \(80\%\) remain middle-class, whereas the others are evenly split between the upper class and the lower class.
    3. For the children of lower-class parents, \(60\%\) remain lower-class, whereas \(30\%\) become middle-class and \(10\%\) upper-class.
      1. Find the probability that the grandchild of lower-class parents becomes upper-class.
      2. Find the long-term breakdown of society into classes.
    2. \(50\%\) middle, \(25\%\) upper, \(25\%\) lower
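
    Both parts lend themselves to a quick numerical check. A minimal sketch, assuming NumPy; the matrix below is a direct transcription of the three rules above, with columns indexing the parents' class (upper, middle, lower) and rows the child's class.

    import numpy as np

    # Columns: parents' class U, M, L; rows: child's class U, M, L.
    P = np.array([[0.7, 0.1, 0.1],
                  [0.1, 0.8, 0.3],
                  [0.2, 0.1, 0.6]])

    # (1) A grandchild is two generations on: apply P twice to "lower class".
    s0 = np.array([0.0, 0.0, 1.0])
    print((np.linalg.matrix_power(P, 2) @ s0)[0])   # chance the grandchild is upper-class

    # (2) Long-term breakdown: eigenvector for eigenvalue 1, scaled to sum to 1.
    vals, vecs = np.linalg.eig(P)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    print(v / v.sum())                              # order: upper, middle, lower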

    The prime minister says she will call an election. This gossip is passed from person to person with a probability \(p \neq 0\) that the information is passed incorrectly at any stage. Assume that when a person hears the gossip he or she passes it to one person who does not know. Find the long-term probability that a person will hear that there is going to be an election.

    John makes it to work on time one Monday out of four. On other work days his behaviour is as follows: If he is late one day, he is twice as likely to come to work on time the next day as to be late. If he is on time one day, he is as likely to be late as not the next day. Find the probability of his being late and that of his being on time Wednesdays.

    \(\frac{7}{16}\), \(\frac{9}{16}\)
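
    This answer can be reproduced numerically. A minimal sketch, assuming NumPy: take state 1 = on time and state 2 = late, read the hypotheses into the columns of \(P\), start from Monday's distribution \(\left[\frac{1}{4}, \frac{3}{4}\right]\), and apply \(P\) twice, since Monday to Wednesday is two transitions.

    import numpy as np

    # Columns: today's state (on time, late); rows: tomorrow's state.
    P = np.array([[1/2, 2/3],
                  [1/2, 1/3]])

    s = np.array([1/4, 3/4])   # Monday: on time one Monday out of four
    for _ in range(2):         # Monday -> Tuesday -> Wednesday
        s = P @ s
    print(s)                   # [on time, late] = [9/16, 7/16] on Wednesday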

    Suppose you have \(\$1\) and match coins with a friend. At each match you either win or lose \(\$1\) with equal probability. If you go broke or ever get \(\$4\), you quit. Assume your friend never quits. If the states are 0, 1, 2, 3, and 4, representing your wealth, show that the corresponding transition matrix \(P\) is not regular. Find the probability that you will go broke after \(3\) matches.
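
    A sketch of the setup, assuming NumPy: the five states are your wealth \(0, 1, 2, 3, 4\), the columns of \(P\) describe one match, and states \(0\) and \(4\) are absorbing, which is why no power of \(P\) can have all entries positive.

    import numpy as np

    # Columns: current wealth 0..4; rows: wealth after one match.
    # States 0 and 4 are absorbing (you quit), so columns 0 and 4 of every
    # power of P stay equal to e_0 and e_4: P is not regular.
    P = np.array([[1, 1/2, 0,   0,   0],
                  [0, 0,   1/2, 0,   0],
                  [0, 1/2, 0,   1/2, 0],
                  [0, 0,   1/2, 0,   0],
                  [0, 0,   0,   1/2, 1]])

    s = np.zeros(5)
    s[1] = 1.0                 # start with $1
    for _ in range(3):         # three matches
        s = P @ s
    print(s[0])                # probability of being broke after three matches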

    A mouse is put into a maze of compartments, as in the diagram. Assume that he always leaves any compartment he enters and that he is equally likely to take any tunnel entry.

    1. If he starts in compartment 1, find the probability that he is in compartment 1 again after \(3\) moves.
    2. Find the compartment in which he spends most of his time if he is left for a long time.
    1. \(\frac{7}{75}\)
    2. He spends most of his time in compartment 3; steady state \(\frac{1}{16} \left[ \begin{array}{r} 3 \\ 2 \\ 5 \\ 4 \\ 2 \end{array} \right]\).

    If a stochastic matrix has a \(1\) on its main diagonal, show that it cannot be regular. Assume it is not \(1 \times 1\).

    If \(\mathbf{s}_{m}\) is the stage-\(m\) state vector for a Markov chain, show that \(\mathbf{s}_{m+k} = P^{k}\mathbf{s}_{m}\) holds for all \(m \geq 1\) and \(k \geq 1\) (where \(P\) is the transition matrix).

    A stochastic matrix is doubly stochastic if all the row sums also equal \(1\). Find the steady-state vector for a doubly stochastic matrix.

    Consider the \(2 \times 2\) stochastic matrix

    \(P = \left[ \begin{array}{cc} 1 - p & q \\ p & 1 - q \end{array} \right]\),
    where \(0 < p < 1\) and \(0 < q < 1\).

    1. Show that \(\frac{1}{p + q} \left[ \begin{array}{r} q \\ p \end{array} \right]\) is the steady-state vector for \(P\).
    2. Show that \(P^{m}\) converges to the matrix \(\frac{1}{p + q} \left[ \begin{array}{cc} q & q \\ p & p \end{array} \right]\) by first verifying inductively that \(P^m = \frac{1}{p + q} \left[ \begin{array}{cc} q & q \\ p & p \end{array} \right]\) \(+ \frac{(1 - p - q)^m}{p + q} \left[ \begin{array}{cc} p & -q \\ -p & q \end{array} \right]\) for \(m = 1, 2, \dots\). (It can be shown that the sequence of powers \(P, P^{2}, P^{3}, \dots\) of any regular transition matrix converges to the matrix each of whose columns equals the steady-state vector for \(P\).)
    1. Direct verification.
    2. Since \(0 < p < 1\) and \(0 < q < 1\) we get \(0 < p + q < 2\) whence \(-1 < p + q - 1 < 1\). Finally, \(-1 < 1 - p - q < 1\), so \((1 - p - q)^{m}\) converges to zero as \(m\) increases.
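
    The closed form in part 2 can be spot-checked numerically before proving it by induction. A minimal sketch, assuming NumPy, with illustrative values \(p = 0.3\) and \(q = 0.6\) (not from the text):

    import numpy as np

    p, q = 0.3, 0.6                      # sample values with 0 < p, q < 1
    P = np.array([[1 - p, q],
                  [p, 1 - q]])

    for m in (1, 2, 5, 20):
        closed = (np.array([[q, q], [p, p]])
                  + (1 - p - q) ** m * np.array([[p, -q], [-p, q]])) / (p + q)
        print(m, np.allclose(np.linalg.matrix_power(P, m), closed))

    # Since |1 - p - q| < 1, the second term dies out and P^m approaches the
    # matrix whose columns are both the steady-state vector [q, p] / (p + q).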

    2.9E: An Application to Markov Chains Exercises is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
