12.2: Gambler's Ruin
In the last section, the simplest kind of symmetric random walk in \({\mathbf R}^1\) was studied. In this section, we remove the assumption that the random walk is symmetric. Instead, we assume that \(p\) and \(q\) are nonnegative real numbers with \(p+q = 1\), and that the common distribution function of the jumps of the random walk is \[f_X(x) = \left \{ \begin{array}{ll} p, & \mbox{if $x = 1$},\\ q, & \mbox{if $x = -1$}. \end{array} \right.\] One can imagine the random walk as representing a sequence of tosses of a weighted coin, with a head appearing with probability \(p\) and a tail appearing with probability \(q\). An alternative formulation of this situation is that of a gambler playing a sequence of games against an adversary (sometimes thought of as another person, sometimes called “the house") where, in each game, the gambler has probability \(p\) of winning.
The Gambler’s Ruin Problem
The above formulation of this type of random walk leads to a problem known as the Gambler’s Ruin problem. This problem was introduced in Exercise [exer 11.2.22], but we will give the description of the problem again. A gambler starts with a “stake" of size \(s\). She plays until her capital reaches the value \(M\) or the value 0. In the language of Markov chains, these two values correspond to absorbing states. We are interested in studying the probability of occurrence of each of these two outcomes.
One can also assume that the gambler is playing against an “infinitely rich" adversary. In this case, we would say that there is only one absorbing state, namely when the gambler’s stake is 0. Under this assumption, one can ask for the probability that the gambler is eventually ruined.
We begin by defining \(q_k\) to be the probability that the gambler’s stake reaches 0, i.e., she is ruined, before it reaches \(M\), given that the initial stake is \(k\). We note that \(q_0 = 1\) and \(q_M = 0\). The fundamental relationship among the \(q_k\)’s is the following: \[q_k = pq_{k+1} + qq_{k-1}\ ,\] where \(1 \le k \le M-1\). This holds because if her stake equals \(k\), and she plays one game, then her stake becomes \(k+1\) with probability \(p\) and \(k-1\) with probability \(q\). In the first case, the probability of eventual ruin is \(q_{k+1}\) and in the second case, it is \(q_{k-1}\). We note that since \(p + q = 1\), we can write the above equation as \[p(q_{k+1} - q_k) = q(q_k - q_{k-1})\ ,\] or \[q_{k+1} - q_k = {q\over p}(q_k - q_{k-1})\ .\] From this equation, it is easy to see that \[q_{k+1} - q_k = \biggl({q\over p}\biggr)^k(q_1 - q_0)\ . \label{eq 12.2.2}\] We now use telescoping sums to obtain an equation in which the only unknown is \(q_1\): \[\begin{aligned} -1 &=& q_M - q_0 \\ &=& \sum_{k = 0}^{M-1} (q_{k+1} - q_k)\ ,\end{aligned}\] so \[\begin{aligned} -1 &=& \sum_{k = 0}^{M-1} \biggl({q\over p}\biggr)^k(q_1 - q_0)\\ &=& (q_1 - q_0) \sum_{k = 0}^{M-1} \biggl({q\over p}\biggr)^k\ .\end{aligned}\] If \(p \ne q\), then the above expression equals \[(q_1 - q_0)\,{(q/p)^M - 1 \over (q/p) - 1}\ ,\] while if \(p = q = 1/2\), then we obtain the equation \[-1 = (q_1 - q_0)M\ .\] For the moment, we shall assume that \(p \ne q\). Then we have \[q_1 - q_0 = -\,{(q/p) - 1 \over (q/p)^M - 1}\ .\] Next, we use Equation [eq 12.2.2] and telescoping sums again to obtain an expression for \(q_z\): \[q_z - q_0 = \sum_{k = 0}^{z-1} (q_{k+1} - q_k) = (q_1 - q_0) \sum_{k = 0}^{z-1} \biggl({q\over p}\biggr)^k = (q_1 - q_0)\,{(q/p)^z - 1 \over (q/p) - 1}\ .\] Substituting the expression for \(q_1 - q_0\) and using \(q_0 = 1\), we obtain \[q_z = {(q/p)^M - (q/p)^z \over (q/p)^M - 1}\ , \label{eq 12.2.3}\] if \(p \ne q\). If \(p = q = 1/2\), then \(q_1 - q_0 = -1/M\), and the same argument gives \[q_z = {{M - z}\over M}\ .\]
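The closed-form ruin probability \(q_z = \bigl((q/p)^M - (q/p)^z\bigr)/\bigl((q/p)^M - 1\bigr)\) can be compared against a direct simulation of the walk. The following sketch assumes \(p \ne q\); the function names and parameter values are ours, chosen only for illustration:

```python
import random

def ruin_probability(z, M, p):
    # Closed-form probability of ruin starting from stake z (valid for p != q):
    # q_z = ((q/p)^M - (q/p)^z) / ((q/p)^M - 1)
    r = (1 - p) / p
    return (r**M - r**z) / (r**M - 1)

def simulate_ruin(z, M, p, trials=100_000, seed=1):
    # Monte Carlo estimate: play the game to absorption at 0 or M.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        stake = z
        while 0 < stake < M:
            stake += 1 if rng.random() < p else -1
        if stake == 0:
            ruined += 1
    return ruined / trials

exact = ruin_probability(5, 10, 0.4)
estimate = simulate_ruin(5, 10, 0.4)
print(f"exact: {exact:.4f}  simulated: {estimate:.4f}")
```

With \(p = 0.4\), \(z = 5\), and \(M = 10\), the two values should agree to about two decimal places for this number of trials.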
We define, for \(0 \le z \le M\), the quantity \(p_z\) to be the probability that the gambler’s stake reaches \(M\) without ever having reached 0, given that the initial stake is \(z\). Since the game might continue indefinitely, it is not obvious that \(p_z + q_z = 1\) for all \(z\). However, one can use the same method as above to show that if \(p \ne q\), then \[p_z = {(q/p)^z - 1 \over (q/p)^M - 1}\ ,\] and if \(p = q = 1/2\), then \[p_z = {z\over M}\ .\] Thus, for all \(z\), we have \(p_z + q_z = 1\), so the game ends with probability 1.
Infinitely Rich Adversaries
We now turn to the problem of finding the probability of eventual ruin if the gambler is playing against an infinitely rich adversary. This probability can be obtained by letting \(M\) go to \(\infty\) in the expression for \(q_z\) calculated above. If \(q < p\), then the expression approaches \((q/p)^z\), and if \(q > p\), the expression approaches 1. In the case \(p = q = 1/2\), we recall that \(q_z = 1  z/M\). Thus, if \(M \rightarrow \infty\), we see that the probability of eventual ruin tends to 1.
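The convergence of \(q_z\) to \((q/p)^z\) as \(M \rightarrow \infty\) (for \(q < p\)) is easy to observe numerically; the parameter values below are arbitrary choices of ours:

```python
p, q, z = 0.6, 0.4, 3      # q < p, so eventual ruin is not certain
r = q / p
limit = r**z               # ruin probability against an infinitely rich adversary
for M in (5, 10, 20, 40):
    # Finite-M ruin probability from the closed form (p != q).
    q_z = (r**M - r**z) / (r**M - 1)
    print(f"M = {M:2d}: q_z = {q_z:.6f}")
print(f"limit (q/p)^z = {limit:.6f}")
```

As \(M\) grows, the finite-\(M\) values increase toward \((q/p)^z = (2/3)^3 \approx 0.2963\).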
Historical Remarks
In 1711, De Moivre, in his book De Mensura Sortis, gave an ingenious derivation of the probability of ruin. The following description of his argument is taken from David.^{6} The notation used is as follows: We imagine that there are two players, A and B, and the probabilities that they win a game are \(p\) and \(q\), respectively. The players start with \(a\) and \(b\) counters, respectively.
Imagine that each player starts with his counters before him in a pile, and that nominal values are assigned to the counters in the following manner. A’s bottom counter is given the nominal value \(q/p\); the next is given the nominal value \((q/p)^2\), and so on until his top counter which has the nominal value \((q/p)^a\). B’s top counter is valued \((q/p)^{a+1}\), and so on downwards until his bottom counter which is valued \((q/p)^{a+b}\). After each game the loser’s top counter is transferred to the top of the winner’s pile, and it is always the top counter which is staked for the next game. Then B’s stake is always \(q/p\) times A’s, so that at every game each player’s nominal expectation is nil. This remains true throughout the play; therefore A’s chance of winning all B’s counters, multiplied by his nominal gain if he does so, must equal B’s chance multiplied by B’s nominal gain. Thus, \[P_a\biggl(\Bigl({q\over p}\Bigr)^{a+1} + \cdots + \Bigl({q\over p}\Bigr)^{a+b}\biggr) = P_b\biggl(\Bigl({q\over p}\Bigr) + \cdots + \Bigl({q\over p}\Bigr)^a\biggr)\ . \label{eq 12.2.1}\]
Using this equation, together with the fact that \[P_a + P_b = 1\ ,\] it can easily be shown that \[P_a = {(q/p)^a - 1 \over (q/p)^{a+b} - 1}\ ,\] if \(p \ne q\).
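One can verify numerically that \(P_a = \bigl((q/p)^a - 1\bigr)/\bigl((q/p)^{a+b} - 1\bigr)\), together with \(P_b = 1 - P_a\), balances the two sides of Equation [eq 12.2.1]. A small sketch, with arbitrary values of \(p\), \(a\), and \(b\) chosen by us:

```python
p, q = 0.45, 0.55
a, b = 4, 6
r = q / p

# Candidate solution of de Moivre's balance equation.
P_a = (r**a - 1) / (r**(a + b) - 1)
P_b = 1 - P_a

# The two sides of Equation [eq 12.2.1]:
# P_a * ((q/p)^{a+1} + ... + (q/p)^{a+b}) = P_b * ((q/p) + ... + (q/p)^a)
lhs = P_a * sum(r**k for k in range(a + 1, a + b + 1))
rhs = P_b * sum(r**k for k in range(1, a + 1))
print(f"lhs = {lhs:.12f}, rhs = {rhs:.12f}")
```

The two sides agree up to floating-point rounding, as the algebra predicts.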
In terms of modern probability theory, de Moivre is changing the values of the counters to make an unfair game into a fair game, which is called a martingale. With the new values, the expected fortune of player A (that is, the sum of the nominal values of his counters) after each play equals his fortune before the play (and similarly for player B). (For a simpler martingale argument, see Exercise [exer 12.2.10].) De Moivre then uses the fact that when the game ends, it is still fair, so Equation [eq 12.2.1] must hold. This fact requires proof, and is one of the central theorems in the area of martingale theory.
i[exer 12.2.2] In the gambler’s ruin problem, assume that the gambler’s initial stake is 1 dollar, and assume that her probability of success on any one game is \(p\). Let \(T\) be the number of games until 0 is reached (the gambler is ruined). Show that the generating function for \(T\) is \[h(z) = \frac{1 - \sqrt{1 - 4pqz^2}}{2pz}\ ,\] and that \[h(1) = \left \{ \begin{array}{ll} q/p, & \mbox{if $q \leq p$}, \\ 1, & \mbox{if $q \geq p,$} \end{array} \right.\] and \[h'(1) = \left \{ \begin{array}{ll} 1/(q - p), & \mbox{if $q > p$}, \\ \infty, & \mbox{if $q = p.$} \end{array} \right.\] Interpret your results in terms of the time \(T\) to reach 0. (See also Example [exam 10.1.7].)
i[exer 12.2.3] Show that the Taylor series expansion for \(\sqrt{1 - x}\) is \[\sqrt{1 - x} = \sum_{n = 0}^\infty {{1/2} \choose n} x^n\ ,\] where the binomial coefficient \({1/2} \choose n\) is \[{{1/2} \choose n} = \frac{(1/2)(1/2 - 1) \cdots (1/2 - n + 1)}{n!}\ .\] Using this and the result of Exercise [exer 12.2.2], show that the probability that the gambler is ruined on the \(n\)th step is \[p_T(n) = \left \{ \begin{array}{ll} \frac{(-1)^{k - 1}}{2p} {{1/2} \choose k} (4pq)^k, & \mbox{if $n = 2k - 1$,} \\ 0, & \mbox{if $n = 2k$.} \end{array} \right.\]
i[exer 12.2.4] For the gambler’s ruin problem, assume that the gambler starts with \(k\) dollars. Let \(T_k\) be the time to reach 0 for the first time.

Show that the generating function \(h_k(t)\) for \(T_k\) is the \(k\)th power of the generating function for the time \(T\) to ruin starting at 1. Hint: Let \(T_k = U_1 + U_2 +\cdots+ U_k\), where \(U_j\) is the time for the walk starting at \(j\) to reach \(j - 1\) for the first time.

Find \(h_k(1)\) and \(h_k'(1)\) and interpret your results.
i[exer 12.2.5] (The next three problems come from Feller.^{7}) As in the text, assume that \(M\) is a fixed positive integer.

Show that if a gambler starts with a stake of 0 (and is allowed to have a negative amount of money), then the probability that her stake reaches the value of \(M\) before it returns to 0 equals \(p(1 - q_1)\).

Show that if the gambler starts with a stake of \(M\), then the probability that her stake reaches 0 before it returns to \(M\) equals \(qq_{M-1}\).
i[exer 12.2.6] Suppose that a gambler starts with a stake of 0 dollars.

Show that the probability that her stake never reaches \(M\) before returning to 0 equals \(1 - p(1 - q_1)\).

Show that the probability that her stake reaches the value \(M\) exactly \(k\) times before returning to 0 equals \(p(1 - q_1)(1 - qq_{M-1})^{k-1}(qq_{M-1})\). Hint: Use Exercise [exer 12.2.5].
i[exer 12.2.7] In the text, it was shown that if \(q < p\), there is a positive probability that a gambler, starting with a stake of 0 dollars, will never return to the origin. Thus, we will now assume that \(q \ge p\). Using Exercise [exer 12.2.6], show that if a gambler starts with a stake of 0 dollars, then the expected number of times her stake equals \(M\) before returning to 0 equals \((p/q)^M\), if \(q > p\) and 1, if \(q = p\). (We quote from Feller: “The truly amazing implications of this result appear best in the language of fair games. A perfect coin is tossed until the first equalization of the accumulated numbers of heads and tails. The gambler receives one penny for every time that the accumulated number of heads exceeds the accumulated number of tails by \(m\).”)
i[exer 12.2.8] In the game in Exercise [exer 12.2.7], let \(p = q = 1/2\) and \(M = 10\). What is the probability that the gambler’s stake equals \(M\) at least 20 times before it returns to 0?
i[exer 12.2.9] Write a computer program which simulates the game in Exercise [exer 12.2.7] for the case \(p = q = 1/2\), and \(M = 10\).
i[exer 12.2.10] In de Moivre’s description of the game, we can modify the definition of player A’s fortune in such a way that the game is still a martingale (and the calculations are simpler). We do this by assigning nominal values to the counters in the same way as de Moivre, but each player’s current fortune is defined to be just the value of the counter which is being wagered on the next game. So, if player A has \(a\) counters, then his current fortune is \((q/p)^a\) (we stipulate this to be true even if \(a = 0\)). Show that under this definition, player A’s expected fortune after one play equals his fortune before the play, if \(p \ne q\). Then, as de Moivre does, write an equation which expresses the fact that player A’s expected final fortune equals his initial fortune. Use this equation to find the probability of ruin of player A.
i[exer 12.2.11] Assume in the gambler’s ruin problem that \(p = q = 1/2\).

Using Equation [eq 12.2.2], together with the facts that \(q_0 = 1\) and \(q_M = 0\), show that for \(0 \le z \le M\), \[q_z = {{M - z}\over M}\ .\]

In Equation [eq 12.2.3], let \(p \rightarrow 1/2\) (and since \(q = 1 - p\), \(q \rightarrow 1/2\) as well). Show that in the limit, \[q_z = {{M - z}\over M}\ .\] Hint: Replace \(q\) by \(1 - p\), and use L’Hôpital’s rule.
i[exer 12.2.12] In American casinos, the roulette wheels have the integers between 1 and 36, together with 0 and 00. Half of the nonzero numbers are red, the other half are black, and 0 and 00 are green. A common bet in this game is to bet a dollar on red. If a red number comes up, the bettor gets her dollar back, and also gets another dollar. If a black or green number comes up, she loses her dollar.

Suppose that someone starts with 40 dollars, and continues to bet on red until either her fortune reaches 50 or 0. Find the probability that her fortune reaches 50 dollars.

How much money would she have to start with, in order for her to have a 95% chance of winning 10 dollars before going broke?

A casino owner was once heard to remark that “If we took 0 and 00 off of the roulette wheel, we would still make lots of money, because people would continue to come in and play until they lost all of their money.” Do you think that such a casino would stay in business?