3.1: A Stochastic Model of Population Growth
The size of the population \(N\) is now considered to be a discrete random variable. We define the time-dependent probability mass function \(p_{N}(t)\) of \(N\) to be the probability that the population is of size \(N\) at time \(t\) . Since \(N\) must take on one of the values from zero to infinity, we have
\[\sum_{N=0}^{\infty} p_{N}(t)=1 \nonumber \]
for all \(t \geq 0\). Again, let \(b\) be the average per capita birth rate. We make the simplifying approximations that all births are singlets, and that the probability of an individual giving birth is independent of past birthing history. We can then interpret \(b\) probabilistically by supposing that as \(\Delta t \rightarrow 0\), the probability that an individual gives birth during the time \(\Delta t\) is given by \(b \Delta t\). For example, if the average per capita birth rate is one offspring every 365-day year, then the probability that a given individual gives birth on a given day is \(1/365\). As we will be considering the limit as \(\Delta t \rightarrow 0\), we neglect the probability of more than one birth in the population in the time interval \(\Delta t\), since it is of order \((\Delta t)^{2}\) or higher. Furthermore, we will suppose that at \(t=0\) the population size is known to be \(N_{0}\), so that \(p_{N_{0}}(0)=1\), with all other \(p_{N}\)'s at \(t=0\) equal to zero.
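The order-of-magnitude claim can be checked directly: with \(N\) independent individuals each giving birth with probability \(b\Delta t\), the probability of exactly one birth is approximately \(bN\Delta t\), while two or more births occur with probability \(O\left((\Delta t)^{2}\right)\). A minimal sketch (the values of \(b\), \(N\), and \(\Delta t\) below are arbitrary illustrative choices):

```python
# Small-dt approximation check: with N independent individuals, each giving
# birth with probability b*dt, exactly one birth has probability ~ b*N*dt,
# while two or more births are O(dt^2).  Parameter values are illustrative.
b, N = 1.0, 100

for dt in (1e-2, 1e-3, 1e-4):
    q = b * dt                            # per capita birth probability in dt
    p0 = (1 - q) ** N                     # probability of no births
    p1 = N * q * (1 - q) ** (N - 1)       # probability of exactly one birth
    p_many = 1.0 - p0 - p1                # probability of two or more births
    # p1/(b*N*dt) -> 1, while p_many/dt^2 stays bounded, as dt -> 0
    print(dt, p1 / (b * N * dt), p_many / dt ** 2)
```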
We can determine a system of differential equations for the probability mass function \(p_{N}(t)\) as follows. For a population to be of size \(N>0\) at a time \(t+\Delta t\) , either it was of size \(N-1\) at time \(t\) and one birth occurred, or it was of size \(N\) at time \(t\) and there were no births; that is
\[p_{N}(t+\Delta t)=p_{N-1}(t) b(N-1) \Delta t+p_{N}(t)(1-b N \Delta t) \nonumber \]
Subtracting \(p_{N}(t)\) from both sides, dividing by \(\Delta t\) , and taking the limit \(\Delta t \rightarrow 0\) results in the forward Kolmogorov differential equations,
\[\frac{d p_{N}}{d t}=b\left[(N-1) p_{N-1}-N p_{N}\right], \quad N=1,2, \ldots \nonumber \]
where \(p_{0}(t)=p_{0}(0)\) since a population of zero size remains zero. This system of coupled, first-order, linear differential equations can be solved iteratively. We first review how to solve a first-order linear differential equation of the form
\[\frac{d y}{d t}+a y=g(t), \quad y(0)=y_{0} \nonumber \]
where \(y=y(t)\) and \(a\) is constant. First, we look for an integrating factor \(\mu\) such that
\[\frac{d}{d t}(\mu y)=\mu\left(\frac{d y}{d t}+a y\right) \nonumber \]
Differentiating the left-hand side and multiplying out the right-hand side results in
\[\frac{d \mu}{d t} y+\mu \frac{d y}{d t}=\mu \frac{d y}{d t}+a \mu y \nonumber \]
and canceling terms yields
\[\frac{d \mu}{d t}=a \mu . \nonumber \]
We may integrate this equation with an arbitrary initial condition, so for simplicity we take \(\mu(0)=1\) . Therefore, \(\mu(t)=e^{a t}\) . Hence,
\[\frac{d}{d t}\left(e^{a t} y\right)=e^{a t} g(t) \nonumber \]
Integrating this equation from \(0\) to \(t\) yields
\[e^{a t} y(t)-y(0)=\int_{0}^{t} e^{a s} g(s) d s \nonumber \]
Therefore, the solution is
\[y(t)=e^{-a t}\left(y(0)+\int_{0}^{t} e^{a s} g(s) d s\right) \nonumber \]
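This solution formula can be sanity-checked numerically: evaluating the integral by quadrature should agree with a direct time-stepping of the differential equation. A minimal sketch (the test problem \(a=2\), \(g(t)=\sin t\), \(y_{0}=1\) is an arbitrary illustrative choice):

```python
import math

# Check y(t) = e^{-a t} (y0 + integral_0^t e^{a s} g(s) ds) against a direct
# Euler integration of dy/dt + a y = g(t).  The test problem below
# (a = 2, g(t) = sin t, y0 = 1) is an arbitrary illustrative choice.
a, y0, T = 2.0, 1.0, 1.5
M = 100000
h = T / M

def g(t):
    return math.sin(t)

# Trapezoidal quadrature of integral_0^T e^{a s} g(s) ds.
integral = 0.5 * (g(0.0) + math.exp(a * T) * g(T))
for k in range(1, M):
    s = k * h
    integral += math.exp(a * s) * g(s)
integral *= h
y_formula = math.exp(-a * T) * (y0 + integral)

# Forward-Euler integration of dy/dt = -a y + g(t).
y = y0
for k in range(M):
    y += h * (-a * y + g(k * h))

print(y_formula, y)   # the two values should agree closely
```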
The forward Kolmogorov differential equation (3.1.3) is of the form (3.1.4) with \(a=b N\) and \(g(t)=b(N-1) p_{N-1}\) . With the population size known to be \(N_{0}\) at \(t=0\) , the initial conditions can be written succinctly as \(p_{N}(0)=\delta_{N, N_{0}}\) , where \(\delta_{i j}\) is the Kronecker delta, defined as
\[\delta_{i j}= \begin{cases}0, & \text { if } i \neq j \\[4pt] 1, & \text { if } i=j\end{cases} \nonumber \]
Therefore, formal integration of (3.1.3) using (3.1.10) results in
\[p_{N}(t)=e^{-b N t}\left[\delta_{N, N_{0}}+b(N-1) \int_{0}^{t} e^{b N s} p_{N-1}(s) d s\right] \nonumber \]
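As a numerical sanity check on the system (3.1.3), one can truncate it at a large maximum size and time-step it directly: total probability should be conserved (mass leaks only through the truncation boundary), and \(p_{N_{0}}(t)=e^{-b N_{0} t}\) should be reproduced, since \(p_{N_{0}-1} \equiv 0\). The values of \(b\), \(N_{0}\), the truncation level, and the step count below are arbitrary illustrative choices:

```python
import math

# Forward-Euler time stepping of the truncated forward Kolmogorov system
#   dp_N/dt = b [ (N-1) p_{N-1} - N p_N ],   N = 1, ..., NMAX.
# Parameter values are illustrative, not fixed by the text.
b, N0 = 1.0, 2
NMAX = 200                    # truncation level for the infinite system
T, STEPS = 1.0, 5000
dt = T / STEPS

p = [0.0] * (NMAX + 1)
p[N0] = 1.0                   # all probability mass starts at N = N0

for _ in range(STEPS):
    new = p[:]
    for N in range(1, NMAX + 1):
        new[N] += dt * b * ((N - 1) * p[N - 1] - N * p[N])
    p = new

print(sum(p))                          # total probability stays near 1
print(p[N0], math.exp(-b * N0 * T))    # matches p_{N0}(t) = e^{-b N0 t}
```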
The first few solutions of \((3.1.12)\) can now be obtained by successive integrations:
\[p_{N}(t)= \begin{cases}0, & \text { if } N<N_{0} \\[4pt] e^{-b N_{0} t}, & \text { if } N=N_{0} ; \\[4pt] N_{0} e^{-b N_{0} t}\left[1-e^{-b t}\right], & \text { if } N=N_{0}+1 ; \\[4pt] \frac{1}{2} N_{0}\left(N_{0}+1\right) e^{-b N_{0} t}\left[1-e^{-b t}\right]^{2}, & \text { if } N=N_{0}+2 ; \\[4pt] \cdots, & \text { if } \ldots\end{cases} \nonumber \]
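For example, the \(N=N_{0}+1\) entry follows directly from (3.1.12), using \(p_{N_{0}}(s)=e^{-b N_{0} s}\):
\[\begin{aligned} p_{N_{0}+1}(t) &=e^{-b\left(N_{0}+1\right) t}\, b N_{0} \int_{0}^{t} e^{b\left(N_{0}+1\right) s} e^{-b N_{0} s} d s \\[4pt] &=e^{-b\left(N_{0}+1\right) t}\, b N_{0} \int_{0}^{t} e^{b s} d s \\[4pt] &=N_{0} e^{-b\left(N_{0}+1\right) t}\left(e^{b t}-1\right) \\[4pt] &=N_{0} e^{-b N_{0} t}\left[1-e^{-b t}\right] . \end{aligned} \nonumber \]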
Although we will not need this, for completeness I give the complete solution. By defining the binomial coefficient as the number of ways one can select \(k\) objects from a set of \(n\) distinct objects, where the order of selection is immaterial, we have
\[\left(\begin{array}{l} n \\[4pt] k \end{array}\right)=\frac{n !}{k !(n-k) !} \nonumber \]
(read as "\(n\) choose \(k\)"). The general solution for \(p_{N}(t)\), \(N \geq N_{0}\), is known to be
\[p_{N}(t)=\left(\begin{array}{c} N-1 \\[4pt] N_{0}-1 \end{array}\right) e^{-b N_{0} t}\left[1-e^{-b t}\right]^{N-N_{0}} \nonumber \]
which statisticians call a shifted negative binomial distribution. The determination of the time-evolution of the probability mass function of \(N\) completely solves this stochastic problem.
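The closed form can be checked numerically: the probabilities should sum to one over \(N \geq N_{0}\) and reproduce the entries found by successive integration. A minimal sketch (the values of \(b\), \(N_{0}\), and \(t\) are arbitrary illustrative choices):

```python
import math

# Shifted negative binomial:
#   p_N(t) = C(N-1, N0-1) e^{-b N0 t} (1 - e^{-b t})^{N - N0}.
# Parameter values are illustrative, not fixed by the text.
b, N0, t = 0.8, 3, 1.2

def pmf(N):
    if N < N0:
        return 0.0
    return (math.comb(N - 1, N0 - 1)
            * math.exp(-b * N0 * t)
            * (1.0 - math.exp(-b * t)) ** (N - N0))

# The probabilities over N = N0, N0 + 1, ... should sum to 1.
total = sum(pmf(N) for N in range(N0, 500))
print(total)

# Spot-check against the entries obtained by successive integration.
print(pmf(N0), math.exp(-b * N0 * t))
print(pmf(N0 + 1), N0 * math.exp(-b * N0 * t) * (1.0 - math.exp(-b * t)))
```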
Usually of main interest are the mean and the variance of the population size. Although both could in principle be computed from the probability mass function, we will compute them directly from the differential equation for \(p_{N}\). The definitions of the mean population size \(\langle N\rangle\) and its variance \(\sigma^{2}\) are
\[\langle N\rangle=\sum_{N=0}^{\infty} N p_{N}, \quad \sigma^{2}=\sum_{N=0}^{\infty}(N-\langle N\rangle)^{2} p_{N} \nonumber \]
and we will make use of the equality
\[\sigma^{2}=\left\langle N^{2}\right\rangle-\langle N\rangle^{2} \nonumber \]
Multiplying the differential equation (3.1.3) by \(N\), summing over \(N\), and using \(p_{N}=0\) for \(N<N_{0}\), we obtain
\[\frac{d\langle N\rangle}{d t}=b\left[\sum_{N=N_{0}+1}^{\infty} N(N-1) p_{N-1}-\sum_{N=N_{0}}^{\infty} N^{2} p_{N}\right] \nonumber \]
Now, write \(N(N-1)=(N-1)(N-1+1)=(N-1)^{2}+(N-1)\), so that the first term on the right-hand side is
\[\begin{aligned} \sum_{N=N_{0}+1}^{\infty} N(N-1) p_{N-1} &=\sum_{N=N_{0}+1}^{\infty}(N-1)^{2} p_{N-1}+\sum_{N=N_{0}+1}^{\infty}(N-1) p_{N-1} \\[4pt] &=\sum_{N=N_{0}}^{\infty} N^{2} p_{N}+\sum_{N=N_{0}}^{\infty} N p_{N} \end{aligned} \nonumber \]
where the second equality was obtained by shifting the summation index downward by one. Therefore, we find the familiar Malthusian growth equation
\[\frac{d\langle N\rangle}{d t}=b\langle N\rangle . \nonumber \]
Together with the initial condition \(\langle N\rangle(0)=N_{0}\) , we can find the solution
\[\langle N\rangle(t)=N_{0} e^{b t} \nonumber \]
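This exponential growth of the mean can be checked by direct simulation of the birth process: with \(N\) individuals each waiting an exponential time before giving birth, the population as a whole waits an exponential time with rate \(bN\) until the next birth. A minimal Monte Carlo sketch (the values of \(b\), \(N_{0}\), \(T\), the seed, and the number of trials are arbitrary illustrative choices):

```python
import math
import random

# Monte Carlo estimate of <N>(T) for the pure birth process, to compare with
# N0 e^{bT}.  Parameter values and the seed are illustrative choices.
random.seed(12345)
b, N0, T, TRIALS = 0.5, 5, 1.0, 20000

def simulate():
    """Run one realization up to time T and return the final population size."""
    t, N = 0.0, N0
    while True:
        # with N individuals present, the waiting time to the next birth
        # is exponentially distributed with rate b*N
        t += random.expovariate(b * N)
        if t > T:
            return N
        N += 1

mean_est = sum(simulate() for _ in range(TRIALS)) / TRIALS
print(mean_est, N0 * math.exp(b * T))   # close for a large number of trials
```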
We proceed similarly to find \(\sigma^{2}\) by first determining the differential equation for \(\left\langle N^{2}\right\rangle\) . Multiplying the differential equation for \(p_{N},(3.1.3)\) , by \(N^{2}\) and summing over \(N\) results in
\[\frac{d\left\langle N^{2}\right\rangle}{d t}=b\left[\sum_{N=N_{0}+1}^{\infty} N^{2}(N-1) p_{N-1}-\sum_{N=N_{0}}^{\infty} N^{3} p_{N}\right] \nonumber \]
Here, we write \(N^{2}(N-1)=(N-1)(N-1+1)^{2}=(N-1)^{3}+2(N-1)^{2}+(N-1)\). Shifting the summation index downward as before, the first sum becomes \(\left\langle N^{3}\right\rangle+2\left\langle N^{2}\right\rangle+\langle N\rangle\); the \(\left\langle N^{3}\right\rangle\) term cancels against the second sum, and we obtain
\[\frac{d\left\langle N^{2}\right\rangle}{d t}-2 b\left\langle N^{2}\right\rangle=b\langle N\rangle \nonumber \]
Since \(\langle N\rangle\) is known, (3.1.22) is a first-order, linear, inhomogeneous equation for \(\left\langle N^{2}\right\rangle\) , which can be solved using an integrating factor. The solution obtained using (3.1.10) is
\[\left\langle N^{2}\right\rangle=e^{2 b t}\left(N_{0}^{2}+b \int_{0}^{t} e^{-2 b s}\langle N\rangle(s) d s\right) \nonumber \]
with \(\langle N\rangle\) given by (3.1.20). Performing the integration, we obtain
\[\left\langle N^{2}\right\rangle=e^{2 b t}\left[N_{0}^{2}+N_{0}\left(1-e^{-b t}\right)\right] \nonumber \]
Finally, using \(\sigma^{2}=\left\langle N^{2}\right\rangle-\langle N\rangle^{2}\), we arrive at our final result for the population mean and variance:
\[\langle N\rangle=N_{0} e^{b t}, \quad \sigma^{2}=N_{0} e^{2 b t}\left(1-e^{-b t}\right) \nonumber \]
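Both results can be verified against the shifted negative binomial distribution found earlier, by summing the probability mass function directly. A minimal sketch (the values of \(b\), \(N_{0}\), and \(t\) are arbitrary illustrative choices):

```python
import math

# Compute <N> and sigma^2 from the shifted negative binomial pmf and compare
# with N0 e^{bt} and N0 e^{2bt}(1 - e^{-bt}).  Parameter values are
# illustrative, not fixed by the text.
b, N0, t = 0.5, 4, 2.0

def pmf(N):
    return (math.comb(N - 1, N0 - 1)
            * math.exp(-b * N0 * t)
            * (1.0 - math.exp(-b * t)) ** (N - N0))

NS = range(N0, 3000)                       # truncation of the infinite sums
mean = sum(N * pmf(N) for N in NS)
second = sum(N * N * pmf(N) for N in NS)
var = second - mean ** 2

print(mean, N0 * math.exp(b * t))
print(var, N0 * math.exp(2 * b * t) * (1.0 - math.exp(-b * t)))
```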
The coefficient of variation \(c_{v}\) measures the standard deviation relative to the mean, and is here given by
\[\begin{aligned} c_{v} &=\sigma /\langle N\rangle \\[4pt] &=\sqrt{\frac{1-e^{-b t}}{N_{0}}} . \end{aligned} \nonumber \]
For large \(t\) , the coefficient of variation therefore goes like \(1 / \sqrt{N_{0}}\) , and is small when \(N_{0}\) is large. In the next section, we will determine the limiting form of the probability distribution for large \(N_{0}\) , recovering both the deterministic model and a Gaussian model approximation.