
3.2: Asymptotics of Large Initial Populations


    Our goal here is to obtain an expansion of the probability distribution in powers of \(1 / N_{0}\), valid to the leading orders; notice that \(1 / N_{0}\) is small if \(N_{0}\) is large. To zeroth order, that is, in the limit \(N_{0} \rightarrow \infty\), we will show that the deterministic model of population growth is recovered. To first order in \(1 / N_{0}\), we will show that the probability distribution is normal. The latter result will be seen to be a consequence of the well-known Central Limit Theorem of probability theory.

    We develop our expansion by working directly with the differential equation for \(p_{N}(t)\). Now, when the population size \(N\) is a discrete random variable (taking only the nonnegative integer values \(0,1,2, \ldots\)), \(p_{N}(t)\) is the probability mass function for \(N\). If \(N_{0}\) is large, then the discrete nature of \(N\) is inconsequential, and it is preferable to work with a continuous random variable and its probability density function. Accordingly, we define the random variable \(x=N / N_{0}\), and treat \(x\) as a continuous random variable, with \(0 \leq x<\infty\). Now, \(p_{N}(t)\) is the probability that the population is of size \(N\) at time \(t\), and the probability density function of \(x\), \(P(x, t)\), is defined such that \(\int_{a}^{b} P(x, t) d x\) is the probability that \(a \leq x \leq b\). The relationship between \(p\) and \(P\) can be determined by considering how to approximate a discrete probability distribution by a continuous one, that is, by defining \(P\) such that

    \[\begin{aligned} p_{N}(t) &=\int_{\left(N-\frac{1}{2}\right) / N_{0}}^{\left(N+\frac{1}{2}\right) / N_{0}} P(x, t) d x \\[4pt] &=P\left(N / N_{0}, t\right) / N_{0} \end{aligned} \nonumber \]

    where the last equality becomes exact as \(N_{0} \rightarrow \infty\). Therefore, the appropriate definition for \(P(x, t)\) is given by

    \[P(x, t)=N_{0} p_{N}(t), \quad x=N / N_{0} \tag{3.2.1} \]

    which satisfies

    \[\begin{aligned} \int_{0}^{\infty} P(x, t) d x &=\sum_{N=0}^{\infty} P\left(N / N_{0}, t\right)\left(1 / N_{0}\right) \\[4pt] &=\sum_{N=0}^{\infty} p_{N}(t) \\[4pt] &=1 \end{aligned} \nonumber \]

    the first equality (exact only when \(N_{0} \rightarrow \infty\)) being a Riemann sum approximation of the integral.
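
    This rescaling is easy to illustrate numerically. The sketch below (Python with NumPy and SciPy) uses a Poisson pmf with mean \(N_{0}\) purely as a stand-in for \(p_{N}\); it is not the distribution derived in this chapter, but it shows that the Riemann sum of the rescaled density recovers the total probability:

```python
import numpy as np
from scipy.stats import poisson

N0 = 1000
N = np.arange(0, 3 * N0)
p = poisson.pmf(N, mu=N0)  # a discrete pmf p_N (stand-in distribution)
P = N0 * p                 # rescaled density P(x) = N0 * p_N at x = N/N0

# Riemann sum with grid spacing 1/N0 recovers the total probability
print(np.sum(P) / N0)  # ~ 1.0
```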

    We now transform the infinite set of ordinary differential equations (3.1.3) for \(p_{N}(t)\) into a single partial differential equation for \(P(x, t)\). We multiply (3.1.3) by \(N_{0}\) and substitute \(N=N_{0} x, p_{N}(t)=P(x, t) / N_{0}\), and \(p_{N-1}(t)=P\left(x-\frac{1}{N_{0}}, t\right) / N_{0}\) to obtain

    \[\frac{\partial P(x, t)}{\partial t}=b\left[\left(N_{0} x-1\right) P\left(x-\frac{1}{N_{0}}, t\right)-N_{0} x P(x, t)\right] \tag{3.2.2} \]

    We next Taylor series expand \(P\left(x-1 / N_{0}, t\right)\) around \(x\), treating \(1 / N_{0}\) as a small parameter. That is, we make use of

    \[\begin{aligned} P\left(x-\frac{1}{N_{0}}, t\right) &=P(x, t)-\frac{1}{N_{0}} P_{x}(x, t)+\frac{1}{2 N_{0}^{2}} P_{x x}(x, t)-\ldots \\[4pt] &=\sum_{n=0}^{\infty} \frac{(-1)^{n}}{n ! N_{0}^{n}} \frac{\partial^{n} P}{\partial x^{n}} . \end{aligned} \nonumber \]

    The two leading terms proportional to \(N_{0}\) on the right-hand-side of (3.2.2) cancel exactly, and if we group the remaining terms in powers of \(1 / N_{0}\), we obtain for the first three leading terms in the expansion

    \[\begin{align} P_{t} &=-b\left[\left(x P_{x}+P\right)-\frac{1}{N_{0}}\left(\frac{x P_{x x}}{2 !}+\frac{P_{x}}{1 !}\right)+\frac{1}{N_{0}^{2}}\left(\frac{x P_{x x x}}{3 !}+\frac{P_{x x}}{2 !}\right)-\ldots\right] \nonumber \\[4pt] &=-b\left[(x P)_{x}-\frac{1}{N_{0} 2 !}(x P)_{x x}+\frac{1}{N_{0}^{2} 3 !}(x P)_{x x x}-\ldots\right] \tag{3.2.3} \end{align} \]

    and higher-order terms can be obtained by following the evident pattern.
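
    The cancellation and regrouping leading to (3.2.3) can be checked symbolically. Below is a minimal SymPy sketch, with the symbol epsilon standing in for \(1 / N_{0}\) and \(P\) left as an arbitrary function:

```python
import sympy as sp

x, t, e = sp.symbols('x t epsilon', positive=True)  # e stands for 1/N0
P = sp.Function('P')(x, t)

# Taylor polynomial of P(x - epsilon, t) through order epsilon^3
P_shift = sum((-e)**k / sp.factorial(k) * sp.diff(P, x, k) for k in range(4))

# right-hand side of (3.2.2), with b factored out and N0 replaced by 1/epsilon
rhs = sp.expand((x / e - 1) * P_shift - (x / e) * P)

# grouped form claimed in (3.2.3), through the same order
target = (-sp.diff(x * P, x) + e / 2 * sp.diff(x * P, x, 2)
          - e**2 / 6 * sp.diff(x * P, x, 3))

residual = sp.expand(rhs - target)
for k in range(3):
    print(k, sp.simplify(residual.coeff(e, k)))  # each coefficient is 0
```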

    Equation (3.2.3) may be further analyzed by a perturbation expansion of the probability density function in powers of \(1 / N_{0}\) :

    \[P(x, t)=P^{(0)}(x, t)+\frac{1}{N_{0}} P^{(1)}(x, t)+\frac{1}{N_{0}^{2}} P^{(2)}(x, t)+\ldots \tag{3.2.4} \]

    Here, the unknown functions \(P^{(0)}(x, t), P^{(1)}(x, t), P^{(2)}(x, t)\), etc. are to be determined by substituting the expansion (3.2.4) into \((3.2.3)\) and equating coefficients of powers of \(1 / N_{0}\). We thus obtain for the coefficients of \(\left(1 / N_{0}\right)^{0}\) and \(\left(1 / N_{0}\right)^{1}\),

    \[\begin{align} P_{t}^{(0)} &=-b\left(x P^{(0)}\right)_{x} \tag{3.2.5} \\[4pt] P_{t}^{(1)} &=-b\left[\left(x P^{(1)}\right)_{x}-\frac{1}{2}\left(x P^{(0)}\right)_{x x}\right] \tag{3.2.6} \end{align} \]

    Derivation of the deterministic model

    The zeroth-order term in the perturbation expansion (3.2.4),

    \[P^{(0)}(x, t)=\lim _{N_{0} \rightarrow \infty} P(x, t) \nonumber \]

    satisfies (3.2.5). Equation (3.2.5) is a first-order linear partial differential equation with a variable coefficient. One way to solve this equation is to try the ansatz

    \[P^{(0)}(x, t)=h(t) f(r), \quad r=r(x, t), \tag{3.2.8} \]

    together with the initial condition

    \[P^{(0)}(x, 0)=f(x) \nonumber \]

    since at \(t=0\), the probability distribution is assumed to be a known function. This initial condition imposes further useful constraints on the functions \(h(t)\) and \(r(x, t)\) :

    \[h(0)=1, \quad r(x, 0)=x \nonumber \]

    The partial derivatives of (3.2.8) are

    \[P_{t}^{(0)}=h^{\prime} f+h r_{t} f^{\prime}, \quad P_{x}^{(0)}=h r_{x} f^{\prime}, \nonumber \]

    which after substitution into (3.2.5) results in

    \[\left(h^{\prime}+b h\right) f+\left(r_{t}+b x r_{x}\right) h f^{\prime}=0 \nonumber \]

    This equation can be satisfied for any \(f\) provided that \(h(t)\) and \(r(x, t)\) satisfy

    \[h^{\prime}+b h=0, \quad r_{t}+b x r_{x}=0 \nonumber \]

    The first equation for \(h(t)\), together with the initial condition \(h(0)=1\), is easily solved to yield

    \[h(t)=e^{-b t} . \tag{3.2.14} \]

    To determine a solution for \(r(x, t)\), we try the technique of separation of variables. We write \(r(x, t)=X(x) T(t)\), and upon substitution into the differential equation for \(r(x, t)\), we obtain

    \[X T^{\prime}+b x X^{\prime} T=0 \nonumber \]

    and division by \(X T\) then yields the separated equation

    \[\frac{T^{\prime}}{T}=-b x \frac{X^{\prime}}{X} \tag{3.2.16} \]

    Since the left-hand side is independent of \(x\), and the right-hand side is independent of \(t\), both sides must equal a constant, independent of both \(x\) and \(t\). Now, our initial condition is \(r(x, 0)=x\), so that \(X(x) T(0)=x\). Without loss of generality, we can take \(T(0)=1\), so that \(X(x)=x\). The right-hand side of (3.2.16) is therefore equal to the constant \(-b\), and we obtain the differential equation and initial condition

    \[T^{\prime}+b T=0, \quad T(0)=1 \nonumber \]

    which we can solve to yield \(T(t)=e^{-b t}\). Therefore, our solution for \(r(x, t)\) is

    \[r(x, t)=x e^{-b t} \tag{3.2.18} \]

    By putting our solutions (3.2.14) and (3.2.18) together into our ansatz (3.2.8), we have obtained the general solution of the PDE:

    \[P^{(0)}(x, t)=e^{-b t} f\left(x e^{-b t}\right) \nonumber \]
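
    Before applying the initial condition, we can verify this general solution symbolically. The following SymPy sketch checks that it satisfies (3.2.5) for an arbitrary differentiable \(f\), and that it reduces to \(f(x)\) at \(t=0\):

```python
import sympy as sp

x, t, b = sp.symbols('x t b', positive=True)
f = sp.Function('f')  # arbitrary initial profile

P0 = sp.exp(-b * t) * f(x * sp.exp(-b * t))

# residual of P_t = -b (x P)_x, equation (3.2.5)
residual = sp.diff(P0, t) + b * sp.diff(x * P0, x)
print(sp.simplify(residual))  # 0

print(P0.subs(t, 0))  # f(x), recovering the initial condition
```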

    To determine \(f\), we apply the initial condition of the probability mass function, \(p_{N}(0)=\delta_{N, N_{0}}\). From (3.2.1), the corresponding initial condition on the probability density function is

    \[P(x, 0)= \begin{cases}N_{0} & \text { if } 1-\frac{1}{2 N_{0}} \leq x \leq 1+\frac{1}{2 N_{0}} \\[4pt] 0 & \text { otherwise }\end{cases} \nonumber \]

    In the limit \(N_{0} \rightarrow \infty\), \(P(x, 0) \rightarrow P^{(0)}(x, 0)=\delta(x-1)\), where \(\delta(x-1)\) is the Dirac delta-function centered at \(x=1\). The delta-function was introduced by Dirac for use in quantum physics and now finds many uses in applied mathematics. It can be defined by requiring that, for any function \(g(x)\),

    \[\int_{-\infty}^{+\infty} g(x) \delta(x) d x=g(0) \nonumber \]

    The usual view of the delta-function \(\delta(x-a)\) is that it is zero everywhere except at \(x=a\) at which it is infinite, and its integral is one. It is not really a function, but it is what mathematicians call a distribution.

    Now, since \(P^{(0)}(x, 0)=f(x)=\delta(x-1)\), our solution becomes

    \[P^{(0)}(x, t)=e^{-b t} \delta\left(x e^{-b t}-1\right) \tag{3.2.22} \]

    This can be rewritten by noting that (letting \(y=a x-c\), and taking \(a>0\)),

    \[\begin{aligned} \int_{-\infty}^{+\infty} g(x) \delta(a x-c) d x &=\frac{1}{a} \int_{-\infty}^{+\infty} g((y+c) / a) \delta(y) d y \\[4pt] &=\frac{1}{a} g(c / a) \end{aligned} \nonumber \]

    yielding the identity

    \[\delta(a x-c)=\frac{1}{a} \delta\left(x-\frac{c}{a}\right)\nonumber \]

    From this, we can rewrite our solution \((3.2.22)\) in the more intuitive form

    \[P^{(0)}(x, t)=\delta\left(x-e^{b t}\right) \tag{3.2.23} \]

    Using \((3.2.23)\), the zeroth-order expected value of \(x\) is

    \[\begin{aligned} \left\langle x_{0}\right\rangle &=\int_{0}^{\infty} x P^{(0)}(x, t) d x \\[4pt] &=\int_{0}^{\infty} x \delta\left(x-e^{b t}\right) d x \\[4pt] &=e^{b t} \end{aligned} \nonumber \]

    while the zeroth-order variance is

    \[\begin{aligned} \sigma_{x_{0}}^{2} &=\left\langle x_{0}^{2}\right\rangle-\left\langle x_{0}\right\rangle^{2} \\[4pt] &=\int_{0}^{\infty} x^{2} P^{(0)}(x, t) d x-e^{2 b t} \\[4pt] &=\int_{0}^{\infty} x^{2} \delta\left(x-e^{b t}\right) d x-e^{2 b t} \\[4pt] &=e^{2 b t}-e^{2 b t} \\[4pt] &=0 \end{aligned} \nonumber \]

    Thus, in the infinite population limit, the random variable \(x\) has zero variance and is therefore no longer random, but follows \(x=e^{b t}\) deterministically. We say that the probability distribution of \(x\) becomes sharp in the limit of large population sizes. This illustrates a general principle: when stochastic effects are unimportant, large populations can be modeled deterministically, which often greatly simplifies a mathematical model.
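
    This sharpening can be observed directly in simulation. The sketch below (Python with NumPy; the function and variable names are illustrative choices) runs a Gillespie simulation of the underlying pure birth process of (3.1.3), in which a population of size \(N\) waits an exponential time with mean \(1 /(b N)\) until the next birth:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_population(N0, b, t_end):
    """Gillespie simulation of the pure birth process up to time t_end."""
    N, t = N0, 0.0
    while True:
        t += rng.exponential(1.0 / (b * N))  # waiting time to next birth
        if t > t_end:
            return N
        N += 1

b, t_end = 1.0, 1.0  # so that e^{b t} = 2.718...
for N0 in (10, 100, 1000):
    x = np.array([simulate_population(N0, b, t_end) for _ in range(500)]) / N0
    # sample mean stays near e^{b t}; spread shrinks roughly like 1/sqrt(N0)
    print(N0, x.mean().round(3), x.std().round(3))
```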

    Derivation of the normal probability distribution

    We now consider the first-order term in the perturbation expansion (3.2.4), which satisfies (3.2.6). We do not know how to solve (3.2.6) directly, so we will attempt to find a solution following a more circuitous route. First, we proceed by computing the moments of the probability distribution. We have

    \[\begin{aligned} \left\langle x^{n}\right\rangle &=\int_{0}^{\infty} x^{n} P(x, t) d x \\[4pt] &=\int_{0}^{\infty} x^{n} P^{(0)}(x, t) d x+\frac{1}{N_{0}} \int_{0}^{\infty} x^{n} P^{(1)}(x, t) d x+\ldots \\[4pt] &=\left\langle x_{0}^{n}\right\rangle+\frac{1}{N_{0}}\left\langle x_{1}^{n}\right\rangle+\ldots, \end{aligned} \nonumber \]

    where the last equality defines \(\left\langle x_{0}^{n}\right\rangle\), etc. Now, using (3.2.23),

    \[\begin{align} \left\langle x_{0}^{n}\right\rangle &=\int_{0}^{\infty} x^{n} P^{(0)}(x, t) d x \nonumber \\[4pt] &=\int_{0}^{\infty} x^{n} \delta\left(x-e^{b t}\right) d x \nonumber \\[4pt] &=e^{n b t} \tag{3.2.24} \end{align} \]

    To determine \(\left\langle x_{1}^{n}\right\rangle\), we use \((3.2.6)\). Multiplying by \(x^{n}\) and integrating, we have

    \[\frac{d\left\langle x_{1}^{n}\right\rangle}{d t}=-b\left[\int_{0}^{\infty} x^{n}\left(x P^{(1)}\right)_{x} d x-\frac{1}{2} \int_{0}^{\infty} x^{n}\left(x P^{(0)}\right)_{x x} d x\right] \tag{3.2.25} \]

    We integrate by parts to remove the derivatives of \(x P\), assuming that \(x P\) and all its derivatives vanish on the boundaries of integration, where \(x\) is equal to zero or infinity. We have

    \[\begin{aligned} \int_{0}^{\infty} x^{n}\left(x P^{(1)}\right)_{x} d x &=-\int_{0}^{\infty} n x^{n} P^{(1)} d x \\[4pt] &=-n\left\langle x_{1}^{n}\right\rangle \end{aligned} \nonumber \]

    and

    \[\begin{aligned} \frac{1}{2} \int_{0}^{\infty} x^{n}\left(x P^{(0)}\right)_{x x} d x &=-\frac{n}{2} \int_{0}^{\infty} x^{n-1}\left(x P^{(0)}\right)_{x} d x \\[4pt] &=\frac{n(n-1)}{2} \int_{0}^{\infty} x^{n-1} P^{(0)} d x \\[4pt] &=\frac{n(n-1)}{2}\left\langle x_{0}^{n-1}\right\rangle \end{aligned} \nonumber \]
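
    As a spot-check of these integration-by-parts identities, and of the assumption that the boundary terms vanish, we can evaluate both sides for a concrete rapidly decaying density. A minimal SymPy sketch, using the arbitrary choices \(P=e^{-x}\) and \(n=3\):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = 3              # any fixed positive integer works here
P = sp.exp(-x)     # arbitrary test density, vanishing at infinity

# first identity: integral of x^n (xP)_x equals -n <x^n>
lhs1 = sp.integrate(x**n * sp.diff(x * P, x), (x, 0, sp.oo))
rhs1 = -n * sp.integrate(x**n * P, (x, 0, sp.oo))
print(lhs1, rhs1)  # both -18

# second identity: integral of x^n (xP)_xx equals n(n-1) <x^{n-1}>
lhs2 = sp.integrate(x**n * sp.diff(x * P, x, 2), (x, 0, sp.oo))
rhs2 = n * (n - 1) * sp.integrate(x**(n - 1) * P, (x, 0, sp.oo))
print(lhs2, rhs2)  # both 12
```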

    Therefore, after integration by parts, (3.2.25) becomes

    \[\frac{d\left\langle x_{1}^{n}\right\rangle}{d t}=b\left[n\left\langle x_{1}^{n}\right\rangle+\frac{n(n-1)}{2}\left\langle x_{0}^{n-1}\right\rangle\right] \tag{3.2.26} \]

    Equation (3.2.26) is a first-order linear inhomogeneous differential equation and can be solved using an integrating factor (see (3.1.10) and the preceding discussion). Solving this differential equation using (3.2.24) and the initial condition \(\left\langle x_{1}^{n}\right\rangle(0)=0\), we obtain

    \[\left\langle x_{1}^{n}\right\rangle=\frac{n(n-1)}{2} e^{n b t}\left(1-e^{-b t}\right) \tag{3.2.27} \]
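
    Substituting this result back into (3.2.26), with \(\left\langle x_{0}^{n-1}\right\rangle=e^{(n-1) b t}\) from (3.2.24), confirms both the differential equation and the initial condition. A minimal SymPy sketch:

```python
import sympy as sp

t, b, n = sp.symbols('t b n', positive=True)

# candidate solution (3.2.27)
y = sp.Rational(1, 2) * n * (n - 1) * sp.exp(n * b * t) * (1 - sp.exp(-b * t))
# right-hand side of (3.2.26), using <x_0^{n-1}> = e^{(n-1)bt}
rhs = b * (n * y + sp.Rational(1, 2) * n * (n - 1) * sp.exp((n - 1) * b * t))

print(sp.simplify(sp.diff(y, t) - rhs))  # 0
print(y.subs(t, 0))                      # 0, the initial condition
```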

    The probability distribution function, accurate to order \(1 / N_{0}\), may be obtained by making use of the so-called moment generating function \(\Psi(s)\), defined as

    \[\begin{aligned} \Psi(s) &=\left\langle e^{s x}\right\rangle \\[4pt] &=1+s\langle x\rangle+\frac{s^{2}}{2 !}\left\langle x^{2}\right\rangle+\frac{s^{3}}{3 !}\left\langle x^{3}\right\rangle+\ldots \\[4pt] &=\sum_{n=0}^{\infty} \frac{s^{n}\left\langle x^{n}\right\rangle}{n !} \end{aligned} \nonumber \]
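
    As a concrete illustration of this definition, the sketch below (Python with NumPy) estimates \(\Psi(s)=\left\langle e^{s x}\right\rangle\) by averaging over samples of a standard normal random variable, whose exact MGF is \(e^{s^{2} / 2}\) by (3.2.32) below with \(\langle x\rangle=0\) and \(\sigma^{2}=1\):

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.standard_normal(10**6)  # x ~ N(0, 1), an arbitrary test case

for s in (0.1, 0.5, 1.0):
    estimate = np.mean(np.exp(s * samples))  # sample average of e^{sx}
    print(s, estimate, np.exp(s**2 / 2))     # matches the exact MGF
```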

    To order \(1 / N_{0}\), we have

    \[\Psi(s)=\sum_{n=0}^{\infty} \frac{s^{n}\left\langle x_{0}^{n}\right\rangle}{n !}+\frac{1}{N_{0}} \sum_{n=0}^{\infty} \frac{s^{n}\left\langle x_{1}^{n}\right\rangle}{n !}+\mathrm{O}\left(1 / N_{0}^{2}\right) \nonumber \]

    Now, using (3.2.24),

    \[\begin{aligned} \sum_{n=0}^{\infty} \frac{s^{n}\left\langle x_{0}^{n}\right\rangle}{n !} &=\sum_{n=0}^{\infty} \frac{\left(s e^{b t}\right)^{n}}{n !} \\[4pt] &=e^{s e^{b t}} \end{aligned} \nonumber \]

    and using \((3.2.27)\),

    \[\begin{aligned} \sum_{n=0}^{\infty} \frac{s^{n}\left\langle x_{1}^{n}\right\rangle}{n !} &=\frac{1}{2}\left(1-e^{-b t}\right) \sum_{n=0}^{\infty} \frac{1}{n !} n(n-1)\left(s e^{b t}\right)^{n} \\[4pt] &=\frac{1}{2}\left(1-e^{-b t}\right) s^{2} e^{2 b t} \sum_{n=0}^{\infty} \frac{1}{n !} n(n-1)\left(s e^{b t}\right)^{n-2} \\[4pt] &=\frac{1}{2}\left(1-e^{-b t}\right) s^{2} \sum_{n=0}^{\infty} \frac{1}{n !} \frac{\partial^{2}}{\partial s^{2}}\left(s e^{b t}\right)^{n} \\[4pt] &=\frac{1}{2}\left(1-e^{-b t}\right) s^{2} \frac{\partial^{2}}{\partial s^{2}} \sum_{n=0}^{\infty} \frac{\left(s e^{b t}\right)^{n}}{n !} \\[4pt] &=\frac{1}{2}\left(1-e^{-b t}\right) s^{2} \frac{\partial^{2}}{\partial s^{2}}\left(e^{s e^{b t}}\right) \\[4pt] &=\frac{1}{2}\left(1-e^{-b t}\right) s^{2} e^{2 b t} e^{s e^{b t}} \end{aligned} \nonumber \]

    Therefore,

    \[\Psi(s)=e^{s e^{b t}}\left(1+\frac{1}{2 N_{0}}\left(1-e^{-b t}\right) s^{2} e^{2 b t}+\ldots\right) \tag{3.2.29} \]

    We can recognize the parenthetical term of (3.2.29) as a Taylor-series expansion of an exponential function truncated to first-order, i.e.,

    \[\exp \left(\frac{1}{2 N_{0}}\left(1-e^{-b t}\right) s^{2} e^{2 b t}\right)=1+\frac{1}{2 N_{0}}\left(1-e^{-b t}\right) s^{2} e^{2 b t}+\mathrm{O}\left(1 / N_{0}^{2}\right) \nonumber \]

    Therefore, to first-order in \(1 / N_{0}\), we have

    \[\Psi(s)=\exp \left(s e^{b t}+\frac{1}{2 N_{0}}\left(1-e^{-b t}\right) s^{2} e^{2 b t}\right)+\mathrm{O}\left(1 / N_{0}^{2}\right) \tag{3.2.31} \]
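
    The moments generated by this \(\Psi(s)\) can be checked against (3.2.24) and (3.2.27). The SymPy sketch below differentiates the MGF and confirms agreement through first order, with the symbol epsilon standing in for \(1 / N_{0}\):

```python
import sympy as sp

s, t, b, e = sp.symbols('s t b epsilon', positive=True)  # e stands for 1/N0

Psi = sp.exp(s * sp.exp(b * t)
             + e / 2 * (1 - sp.exp(-b * t)) * s**2 * sp.exp(2 * b * t))

for n in range(1, 5):
    moment = sp.diff(Psi, s, n).subs(s, 0)  # <x^n> from the MGF
    expected = sp.exp(n * b * t) * (1 + e * sp.Rational(n * (n - 1), 2)
                                    * (1 - sp.exp(-b * t)))
    residual = sp.expand(moment - expected)
    # agreement holds through first order in epsilon; both printed terms are 0
    print(n, sp.simplify(residual.coeff(e, 0)), sp.simplify(residual.coeff(e, 1)))
```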

    Standard books on probability theory (e.g., A First Course in Probability by Sheldon Ross, p. 365) detail the derivation of the moment generating function of a normal random variable:

    \[\Psi(s)=\exp \left(\langle x\rangle s+\frac{1}{2} \sigma^{2} s^{2}\right), \quad \text { for a normal random variable; } \tag{3.2.32} \]

    and comparing (3.2.32) with (3.2.31) shows us that the probability distribution \(P(x, t)\) to first-order in \(1 / N_{0}\) is normal with the mean and variance given by

    \[\langle x\rangle=e^{b t}, \quad \sigma_{x}^{2}=\frac{1}{N_{0}} e^{2 b t}\left(1-e^{-b t}\right) \tag{3.2.33} \]

    The mean and variance of \(x=N / N_{0}\) are equivalent to those derived for \(N\) in (3.1.25), but now we learn that \(N\) is approximately normal for large populations.

    The appearance of a normal probability distribution (also called a Gaussian probability distribution) in a first-order expansion is in fact a particular case of the Central Limit Theorem, one of the most important and useful theorems in probability and statistics. We state here a simple version of this theorem without proof:

    Central Limit Theorem: Suppose that \(X_{1}, X_{2}, \ldots, X_{n}\) are independent and identically distributed (iid) random variables with mean \(\langle X\rangle\) and variance \(\sigma_{X}^{2}\). Then for sufficiently large \(n\), the probability distribution of their average, the random variable \(Z=\frac{1}{n} \sum_{i=1}^{n} X_{i}\), is well approximated by a Gaussian with mean \(\langle X\rangle\) and variance \(\sigma^{2}=\sigma_{X}^{2} / n\).

    The central limit theorem can be applied directly to our problem. Consider that our population consists of \(N_{0}\) founders. If \(m_{i}(t)\) denotes the number of individuals descended from founder \(i\) at time \(t\) (including the still-living founder), then the total number of individuals at time \(t\) is \(N(t)=\sum_{i=1}^{N_{0}} m_{i}(t)\), and the average number of descendants of a single founder is \(x(t)=N(t) / N_{0}\). If the mean number of descendants of a single founder is \(\langle m\rangle\), with variance \(\sigma_{m}^{2}\), then by applying the central limit theorem for large \(N_{0}\), the probability distribution function of \(x\) is well approximated by a Gaussian with mean \(\langle x\rangle=\langle m\rangle\) and variance \(\sigma_{x}^{2}=\sigma_{m}^{2} / N_{0}\). Comparing with our results (3.2.33), we find \(\langle m\rangle=e^{b t}\) and \(\sigma_{m}^{2}=e^{2 b t}\left(1-e^{-b t}\right)\).
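
    This decomposition is also straightforward to simulate. The sketch below (Python with NumPy; the names are illustrative choices) grows each founder's lineage with the same Gillespie scheme as before and compares the sample mean and variance of \(m\) with the values found above:

```python
import numpy as np

rng = np.random.default_rng(2)

def lineage_size(b, t_end):
    """Number of descendants of one founder at time t_end (pure birth)."""
    N, t = 1, 0.0
    while True:
        t += rng.exponential(1.0 / (b * N))
        if t > t_end:
            return N
        N += 1

b, t_end, N0 = 1.0, 1.0, 10000
m = np.array([lineage_size(b, t_end) for _ in range(N0)])

print(m.mean(), np.exp(b * t_end))                                # <m> = e^{bt}
print(m.var(), np.exp(2 * b * t_end) * (1 - np.exp(-b * t_end)))  # sigma_m^2
# x = m.mean() is then approximately normal with mean e^{bt} and variance
# sigma_m^2 / N0, in agreement with (3.2.33)
```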


    This page titled 3.2: Asymptotics of Large Initial Populations is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Jeffrey R. Chasnov via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.