
08 Special distributions and their properties

A small number of distributions come up again and again in real-life applications of probability theory and as answers to natural theoretical questions. These are the so-called special distributions. In this chapter we survey the most important of the special distributions and some of their properties. The properties are either obvious identities, restatements of known results, or are left as (strongly recommended) exercises.

Below we use the following notation: if \(D_1\) and \(D_2\) are probability distributions, \(D_1 \eqdist D_2\) denotes that they are equal; \(D_1\indplus D_2\) denotes the distribution of the random variable \(X+Y\) where \(X\sim D_1\), \(Y\sim D_2\) and \(X,Y\) are independent. Similarly, \(\indsum_{k=1}^n D_k\) denotes the distribution of the random variable \(\sum_{k=1}^n X_k\) where \(X_1,\ldots,X_n\) are independent random variables such that \(X_k \sim D_k\) for \(k=1,\ldots,n\).

The Bernoulli distribution

The Bernoulli distribution models the probabilistic experiment of a single coin toss. We say that \(X\) has the Bernoulli distribution with parameter \(0<p<1\), and denote \(X \sim \berdist(p)\), if \(X\) satisfies

\[ \prob(X=1)=p = 1-\prob(X=0).\]

Properties:

  1. \(\expec X = p\), \(\var(X) = p(1-p)\).
  2. \(\berdist(1/2)\) is the distribution that maximizes the variance \(\var(X)\) subject to the constraint that \(0\le X\le 1\).
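
For instance, property 2 follows from the observation that any random variable satisfying \(0\le X\le 1\) also satisfies \(X^2\le X\), so that

\[ \var(X) = \expec(X^2) - (\expec X)^2 \le \expec X - (\expec X)^2 \le \tfrac{1}{4}, \]

with equality throughout exactly when \(X\) takes only the values \(0\) and \(1\) and \(\expec X = \tfrac12\), i.e., when \(X\sim \berdist(1/2)\).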

The binomial distribution

We say that \(X\) has the binomial distribution with parameters \(n\ge 1\) and \(0<p<1\), and denote \(X\sim \bindist(n,p)\), if \(X\) satisfies

\[ \prob(X = k) = \binom{n}{k} p^k (1-p)^{n-k} \qquad (0\le k\le n).\]

Properties:

  1. \(\bindist(1,p) \eqdist \berdist(p)\).
  2. \(\bindist(n,p) \indplus \bindist(m,p) \eqdist \bindist(n+m,p)\).
  3. \(\bindist(n,p) \eqdist \indsum_{k=1}^n \berdist(p)\). That is, the binomial distribution models the number of successes when \(n\) identical experiments are performed independently, where each experiment has probability \(p\) of success.
  4. If \(X\sim \bindist(n,p)\) then \(\expec X = np\), \(\var(X) = np(1-p)\).
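
For example, property 4 follows immediately from property 3: writing \(X = W_1 + \ldots + W_n\) with \(W_1,\ldots,W_n\) i.i.d.\ \(\berdist(p)\), linearity of expectation and additivity of the variance for independent summands give

\[ \expec X = \sum_{k=1}^n \expec W_k = np, \qquad \var(X) = \sum_{k=1}^n \var(W_k) = np(1-p). \]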

The geometric distribution

We say that \(X\) has the geometric distribution with parameter \(0<p<1\), and denote \(X\sim \geomdist(p)\), if \(X\) satisfies

\[ \prob(X=k) = p(1-p)^{k-1} \qquad (k \ge 1). \]

Some authors prefer a slightly different convention whereby the geometric random variables take nonnegative values (including \(0\)) rather than only positive values. Thus, denote \(X' \sim \geomzdist(p)\), and say that \(X'\) has the geometric distribution starting from \(0\), if it satisfies

\[ \prob(X'=k) = p(1-p)^{k} \qquad (k \ge 0).\]

Properties:

  1. \(\geomdist(p) \eqdist \geomzdist(p) + 1\).
  2. If \(W_1,W_2,W_3, \ldots\) is a sequence of i.i.d.\ r.v.'s with distribution \(\berdist(p)\), then \[ X = \min\{ k\ge 1\,:\, W_k = 1\}  \sim \geomdist(p). \] That is, the distribution \(\geomdist(p)\) models the number of identical independent experiments we had to perform to get the first successful outcome, when each experiment has probability \(p\) of success. The variant \(\geomzdist(p)\) corresponds to the number of \emph{failed} experiments before the first success.
  3. The geometric distribution has the (discrete) \textbf{lack of memory property}. More precisely, if \(X \sim \geomdist(p)\) then \[ \prob(X > n+k \ |\ X > k) = \prob(X > n)\qquad (n,k\ge 1). \] (A short verification is given after this list.)
  4. If \(X\sim \geomdist(p)\) then \(\expec X = \frac1p\), \(\var(X) = \frac{1-p}{p^2}\).
  5. If \(X'\sim \geomzdist(p)\) then \(\expec X' = \frac{1-p}{p}\), \(\var(X') = \frac{1-p}{p^2}\).
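
To verify the lack of memory property (property 3), note that \(\prob(X>m) = (1-p)^m\) for every integer \(m\ge 0\), since the event \(\{X>m\}\) means that the first \(m\) experiments all failed. Therefore

\[ \prob(X > n+k \,|\, X > k) = \frac{(1-p)^{n+k}}{(1-p)^k} = (1-p)^n = \prob(X>n). \]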

The negative binomial distribution

We say that \(X\) has the negative binomial distribution with parameters \(m\ge1\) and \(0<p<1\), and denote \(X\sim \nbdist(m,p)\), if \(X\) satisfies

\[ \prob(X=k) = \binom{k+m-1}{k} p^m (1-p)^k \qquad (k\ge 0). \]

Properties:

  1. \(\nbdist(1,p) \eqdist \geomzdist(p)\).
  2. \(\nbdist(m,p) \indplus \nbdist(n,p) \eqdist \nbdist(n+m,p)\).
  3. If \(W_1,W_2,W_3, \ldots\) is a sequence of i.i.d.\ r.v.'s with distribution \(\berdist(p)\), then \[ X = \min\left\{ k\ge 0\,:\, \sum_{j=1}^{k+m} W_j = m\right\}  \sim \nbdist(m,p). \] In words, when performing a sequence of identical experiments, each with probability \(p\) of success, the number of failures observed before the \(m\)th success is distributed according to \(\nbdist(m,p)\).
  4. \(\nbdist(m,p) \eqdist \indsum_{k=1}^m \geomzdist(p)\).
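
For example, combining property 4 with the mean and variance of \(\geomzdist(p)\) computed in the previous section gives, for \(X\sim \nbdist(m,p)\),

\[ \expec X = \frac{m(1-p)}{p}, \qquad \var(X) = \frac{m(1-p)}{p^2}. \]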

The Poisson distribution

We say that \(X\) has the Poisson distribution with parameter \(\lambda>0\), and denote \(X\sim \poissondist(\lambda)\), if \(X\) satisfies

\[ \prob(X=k) = e^{-\lambda} \frac{\lambda^k}{k!} \qquad (k\ge 0).\]

Properties:

  1. \(\poissondist(\lambda) \indplus \poissondist(\mu) \eqdist \poissondist(\lambda+\mu)\).
  2. The Poisson distribution is the limit of the binomial distributions \(\bindist(n,p)\) where the number \(n\) of experiments tends to infinity and the probability \(p\) of success in each individual experiment goes to \(0\) in such a way that the mean number \(np\) of successes stays fixed. More precisely, if \(X\sim \poissondist(\lambda)\) and for each \(n\), \(W_n\) is a r.v. with distribution \(\bindist(n,\lambda/n)\), then \[ \prob(W_n = k) \xrightarrow[n\to\infty]{} \prob(X=k) \qquad (k\ge 0).\] (This is known as the \textbf{law of rare events}; see section \ref{sec:poisson-limit} for the proof of a similar result that holds in much greater generality.)
  3. If \(X \sim \poissondist(\lambda)\) then \(\expec X = \lambda\), \(\var(X)=\lambda\).
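
For each fixed \(k\), the convergence in property 2 can be checked directly:

\[ \prob(W_n=k) = \binom{n}{k}\left(\frac{\lambda}{n}\right)^k \left(1-\frac{\lambda}{n}\right)^{n-k} = \frac{\lambda^k}{k!}\cdot \frac{n(n-1)\cdots(n-k+1)}{n^k}\left(1-\frac{\lambda}{n}\right)^{n-k} \xrightarrow[n\to\infty]{} \frac{\lambda^k}{k!}\, e^{-\lambda}, \]

since the middle factor tends to \(1\) and \(\left(1-\frac{\lambda}{n}\right)^{n-k} \to e^{-\lambda}\).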

 

The uniform distribution

We say that \(X\) has the uniform distribution in the interval \([a,b]\), and denote \(X\sim U[a,b]\), if \(X\) has density function

\[ f_X(x) = \begin{cases} \frac{1}{b-a} & \textrm{if }a<x<b, \\ 0 & \textrm{otherwise}, \end{cases}\]

or equivalently if the c.d.f.\ of \(X\) is given by

\[ F_X(x) = \begin{cases} 0 & \textrm{if }x<a, \\
\frac{x-a}{b-a} & \textrm{if }a\le x\le b, \\
1 & \textrm{if }x>b. \end{cases} \]

Properties:

  1. \(\expec(X) = \frac{a+b}{2}\), \(\var(X) = \frac{(b-a)^2}{12}\).
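
Both formulas in property 1 follow from elementary integrals; for the variance, for example,

\[ \expec(X^2) = \int_a^b \frac{x^2}{b-a}\,dx = \frac{a^2+ab+b^2}{3}, \qquad \var(X) = \frac{a^2+ab+b^2}{3} - \left(\frac{a+b}{2}\right)^2 = \frac{(b-a)^2}{12}. \]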

The normal distribution

We say that \(X\) has the normal (a.k.a.\ Gaussian) distribution with mean \(\mu\) and variance \(\sigma^2\), and denote \(X\sim N(\mu,\sigma^2)\), if \(X\) has density function

\[ f_X(x) = \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2} \qquad (x\in\R). \]

In particular, the standard normal distribution is the distribution \(N(0,1)\), whose density function is given by

\[ f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \qquad (x\in\R).\]

Properties:

  1. If \(X\sim N(\mu,\sigma^2)\) then \(\expec(X)=\mu\), \(\var(X) =\sigma^2\).
  2. \(N(\mu_1,\sigma_1^2) \indplus N(\mu_2,\sigma_2^2) = N(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2)\).
  3. If \(X,Y\sim N(0,1)\) are independent and standard normal then \(\frac{1}{\sqrt{2}}(X+Y) \sim N(0,1)\).
  4. More generally, if \(X_1,\ldots,X_n \sim N(0,1)\) are independent standard normal r.v.s then

\[ \frac{1}{\sqrt{n}}(X_1+\ldots+X_n) \sim N(0,1), \]
and also
\[ \sum_{j=1}^n \alpha_j X_j \sim N(0,1) \]
if \(\alpha_1,\ldots,\alpha_n\) are real numbers such that \(\sum_j \alpha_j^2 = 1\). (Geometrically,
\(\boldsymbol{\alpha}\cdot \mathbf{X} = \sum_{j=1}^n \alpha_j X_j\) can be interpreted as the projection of the random vector \((X_1,\ldots,X_n)\) in \(\R^n\) in the direction of the unit vector \(\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_n)\).)

  5. If \(X\sim N(0,1)\) then \[ \expec(X^k) = \begin{cases} 0 & \textrm{if \(k\) is odd}, \\ 1\cdot 3 \cdot 5 \cdots (k-1) & \textrm{if \(k\) is even}. \end{cases} \] (A verification is given after this list.)

  6. The normal distribution is the single most important distribution in probability! The theoretical reason for this is the Central Limit Theorem, a result we will discuss in detail in chapters~11--14.
  7. The polar decomposition of a bivariate standard normal vector: given a pair \((X,Y)\) of random variables which in the polar representation are written as \(X=R\cos\Theta\), \(Y=R\sin \Theta\), where \(R>0\) and \(0\le \Theta<2\pi\), we have

\begin{align*}
X,Y\sim N(0,1), & \ X,Y\textrm{ are independent }
\\ & \iff R^2 \sim \expdist(1/2), \ \Theta\sim U[0,2\pi]\ \textrm{and \(R,\Theta\) are independent.}
\end{align*}
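
As an illustration, the moment formula in property 5 follows from an integration by parts: writing \(\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\) and using the relation \(\varphi'(x) = -x\varphi(x)\), for \(k\ge 2\) we get

\[ \expec(X^k) = \int_{-\infty}^\infty x^{k-1}\cdot x\varphi(x)\,dx = \Big[ -x^{k-1}\varphi(x) \Big]_{-\infty}^{\infty} + (k-1)\int_{-\infty}^\infty x^{k-2}\varphi(x)\,dx = (k-1)\,\expec(X^{k-2}). \]

Iterating this recursion starting from \(\expec(X^0)=1\) and \(\expec(X)=0\) gives the stated values.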

The exponential distribution

We say that \(X\) has the exponential distribution with parameter \(\lambda>0\), and denote \(X\sim \expdist(\lambda)\), if \(X\) has density function

\[ f_X(x) = \lambda e^{-\lambda x} \qquad (x\ge 0) \]

and the associated c.d.f.

\[ F_X(x) = \begin{cases} 0 & \textrm{if }x<0, \\
1-e^{-\lambda x} & \textrm{if }x\ge 0.
\end{cases}
\]

Properties:

  1. \(\lambda\) has the role of an (inverse) \textbf{scale parameter}, in the sense that for \(c>0\) we have \(c\, \expdist(\lambda) \eqdist \expdist(\lambda / c)\); i.e., scaling an exponential r.v.\ by a factor \(c\) gives a new exponential r.v.\ whose parameter is divided by \(c\).
  2. The exponential distribution satisfies the \textbf{lack of memory property}. More precisely, if \(X\sim \expdist(\lambda)\) then \[ \prob(X > t+s \,|\, X>t) = \prob(X>s) \qquad (t,s>0). \] Furthermore, it is not hard to show that the exponential distribution is the unique distribution on \([0,\infty)\) satisfying this property.
  3. If \(X\sim \expdist(\lambda)\) then \(\expec(X) = \frac{1}{\lambda}\), \(\var(X) = \frac{1}{\lambda^2}\).
  4. The exponential distribution can be thought of as a scaling limit of geometric random variables, when the geometric distribution is interpreted as measuring \emph{time} rather than the number of experiments, and time is scaled so that the i.i.d.\ Bernoulli experiments are performed more and more frequently, but are becoming less and less probable to succeed, in such a way that the mean number of successful experiments per unit of time remains constant. More precisely, if \(\lambda>0\) is fixed, \(X\sim \expdist(\lambda)\), and for each \(n\) (larger than \(\lambda\)) we let \(W_n\) denote a random variable with distribution \(\geomdist(\lambda/n)\), then we have \[ \prob\left(\frac{1}{n} W_n > t\right)  \xrightarrow[n\to\infty]{} \prob(X > t) \qquad (t>0). \]
  5. If \(X\sim \expdist(\lambda)\) and \(Y \sim \expdist(\mu)\) are independent r.v.s then \(\min(X,Y) \sim \expdist(\lambda+\mu)\).
  6. If \(X_1,X_2,\ldots\) are i.i.d.\ \(\expdist(1)\) random variables, and we define the cumulative sums \(S_0=0\), \(S_n = \sum_{k=1}^n X_k\), then for each \(\lambda>0\) the random variable \[ N(\lambda) = \max\{ n\ge 0\,:\, S_n \le \lambda \} \] satisfies \(N(\lambda) \sim \poissondist(\lambda)\). \textbf{Note.} One can consider \(N(\lambda)\) not just for a single value of \(\lambda\) but as the entire family \(N(t)\), where \(t>0\) is a parameter denoting time. When considered as such a family, \((N(t))_{t>0}\) is called a \textbf{Poisson process}.
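
For example, the lack of memory property (property 2) is immediate from the c.d.f.: since \(\prob(X>t) = e^{-\lambda t}\) for \(t\ge 0\), we have

\[ \prob(X>t+s \,|\, X>t) = \frac{e^{-\lambda(t+s)}}{e^{-\lambda t}} = e^{-\lambda s} = \prob(X>s). \]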

The gamma distribution

To define the gamma distribution, first we define the \textbf{Euler gamma function} (also called the \textbf{generalized factorial function}), an important special function of mathematical analysis, denoted \(\Gamma(t)\), by

\[ \Gamma(t) = \int_0^\infty e^{-x} x^{t-1}\,dx \qquad (t>0).\]

Properties of the gamma function:

  1. \(\Gamma(n)=(n-1)!\) for integer \(n\ge1\).
  2. \(\Gamma(t+1) = t\, \Gamma(t)\) for all \(t>0\).
  3. \(\Gamma(1/2)=\sqrt{\pi}\).
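
Property 2, for example, follows from an integration by parts, and property 1 then follows by induction since \(\Gamma(1)=\int_0^\infty e^{-x}\,dx = 1\):

\[ \Gamma(t+1) = \int_0^\infty e^{-x} x^{t}\,dx = \Big[ -e^{-x} x^t \Big]_0^\infty + t\int_0^\infty e^{-x} x^{t-1}\,dx = t\,\Gamma(t) \qquad (t>0). \]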

Next, we say that \(X\) has the gamma distribution with parameters \(\alpha, \lambda>0\), and denote \(X\sim \gammadist(\alpha,\lambda)\), if \(X\) has density function

\[ f_X(x) = \frac{\lambda^\alpha}{\Gamma(\alpha)} e^{-\lambda x} x^{\alpha-1} \qquad (x>0). \]

Properties of the gamma distribution:

  1. \(\expdist(\lambda) \eqdist \gammadist(1,\lambda)\).
  2. The parameter \(\lambda\) has the role of a scale parameter in the same sense as for the exponential distribution: for \(c>0\) we have \(c\, \gammadist(\alpha,\lambda) \eqdist \gammadist(\alpha, \lambda/c)\).
  3. \(\gammadist(\alpha,\lambda) \indplus \gammadist(\beta,\lambda) \eqdist \gammadist(\alpha+\beta,\lambda)\).
  4. For integer \(n\ge 1\), \(\gammadist(n,\lambda) \eqdist \indsum_{k=1}^n \expdist(\lambda)\).
  5. If \(X\sim \gammadist(\alpha,\lambda)\) then \(\expec X = \frac{\alpha}{\lambda}\), \(\var(X) = \frac{\alpha}{\lambda^2}\).

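
For instance, the formulas in property 5 follow directly from the definition of the gamma function: for \(k\ge 1\),

\[ \expec(X^k) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\int_0^\infty x^{\alpha+k-1} e^{-\lambda x}\,dx = \frac{\Gamma(\alpha+k)}{\Gamma(\alpha)\,\lambda^k}, \]

so in particular \(\expec X = \frac{\alpha}{\lambda}\) and \(\var(X) = \frac{\alpha(\alpha+1)}{\lambda^2} - \frac{\alpha^2}{\lambda^2} = \frac{\alpha}{\lambda^2}\).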

The beta distribution

Define the \textbf{Euler beta function} (which is closely related to the Euler gamma function) by
\[ B(a,b) = \int_0^1 u^{a-1}(1-u)^{b-1}\,du \qquad (a,b>0). \]

Properties of the beta function:

  1. \(B(a,b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)}\).
  2. For integer \(m,n\ge 1\), \(B(m,n) = \frac{(m-1)! (n-1)!}{(m+n-1)!}\).

We say that \(X\) has the beta distribution with parameters \(a,b>0\), and denote \(X\sim \betadist(a,b)\), if \(X\) has density function

\[ f_X(x) = \frac{1}{B(a,b)} x^{a-1} (1-x)^{b-1} \qquad (0<x<1). \]

Properties of the beta distribution:

  1. \(U[0,1] \eqdist \betadist(1,1)\).
  2. If \(X \sim \gammadist(\alpha,\lambda)\) and \(Y\sim \gammadist(\beta,\lambda)\) are independent, then \(U = \frac{X}{X+Y}\) has distribution \(\betadist(\alpha,\beta)\), and is independent of \(X+Y\).
  3. If \(X_1,X_2,\ldots\) is a sequence of i.i.d.\ r.v.s with distribution \(\expdist(\lambda)\), and \(S_m = \sum_{k=1}^m X_k\) are the cumulative sums of the sequence, then for all \(n> k\ge1\), \(S_k/S_n \sim \betadist(k,n-k)\), and \(S_k/S_n\) is independent of \(S_n\).
  4. If \(X\sim \betadist(a,b)\) then \(\expec X = \frac{a}{a+b}\), \(\var(X) = \frac{ab}{(a+b)^2 (a+b+1)}\).
  5. If \(X_1,\ldots,X_n\) are i.i.d.\ \(U[0,1]\) random variables, and \(X^{(1)} < X^{(2)} < \ldots < X^{(n)}\) are their order statistics, i.e., \(X^{(k)}\) is defined as the \(k\)th smallest among the numbers \(X_1,\ldots,X_n\), then \(X^{(k)} \sim \betadist(k,n+1-k)\).
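
For example, the mean in property 4 follows from the relation between the beta and gamma functions:

\[ \expec X = \frac{1}{B(a,b)}\int_0^1 x^{a}(1-x)^{b-1}\,dx = \frac{B(a+1,b)}{B(a,b)} = \frac{\Gamma(a+1)\,\Gamma(a+b)}{\Gamma(a)\,\Gamma(a+b+1)} = \frac{a}{a+b}, \]

and a similar computation with \(\expec(X^2) = \frac{B(a+2,b)}{B(a,b)}\) gives the variance formula.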

The Cauchy distribution

We say that \(X\) has the Cauchy distribution, and denote \(X \sim \cauchydist\), if \(X\) has the density

\[  f_X(x) = \frac{1}{\pi} \frac{1}{1+x^2}. \]

Properties:

  1. \(\expec |X|=\infty\); i.e., the Cauchy distribution has no expectation.
  2. If \(X,Y \sim \cauchydist\) are independent then their average \(\frac12(X+Y)\) is also distributed according to the Cauchy distribution.
  3. More generally, if \(X_1,\ldots,X_n \sim \cauchydist\) are independent and \(\alpha_1,\ldots,\alpha_n\ge 0\) are numbers such that \(\sum_j \alpha_j = 1\), then the weighted average \[ \sum_{j=1}^n \alpha_j X_j \sim \cauchydist. \]
  4. If \(\Theta \sim U[-\pi/2,\pi/2]\) then \(X=\tan \Theta \sim \cauchydist\).
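
For instance, property 4 follows by computing the c.d.f.\ of \(X=\tan\Theta\): for every \(x\in\R\),

\[ \prob(\tan\Theta \le x) = \prob(\Theta \le \arctan x) = \frac{\arctan x + \pi/2}{\pi}, \]

and differentiating with respect to \(x\) gives the density \(f_X(x) = \frac{1}{\pi}\,\frac{1}{1+x^2}\).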


Summary: Special distributions
| Name | Notation | Formula | \(\expec(X)\) | \(\var(X)\) | \(\expec(X^k)\) |
|---|---|---|---|---|---|
| Discrete uniform | \(X\sim U\{1,\ldots,n\}\) | \(\prob(X=k) = \frac{1}{n}\quad (1\le k\le n)\) | \(\frac{n+1}{2}\) | \(\frac{n^2-1}{12}\) | |
| Bernoulli | \(X\sim \textrm{Bernoulli}(p)\) | \(\prob(X=0)=1-p,\ \prob(X=1)=p\) | \(p\) | \(p(1-p)\) | \(p\) |
| Binomial | \(X\sim \textrm{Binomial}(n,p)\) | \(\prob(X=k)=\binom{n}{k}p^k (1-p)^{n-k}\quad (0\le k\le n)\) | \(np\) | \(np(1-p)\) | |
| Geometric (from 0) | \(X\sim \textrm{Geom}_0(p)\) | \(\prob(X=k)=p(1-p)^{k}\quad (k\ge 0)\) | \(\frac{1}{p}-1\) | \(\frac{1-p}{p^2}\) | |
| Geometric (from 1) | \(X\sim \textrm{Geom}(p)\) | \(\prob(X=k)=p(1-p)^{k-1}\quad (k\ge 1)\) | \(\frac{1}{p}\) | \(\frac{1-p}{p^2}\) | |
| Poisson | \(X\sim \textrm{Poisson}(\lambda)\) | \(\prob(X=k)=e^{-\lambda} \frac{\lambda^k}{k!}\quad (k\ge 0)\) | \(\lambda\) | \(\lambda\) | Bell numbers (for \(\lambda=1\)) |
| Negative binomial | \(X\sim \textrm{NB}(m,p)\) | \(\prob(X=k)=\binom{k+m-1}{m-1} p^m (1-p)^k\quad (k\ge 0)\) | \(\frac{m(1-p)}{p}\) | \(\frac{m(1-p)}{p^2}\) | |
| Uniform | \(X\sim U(a,b)\) | \(f_X(x) = \frac{1}{b-a}\quad (a<x<b)\) | \(\frac{a+b}{2}\) | \(\frac{(b-a)^2}{12}\) | \(\frac{b^{k+1}-a^{k+1}}{(k+1)(b-a)}\) |
| Exponential | \(X\sim \textrm{Exp}(\lambda)\) | \(f_X(x) = \lambda e^{-\lambda x}\quad (x>0)\) | \(\frac{1}{\lambda}\) | \(\frac{1}{\lambda^2}\) | \(\lambda^{-k}\, k!\) |
| Standard normal | \(X\sim N(0,1)\) | \(f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\quad (x\in\R)\) | \(0\) | \(1\) | \(\frac{k!}{(k/2)!\,2^{k/2}}\) for \(k\) even, \(0\) for \(k\) odd |
| Normal | \(X\sim N(\mu,\sigma^2)\) | \(f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2/2\sigma^2}\quad (x\in\R)\) | \(\mu\) | \(\sigma^2\) | |
| Gamma | \(X\sim \textrm{Gamma}(\alpha,\lambda)\) | \(f_X(x)=\frac{\lambda^\alpha}{\Gamma(\alpha)} e^{-\lambda x} x^{\alpha-1}\quad (x>0)\) | \(\frac{\alpha}{\lambda}\) | \(\frac{\alpha}{\lambda^2}\) | \(\lambda^{-k}\,\frac{\Gamma(\alpha+k)}{\Gamma(\alpha)}\) |
| Cauchy | \(X\sim \textrm{Cauchy}\) | \(f_X(x) = \frac{1}{\pi}\frac{1}{1+x^2}\quad (x\in\R)\) | N/A | N/A | N/A |
| Beta | \(X\sim \textrm{Beta}(a,b)\) | \(f_X(x) = \frac{1}{B(a,b)} x^{a-1}(1-x)^{b-1}\quad (0<x<1)\) | \(\frac{a}{a+b}\) | \(\frac{ab}{(a+b)^2(a+b+1)}\) | \(\frac{B(a+k,b)}{B(a,b)}\) |
| Chi-squared | \(X\sim \chi^2_{(n)}\) | \(f_X(x) = \frac{1}{2^{n/2}\Gamma(n/2)} e^{-x/2} x^{\frac{n}{2}-1}\quad (x>0)\) | \(n\) | \(2n\) | |

\textbf{Useful facts} (here "\(\indplus\)" denotes convolution, i.e., the sum of independent samples, and "\(\eqdist\)" denotes equality of distributions):

\[
\begin{array}{lll}
\textrm{Binomial}(n,p) \indplus \textrm{Binomial}(m,p) \eqdist \textrm{Binomial}(n+m,p)
& \ \ \ \ & \textrm{Gamma}(\alpha, \lambda) \indplus \textrm{Gamma}(\beta, \lambda) \eqdist \textrm{Gamma}(\alpha+\beta,\lambda) \\
\textrm{Poisson}(\lambda) \indplus \textrm{Poisson}(\mu) \eqdist \textrm{Poisson}(\lambda+\mu)
& & N(\mu_1,\sigma_1^2) \indplus N(\mu_2,\sigma_2^2) \eqdist N(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2) \\
\textrm{Geom}_0(p) \eqdist \textrm{NB}(1,p)
& & \textrm{Exp}(\lambda)\eqdist\textrm{Gamma}(1,\lambda) \\
\textrm{NB}(n,p)\indplus\textrm{NB}(m,p)\eqdist\textrm{NB}(n+m,p)
& & \big(\alpha\,\textrm{Cauchy}\big) \indplus \big((1-\alpha)\,\textrm{Cauchy}\big) \eqdist \textrm{Cauchy} \ \ \ \ (0\le\alpha\le1) \\
N(0,1)^2 \eqdist \textrm{Gamma}(1/2,1/2) \eqdist \chi^2_{(1)}
& & \chi^2_{(n)} \eqdist \textrm{Gamma}(n/2,1/2)
\end{array}
\]

