6.1: Discrete Random Variables


    Section 1: Introduction to Random Variables

    Definition: Random Variable

    A random variable represents an unknown quantity whose value depends on chance or the outcome of an experiment.

    Algebraic Variables

    • Let \(x\) be the number of siblings that Bobert has.
    • We do not know \(x\), but we can definitely find out by asking Bobert.
    • \(x\) is an example of an algebraic variable.

    Random Variables

    • Let \(X\) be the number of siblings of a randomly selected person in the class.
    • We do not know \(X\) until the person is randomly selected.
    • \(X\) is an example of a random variable.

    To distinguish random variables from algebraic variables, we will label random variables with capital letters. It is also important to distinguish random variables from events.

    Events

    • Typically denoted \(A\), \(B\), \(C\), \(D\), \(E\), \(F\), etc.
    • \(P(A)\) makes sense and means the probability of event \(A\).

    Random Variables

    • Typically denoted \(Q\), \(R\), \(S\), \(T\), \(U\), \(V\), \(W\), etc.
    • \(P(X)\) makes no sense by itself, but \(\{X=1\}\) is an event, and thus \(P(X=1)\) makes sense.

    Example \(\PageIndex{1.1}\)

    If \(X\) is the number of siblings of a randomly selected student, then \(\{X=2\}\) is the event that the number of siblings of the selected person is 2, and \(P(X=2)\) denotes the probability of that event.

    We will still perform the same experiments as before, but from now on we will keep an eye on the outcomes and events whose verbal descriptions can be quantified, as in the following examples:

    • The product of the dice when two fair dice are rolled.
    • The number of sixes when four fair dice are rolled.
    • The number of cars in a randomly selected family.
    • The household income of a randomly selected household.
    • The lifetime of a randomly selected bulb, etc.

    Note that we can use a random variable to denote the outcome of a roll of a die, since that outcome is already a number. The same cannot be done directly with a toss of a coin because its outcome is not a number; however, if we assign 0 to heads and 1 to tails (or any other pair of numbers), then we can introduce a random variable.
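    The encoding above can be simulated in a short Python sketch (the function name `toss_coin` is ours, introduced only for illustration):

    ```python
    import random

    # Encode a coin toss as a number: 0 for heads, 1 for tails,
    # so that the outcome can serve as a random variable.
    def toss_coin():
        """Simulate one toss of a fair coin, returned as a number."""
        return random.choice([0, 1])  # 0 = heads, 1 = tails

    # Ten simulated values of the random variable X
    sample = [toss_coin() for _ in range(10)]
    ```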


    Since we are dealing with quantities, random variables come in two types: discrete and continuous.

    Definition: Discrete Random Variable

    A discrete random variable is a random variable all of whose possible values can be listed.

    Definition: Continuous Random Variable

    A continuous random variable is a random variable whose possible values form an interval.

    Discrete Random Variable

    • All possible values are listable.

    Continuous Random Variable

    • All possible values form an interval.

    Now, we can classify the variables as discrete or continuous:

    • The number of credit cards of a randomly selected person is discrete, because the possible values can be listed: 0, 1, 2, 3, etc.
    • The lifetime of a randomly selected bulb is continuous, because it can be any real number greater than 0, such as 0.765, 4.5632, etc.
    • The size of a randomly selected household is discrete, because the possible values can be listed: 0, 1, 2, 3, etc.
    • The monthly income of a randomly selected household is continuous, because it can be any number greater than 0, such as 45,763.21 or 81,034.62.

    Discrete Random Variable

    Examples:

    • Number of credit cards
    • Household size

    Continuous Random Variable

    Examples:

    • Lifetime of a bulb
    • Monthly income

    Now that we have introduced the concept of a random variable, we will focus on the properties and applications of discrete random variables first, and then of continuous random variables.

    Section 2: Probability Distribution Table

    Let’s recall the definition of a discrete random variable as a random variable whose possible values can be listed. Consider tossing a coin with 0 assigned to heads and 1 to tails, and let \(X\) be the outcome of the experiment. By definition, we can list the possible values and their probabilities side-by-side in the form of a table.

    Definition: Probability distribution table

    The table that summarizes the possible values of a random variable \(X\) and the corresponding probabilities is called the probability distribution table for \(X\). Note that the probabilities must sum to 1.


    The Probability Distribution Table for \(X\)

    \(x_i\)     \(P(X=x_i)\)
    0           0.5
    1           0.5
    Total:      1

    Example \(\PageIndex{2.1}\)

    Consider rolling a die and let \(Y\) be the outcome of the experiment. We construct the probability distribution table for \(Y\) by listing all possible outcomes and their probabilities side-by-side in the form of a table.


    The Probability Distribution Table for \(Y\)

    \(y_i\)     \(P(Y=y_i)\)
    1           1/6
    2           1/6
    3           1/6
    4           1/6
    5           1/6
    6           1/6
    Total:      1

    Again, note that the probabilities must sum to 1.

    Sometimes it takes more effort to create the probability distribution table. Consider the following example.

    Example \(\PageIndex{2.2}\)

    Let \(U\) be the number of Heads out of two tosses of a fair coin. To construct the probability distribution table for \(U\) we list all possible outcomes along with their probabilities side-by-side in the form of a table. This time the probability of each outcome is not that easy to observe and has to be computed.

    • The probability that \(U=0\) is the same as the probability of 0 heads among 2 tosses, which we compute using the formula \(P(U=0)=\frac{C_0^2}{2^2}=0.25\)
    • Similarly, the probability that \(U=1\) is the same as the probability of 1 head among 2 tosses, which we again compute using the formula \(P(U=1)=\frac{C_1^2}{2^2}=0.50\)
    • Finally, the probability that \(U=2\) is the same as the probability of 2 heads among 2 tosses, which we again compute using the formula \(P(U=2)=\frac{C_2^2}{2^2}=0.25\)

    It is easy to check that the probabilities sum to 1.

    The Probability Distribution Table for \(U\)

    \(u_i\)     \(P(U=u_i)\)
    0           0.25
    1           0.50
    2           0.25
    Total:      1

    Let’s increase the number of tosses and consider the following example.

    Example \(\PageIndex{2.3}\)

    Let \(V\) be the number of Heads out of three tosses of a fair coin. To construct the probability distribution table for \(V\) we list all possible outcomes along with their probabilities side-by-side in the form of a table. Again, the probability of each outcome is not that easy to observe and has to be computed.

    • The probability that \(V=0\) is the same as the probability of 0 heads among 3 tosses which we compute using the formula \(P(V=0)=\frac{C_0^3}{2^3}=0.125\)
    • Similarly, the probability that \(V=1\) is the same as the probability of 1 head among 3 tosses which we again compute using the formula \(P(V=1)=\frac{C_1^3}{2^3}=0.375\)
    • Similarly, the probability that \(V=2\) is the same as the probability of 2 heads among 3 tosses which we again compute using the formula \(P(V=2)=\frac{C_2^3}{2^3}=0.375\)
    • Finally, the probability that \(V=3\) is the same as the probability of 3 heads among 3 tosses which we again compute using the formula \(P(V=3)=\frac{C_3^3}{2^3}=0.125\)

    It is easy to check that the probabilities sum to 1.

    The Probability Distribution Table for \(V\)

    \(v_i\)     \(P(V=v_i)\)
    0           0.125
    1           0.375
    2           0.375
    3           0.125
    Total:      1
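    The last two examples follow the same pattern: the probability of \(k\) heads in \(n\) tosses of a fair coin is \(\frac{C_k^n}{2^n}\). This can be checked with a short sketch (the function name `fair_coin_distribution` is ours, introduced only for illustration):

    ```python
    from math import comb

    def fair_coin_distribution(n):
        """P(k heads in n tosses of a fair coin) = C(n, k) / 2**n, for each k."""
        return {k: comb(n, k) / 2**n for k in range(n + 1)}

    table_U = fair_coin_distribution(2)  # {0: 0.25, 1: 0.5, 2: 0.25}
    table_V = fair_coin_distribution(3)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
    ```

    Either table's probabilities sum to 1, as required of a probability distribution table.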

    Consider tossing an unfair coin.

    Example \(\PageIndex{2.4}\)

    Let \(W\) be the number of Heads out of two tosses of an unfair coin for which tails is twice as likely as heads. To construct the probability distribution table for \(W\) we list all possible outcomes along with their probabilities side-by-side in the form of a table. As in the previous two examples, the probability of each outcome is not that easy to observe and has to be computed.

    • The probability that \(W=0\) is the same as the probability of 0 heads among 2 tosses, which we compute using the formula \(P(W=0)=C_0^2\left(\frac{1}{3}\right)^0\left(\frac{2}{3}\right)^2=\frac{4}{9}\approx 0.44\)
    • Similarly, the probability that \(W=1\) is the same as the probability of 1 head among 2 tosses, which we again compute using the formula \(P(W=1)=C_1^2\left(\frac{1}{3}\right)^1\left(\frac{2}{3}\right)^1=\frac{4}{9}\approx 0.44\)
    • Finally, the probability that \(W=2\) is the same as the probability of 2 heads among 2 tosses, which we again compute using the formula \(P(W=2)=C_2^2\left(\frac{1}{3}\right)^2\left(\frac{2}{3}\right)^0=\frac{1}{9}\approx 0.11\)

    It is easy to check that the probabilities sum to 1: \(\frac{4}{9}+\frac{4}{9}+\frac{1}{9}=1\). (The rounded decimals add to 0.99 only because of rounding.)

    The Probability Distribution Table for \(W\)

    \(w_i\)     \(P(W=w_i)\)
    0           0.44
    1           0.44
    2           0.11
    Total:      1
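    The computation for the unfair coin uses the general formula \(P(k \text{ heads})=C_k^n\,p^k(1-p)^{n-k}\), where \(p\) is the probability of heads on one toss. A sketch (the function name `binomial_pmf` is ours, introduced only for illustration):

    ```python
    from math import comb

    def binomial_pmf(n, k, p):
        """P(k heads in n tosses when heads has probability p on each toss)."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # The unfair coin above: tails twice as likely as heads, so P(heads) = 1/3.
    table_W = {k: binomial_pmf(2, k, 1/3) for k in range(3)}
    total = sum(table_W.values())  # exactly 1, up to floating-point rounding
    ```

    The exact values are 4/9, 4/9, and 1/9, which sum to 1 even though the rounded decimals in the table sum to 0.99.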

    Sometimes the table can be filled out using theoretical probabilities, and sometimes a frequency distribution table can be used to approximate the probabilities. In the next example, we discuss how to use data to define a random variable.

    Example \(\PageIndex{2.5}\)

    Assume that the following data was collected and organized into the frequency distribution table.

    Number of cars     Frequency     Relative Frequency
    0                  6             0.24
    1                  16            0.64
    2                  2             0.08
    3                  0             0.00
    4                  1             0.04
    Total:             25            1

    Although the data only represents a sample, assuming that the sample is representative, we can use it to approximate the distribution of the population. If we let \(Y\) be the number of cars owned by a randomly selected student, the relative frequency distribution table above can be easily converted into the probability distribution for \(Y\):

    The Probability Distribution Table for \(Y\)

    \(y_i\)     \(P(Y=y_i)\)
    0           0.24
    1           0.64
    2           0.08
    3           0.00
    4           0.04
    Total:      1
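    The conversion from frequencies to probabilities is just division by the total count. A sketch using the numbers from the example above:

    ```python
    # Frequencies from the car-ownership example above.
    freq = {0: 6, 1: 16, 2: 2, 3: 0, 4: 1}

    # Relative frequencies approximate the probability distribution of Y.
    total = sum(freq.values())  # 25
    prob = {y: f / total for y, f in freq.items()}
    ```

    As always, the resulting probabilities sum to 1, because the frequencies sum to the total.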

    Section 3: Probability Histogram

    Let \(X\) be the outcome of tossing a fair coin with 0 for heads and 1 for tails.


    As previously discussed, we can summarize \(X\) with its probability distribution table. Another way to visualize this information is to draw a horizontal axis for the values of \(X\) and a vertical axis for the probabilities. Above each value of \(X\) we draw a rectangle whose height equals the corresponding probability. Such a summary is called the probability histogram for \(X\).

    The Probability Distribution Table for \(X\)

    \(x_i\)     \(P(X=x_i)\)
    0           0.5
    1           0.5
    Total:      1

    The Probability Histogram for \(X\)

    Similarly, we can construct the probability histogram for variables \(U\), \(V\), \(W\), and \(Y\) from the previous examples.
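    A rough text version of a probability histogram can be sketched in Python (a toy rendering for intuition, not how the figures in this section were produced):

    ```python
    def text_histogram(dist, width=40):
        """Draw one bar per value; bar length is proportional to the probability."""
        lines = []
        for value, p in sorted(dist.items()):
            bar = "#" * round(p * width)
            lines.append(f"{value}: {bar} {p:.3f}")
        return "\n".join(lines)

    # The distribution of V, the number of heads in three tosses of a fair coin.
    print(text_histogram({0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}))
    ```

    Taller bars correspond to more likely values, just as in the histograms below.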

    Example \(\PageIndex{3.1}\)

    Let \(U\) be the number of Heads out of two tosses of a fair coin.

    The Probability Distribution Table for \(U\)

    \(u_i\)     \(P(U=u_i)\)
    0           0.25
    1           0.50
    2           0.25
    Total:      1

    The Probability Histogram for \(U\)

    Example \(\PageIndex{3.2}\)

    Let \(V\) be the number of Heads out of three tosses of a fair coin.

    The Probability Distribution Table for \(V\)

    \(v_i\)     \(P(V=v_i)\)
    0           0.125
    1           0.375
    2           0.375
    3           0.125
    Total:      1

    The Probability Histogram for \(V\)

    Example \(\PageIndex{3.3}\)

    Let \(W\) be the number of Heads out of two tosses of an unfair coin with \(P(\text{Heads})=\frac{1}{3}\) and \(P(\text{Tails})=\frac{2}{3}\).

    The Probability Distribution Table for \(W\)

    \(w_i\)     \(P(W=w_i)\)
    0           0.44
    1           0.44
    2           0.11
    Total:      1

    The Probability Histogram for \(W\)

    Example \(\PageIndex{3.4}\)

    Let \(Y\) be the number of cars owned by a randomly selected student.

    The Probability Distribution Table for \(Y\)

    \(y_i\)     \(P(Y=y_i)\)
    0           0.24
    1           0.64
    2           0.08
    3           0.00
    4           0.04
    Total:      1

    The Probability Histogram for \(Y\)

    Just as we can turn the probability distribution table into the probability histogram, we can reverse the process and construct the probability distribution table from the probability histogram by reading the heights of the bars.


    This means that the probability distribution table and the probability histogram contain exactly the same information about the random variable.

    Section 4: Probability Rules

    Since random variables represent quantities, many events can be expressed in the form of an inequality, and therefore we are interested in computing the probabilities of events such as the following:

    • \(P(X=c)\)
    • \(P(X<a)\) and \(P(X\leq a)\)
    • \(P(X>b)\) and \(P(X\geq b)\)
    • \(P(a<X<b)\), \(P(a\leq X<b)\), \(P(a<X\leq b)\), and \(P(a\leq X\leq b)\)

    Previously we learned the probability rules for working with events in general. Next, we will adapt these rules to the context of discrete random variables.

    For a discrete random variable \(X\):

    1. The total probability is 1.
    2. The inequality matters:

    \(P(X\leq a)=P(X<a)+P(X=a)\)

    3. The complementary rule:

    \(P(X\leq a)+P(X>a)=1\)

    \(P(X>a)=1-P(X\leq a)\)

    4. The subdivision rule:

    \(P(X\leq a)+P(a<X\leq b)=P(X\leq b)\)

    \(P(a<X\leq b)=P(X\leq b)-P(X\leq a)\)
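    These rules can be verified numerically for the fair die from Example 2.1 (the helper name `P` below is our own, introduced only for illustration):

    ```python
    # Distribution of Y, the outcome of rolling a fair die.
    dist = {y: 1/6 for y in range(1, 7)}

    def P(pred):
        """Probability that the value of the random variable satisfies pred."""
        return sum(p for y, p in dist.items() if pred(y))

    # Complementary rule: P(Y <= 2) + P(Y > 2) = 1
    complement_check = P(lambda y: y <= 2) + P(lambda y: y > 2)

    # Subdivision rule: P(2 < Y <= 5) = P(Y <= 5) - P(Y <= 2)
    subdivision_check = P(lambda y: 2 < y <= 5)
    ```

    Here \(P(2<Y\leq 5)=\frac{3}{6}=0.5\), which indeed equals \(P(Y\leq 5)-P(Y\leq 2)=\frac{5}{6}-\frac{2}{6}\).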


    6.1: Discrete Random Variables is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
