
7.10: The Binomial Distribution

    A baseball diamond is shown with a game in progress. The pitcher is pitching the ball to a batter at home plate.
    Figure 7.45: If one baseball team has a 65% chance of beating another in any single game, what’s the likelihood that they win a best-of-seven series? (credit: “baseball game” by Britt Reints/Flickr, CC BY 2.0)
    Learning Objectives

    1. Identify binomial experiments.
    2. Use the binomial distribution to analyze binomial experiments.

    It’s time for the World Series, which determines the champion for this season in Major League Baseball. The scrappy Los Angeles Angels are facing the powerhouse Cincinnati Reds. Computer models put the chances of the Reds winning any single game against the Angels at about 65%. The World Series, as its name implies, isn’t just one game, though: it’s what’s known as a “best-of-seven” contest: the teams play each other repeatedly until one team wins 4 games (which could take up to 7 games total, if each team wins three of the first 6 matchups). If the Reds truly have a 65% chance of winning a single game, then the probability that they win the series should be greater than 65%. Exactly how much bigger?

    If you have the patience for it, you could use a tree diagram like the one we used in Example 7.33 to trace out all of the possible outcomes, find all the related probabilities, and add up the ones that result in the Reds winning the series. Such a tree diagram would have \(2^7 = 128\) final nodes, though, so the calculations would be very tedious. Fortunately, we have tools at our disposal that allow us to find these probabilities very quickly. This section will introduce those tools and explain their use.

    Binomial Experiments

    The tools of this section apply to multistage experiments that satisfy some pretty specific criteria. Before we move on to the analysis, we need to introduce and explain those criteria so that we can recognize experiments that fall into this category. Experiments that satisfy each of these criteria are called binomial experiments. A binomial experiment is an experiment with a fixed number of repeated independent binomial trials, where each trial has the same probability of success.

    Repeated Binomial Trials

    The first criterion involves the structure of the stages. Each stage of the experiment should be a replication of every other stage; we call these replications trials. An example of this is flipping a coin 10 times; each of the ten flips is a trial, and they all occur under the same conditions as every other. Further, each trial must have only two possible outcomes. These two outcomes are typically labeled “success” and “failure,” even if there is not a positive or negative connotation associated with those outcomes. Experiments with more than two outcomes in their sample spaces are sometimes reconsidered in a way that forces just two outcomes; all we need to do is completely divide the sample space into two parts that we can label “success” and “failure.” For example, your grade on an exam might be recorded as A, B, C, D, or F, but we could instead think of the grades A, B, C, and D as “success” and a grade of F as “failure.” Trials with only two outcomes are called binomial trials (the word binomial derives from Latin and Greek roots that mean “two parts”).

    Independent Trials

    The next criterion that we’ll be looking for is independence of trials. Back in Tree Diagrams, Tables, and Outcomes, we said that two stages of an experiment are independent if the outcome of one stage doesn’t affect the other stage. Independence is necessary for the experiments we want to analyze in this section.

    Fixed Number of Trials

    Next, we require that the number of trials in the experiment be decided before the experiment begins. For example, we might say “flip a coin 10 times.” The number of trials there is fixed at 10. However, if we say “flip a coin until you get 5 heads,” then the number of trials could be as low as 5, but theoretically it could be 50 or 100 (or more)! We can’t apply the tools from this section in cases where the number of trials is indeterminate.

    Constant Probability

    The next criterion needed for binomial experiments is related to the independence of the trials. We must make sure that the probability of success in each trial is the same as the probability of success in every other trial.

    Example 7.34: Identifying Binomial Experiments

    Decide whether each of the following is a binomial experiment. For those that aren’t, identify which criterion or criteria are violated.

    1. You roll a standard 6-sided die 10 times and write down the number that appears each time.
    2. You roll a standard 6-sided die 10 times and write down whether the die shows a 6 or not.
    3. You roll a standard 6-sided die until you get a 6.
    4. You roll a standard 6-sided die 10 times. On the first roll, we define “success” as rolling a 4 or greater. After the first roll, we define “success” as rolling a number greater than the result of the previous roll.
    Answer
    1. Since we’re noting 1 of 6 possible outcomes, the trials are not binomial. So, this isn’t a binomial experiment.
    2. We have 2 possible outcomes (“6” and “not 6”), the trials are independent, the probability of success is the same every time, and the number of trials is fixed. This is a binomial experiment.
    3. Since the number of trials isn’t fixed (we don’t know if we’ll get our first 6 after 1 roll or 20 rolls or somewhere in between), this isn’t a binomial experiment.
    4. Here, the probability of success might change with every roll (on the first roll, that probability is \(\frac{1}{2}\); if the first roll is a 6, the probability of success on the next roll is zero). So, this is not a binomial experiment.
    Your Turn 7.34

    Decide whether the following experiments are binomial experiments:

    Draw a card from a well-shuffled deck, note its suit, and replace it. Repeat this process 5 times.

    Draw 5 cards from a well-shuffled deck and count the number of \(♣\).

    Draw a card from a well-shuffled deck, note whether it is a \(♣\) or not, and replace it. Repeat this process 5 times.

    Draw cards from a well-shuffled deck until you get a \(♣\).

    The Binomial Formula

    If we flip a coin 100 times, you might expect the number of heads to be around 50, but you probably wouldn’t be surprised if the actual number of heads was 47 or 52. What is the probability that the number of heads is exactly 50? Or falls between 45 and 55? It seems unlikely that we would get more than 70 heads. Exactly how unlikely is that?

    Each of these questions is a question about the number of successes in a binomial experiment (flip a coin 100 times, “success” is flipping heads). We could theoretically use the techniques we’ve seen in earlier sections to answer each of these, but the number of calculations we’d have to do is astronomical; just building the tree diagram that represents this situation is more than we could complete in a lifetime; it would have \(2^{100} \approx 1.3 \times 10^{30}\) final nodes! To put that number in perspective, if we could draw 1,000 dots every second, and we started at the moment of the Big Bang, we’d currently be about 0.00000003% of the way to drawing out those final nodes. Luckily, there’s a shortcut called the Binomial Formula that allows us to get around doing all those calculations!

    FORMULA

    Binomial Formula: Suppose we have a binomial experiment with \(n\) trials and the probability of success in each trial is \(p\). Then:

    \(P(\text{number of successes is } a) = {}_nC_a \times p^a \times (1-p)^{n-a}\).

    We can use this formula to answer one of our questions about 100 coin flips. What is the probability of flipping exactly 50 heads? In this case, \(n = 100\), \(p = \frac{1}{2}\), and \(a = 50\), so \(P(\text{flip 50 heads}) = {}_{100}C_{50} \times \left(\frac{1}{2}\right)^{50} \times \left(1 - \frac{1}{2}\right)^{100-50}\). Unfortunately, many calculators will balk at this calculation; that first factor \(({}_{100}C_{50})\) is an enormous number, and the other two factors are very close to zero. Even if your calculator can handle numbers that large or small, the arithmetic can create serious errors in rounding off.
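    If you work in Python rather than a spreadsheet, the overflow problem disappears: Python integers are arbitrary-precision, so \({}_{100}C_{50}\) can be computed exactly before multiplying by the small powers. Here is a minimal sketch (the function name `binom_pmf` is our own, not part of any library):

    ```python
    from math import comb

    # Probability of exactly 50 heads in 100 fair-coin flips.
    # comb(100, 50) is an exact (huge) integer, so no overflow occurs.
    n, a, p = 100, 50, 0.5
    prob = comb(n, a) * p**a * (1 - p)**(n - a)
    print(round(prob, 4))  # ≈ 0.0796, i.e., roughly 8%
    ```

    This agrees with the roughly 8% figure that BINOMDIST reports in the Tech Check below.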

    Tech Check

    Luckily, spreadsheet programs have alternate methods for doing this calculation. In Google Sheets, we can use the BINOMDIST function to do this calculation for us. Open up a new sheet, click in any empty cell, and type “=BINOMDIST(50,100,0.5,FALSE)” followed by the Enter key. The cell will display the probability we seek; it’s about 8%. Let’s break down the syntax of that function in Google Sheets: enter “=BINOMDIST(\(a\), \(n\), \(p\), FALSE)” to find the probability of \(a\) successes in \(n\) trials with probability of success \(p\).

    Example 7.35: Using the Binomial Formula

    1. Find the probability of rolling a standard 6-sided die 4 times and getting exactly one 6 without using technology.
    2. Find the probability of rolling a standard 6-sided die 60 times and getting exactly ten 6s using technology.
    3. Find the probability of rolling a standard 6-sided die 60 times and getting exactly eight 6s using technology.
    Answer
    1. We’ll apply the Binomial Formula, where \(n = 4\), \(a = 1\), and \(p = \frac{1}{6}\):

      \(P(\text{rolling one 6}) = {}_4C_1 \times \left(\frac{1}{6}\right)^1 \times \left(\frac{5}{6}\right)^{4-1} = \frac{4!}{1!(4-1)!} \times \frac{1}{6} \times \left(\frac{5}{6}\right)^3 = 4 \times \frac{1}{6} \times \frac{5^3}{6^3} = \frac{4 \times 5^3}{6^4} = \frac{500}{1296} \approx 0.386.\)

    2. Here, \(n = 60\), \(a = 10\), and \(p = \frac{1}{6}\). In Google Sheets, we’ll enter “=BINOMDIST(10, 60, 1/6, FALSE)” to get our result: 0.137.
    3. This experiment is the same as in Exercise 2 of this example; we’re simply changing the number of successes from 10 to 8. Making that change in the formula in Google Sheets, we get the probability 0.116.
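    The three parts of this example can also be checked in Python. The helper below is a direct transcription of the Binomial Formula (the name `binom_pmf` is our own, for illustration):

    ```python
    from math import comb

    def binom_pmf(a, n, p):
        """Binomial Formula: P(exactly a successes in n trials)."""
        return comb(n, a) * p**a * (1 - p)**(n - a)

    # Part 1: one 6 in 4 rolls -> 500/1296
    print(binom_pmf(1, 4, 1/6))               # ≈ 0.3858
    # Part 2: ten 6s in 60 rolls
    print(round(binom_pmf(10, 60, 1/6), 3))   # 0.137
    # Part 3: eight 6s in 60 rolls
    print(round(binom_pmf(8, 60, 1/6), 3))    # 0.116
    ```

    Note that changing only the number of successes (part 3) needs no new setup, just a different first argument.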
    Your Turn 7.35

    Compute the probabilities (rounded to 3 decimal places) of the following events related to rolling a standard 4-sided die (with faces labeled 1, 2, 3, and 4):

    You roll the die 10 times and get exactly four 2s.

    You roll the die 20 times and get exactly four 2s.

    You roll the die 30 times and get exactly four 2s.

    The Binomial Distribution

    If we are interested in the probability of more than just a single outcome in a binomial experiment, it’s helpful to think of the Binomial Formula as a function, whose input is the number of successes and whose output is the probability of observing that many successes. Generally, for a small number of trials, we’ll give that function in table form, with a complete list of the possible outcomes in one column and the probability in the other.

    For example, suppose Kristen is practicing her basketball free throws. Assume Kristen always makes 82% of those shots. If she attempts 5 free throws, then the Binomial Formula gives us these probabilities:

    Shots Made Probability
    0 0.000189
    1 0.004304
    2 0.0392144
    3 0.1786432
    4 0.4069096
    5 0.3707398

    A table that lists all possible outcomes of an experiment along with the probabilities of those outcomes is an example of a probability density function (PDF). A PDF may also be a formula that you can use to find the probability of any outcome of an experiment.

    Checkpoint

    Because they refer to the same thing, some sources will refer to the Binomial Formula as the Binomial PDF.

    If we want to know the probability of a range of outcomes, we could add up the corresponding probabilities. Going back to Kristen’s free throws, we can find the probability that she makes 3 or fewer of her 5 attempts by adding up the probabilities associated with the corresponding outcomes (in this case: 0, 1, 2, or 3):

    \(P(\text{makes 3 or fewer}) = P(a=0) + P(a=1) + P(a=2) + P(a=3) = 0.000189 + 0.004304 + 0.0392144 + 0.1786432 = 0.2223506\)

    The probability that the outcome of an experiment is less than or equal to a given number is called a cumulative probability. A table of the cumulative probabilities of all possible outcomes of an experiment is an example of a cumulative distribution function (CDF). A CDF may also be a formula that you can use to find those cumulative probabilities.

    Checkpoint

    Cumulative probabilities are always associated with events that are defined using \(\leq\) (“less than or equal to”). If other inequalities are used to define the event, we must restate the definition so that it uses the correct inequality.

    Here are the PDF and CDF for Kristen’s free throws:

    Shots Made Probability Cumulative
    0 0.000189 0.000189
    1 0.004304 0.004493
    2 0.0392144 0.0437073
    3 0.1786432 0.2223506
    4 0.4069096 0.6292602
    5 0.3707398 1
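    The PDF and CDF columns above can be generated in Python by computing each PDF value with the Binomial Formula and taking running sums for the CDF. This is a sketch, not a library routine:

    ```python
    from math import comb
    from itertools import accumulate

    n, p = 5, 0.82  # Kristen attempts 5 free throws, making 82%
    pdf = [comb(n, a) * p**a * (1 - p)**(n - a) for a in range(n + 1)]
    cdf = list(accumulate(pdf))  # running totals give cumulative probabilities

    for a in range(n + 1):
        print(a, round(pdf[a], 7), round(cdf[a], 7))
    # The final cumulative value is 1: some number of makes must occur.
    ```

    Reading off `cdf[3]` reproduces the 0.2223506 computed above for “makes 3 or fewer.”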
    Tech Check

    Google Sheets can also compute cumulative probabilities for us; all we need to do is change the “FALSE” in the formulas we used before to "TRUE."

    Example 7.36: Using the Binomial CDF

    Suppose we are about to flip a fair coin 50 times. Let \(H\) represent the number of heads that result from those flips. Use technology to find the following:

    1. \(P(H \leq 22)\)
    2. \(P(H < 26)\)
    3. \(P(H > 28)\)
    4. \(P(H \geq 20)\)
    5. \(P(20 < H < 25)\)
    Answer
    1. The event here is defined by \(H \leq 22\), which is the inequality we need to have if we want to use the Binomial CDF. In Google Sheets, we’ll enter “=BINOMDIST(22, 50, 0.5, TRUE)” to get our answer: 0.2399.
    2. This event uses the wrong inequality, so we need to do some preliminary work. If \(H < 26\), that means \(H \leq 25\) (because \(H\) has to be a whole number). So, we’ll enter “=BINOMDIST(25, 50, 0.5, TRUE)” to find \(P(H < 26) = P(H \leq 25) = 0.5561\).
    3. The inequality associated with this event is pointing in the wrong direction. If \(E\) is the event \(H > 28\), that means that \(E\) contains the outcomes {29, 30, 31, 32, 33, …}. Thus, the complement \(E'\) must contain the outcomes {…, 25, 26, 27, 28}. In other words, \(E'\) is defined by \(H \leq 28\). Since it uses \(\leq\), we can find \(P(E')\) using “=BINOMDIST(28, 50, 0.5, TRUE)”: 0.8389. So, using the formula for probabilities of complements, we have

      \(P(E) = 1 - P(E') = 1 - 0.8389 = 0.1611.\)

    4. As in part 3, this inequality is pointing in the wrong direction. If \(F\) is the event \(H \geq 20\), then \(F\) contains the outcomes {20, 21, 22, 23, …}. That means \(F'\) contains the outcomes {…, 16, 17, 18, 19}, and so \(F'\) is defined by \(H \leq 19\). So, we can find \(P(F')\) using “=BINOMDIST(19, 50, 0.5, TRUE)”: 0.0595. Finally, using the formula for probabilities of complements, we get:

      \(P(F) = 1 - P(F') = 1 - 0.0595 = 0.9405.\)

    5. If \(20 < H < 25\), that means we are interested in the outcomes {21, 22, 23, 24}. This doesn’t look like any of the previous situations, but there is a way to find this probability using the Binomial CDF. We need to put everything in terms of “less than or equal to,” so we’ll first note that all of our outcomes are less than or equal to 24. But we don’t want to include values that are less than or equal to 20. So, we have three events: let \(I\) be the event defined by \(20 < H < 25\) (note that we’re trying to find \(P(I)\)). Let \(J\) be defined by \(H \leq 24\), and let \(K\) be defined by \(H \leq 20\). Of these three events, \(J\) contains the most outcomes. If \(J\) occurs, then either \(K\) or \(I\) must have occurred. Moreover, \(K\) and \(I\) are mutually exclusive. Thus, \(P(J) = P(K) + P(I)\), by the Addition Rule. Solving for the probability that we want, we get

      \(P(I) = P(J) - P(K) = P(H \leq 24) - P(H \leq 20) = 0.44386 - 0.10132 = 0.34254.\)
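    All five parts follow one pattern: rewrite the event in terms of \(\leq\), then evaluate the CDF (and subtract from 1 or difference two CDF values as needed). A Python sketch of that pattern, with `binom_cdf` as our own stand-in for BINOMDIST(…, TRUE):

    ```python
    from math import comb

    def binom_cdf(a, n, p):
        """P(at most a successes) -- plays the role of BINOMDIST(a, n, p, TRUE)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a + 1))

    n, p = 50, 0.5
    print(round(binom_cdf(22, n, p), 4))       # P(H <= 22) ≈ 0.2399
    print(round(binom_cdf(25, n, p), 4))       # P(H < 26) = P(H <= 25) ≈ 0.5561
    print(round(1 - binom_cdf(28, n, p), 4))   # P(H > 28) ≈ 0.1611
    print(round(1 - binom_cdf(19, n, p), 4))   # P(H >= 20) ≈ 0.9405
    print(round(binom_cdf(24, n, p) - binom_cdf(20, n, p), 5))  # P(20 < H < 25)
    ```

    The last line implements part 5 directly: \(P(J) - P(K)\) as a difference of two cumulative probabilities.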

    Your Turn 7.36

    You are about to roll a standard 6-sided die 20 times. Let \(S\) denote a success, which will be rolling a number greater than 4. Find the probabilities of the following events, rounded to 4 decimal places:

    \(S \leq 5\)

    \(S < 8\)

    \(S>10\)

    \(S≥7\)

    \(5<S<8\)

    Finally, we can answer the question posed at the beginning of this section. Remember that the Reds are facing the Angels in the World Series, which is won by the team who is first to win 4 games. The Reds have a 65% chance to win any game against the Angels. So, what is the probability that the Reds win the World Series? At first glance, this is not a binomial experiment: The number of games played is not fixed, since the series ends as soon as one team wins 4 games. However, we can extend this situation to a binomial experiment: Let’s assume that 7 games are always played in the World Series, and the winner is the team who wins more games. In a way, this is what happens in reality; it’s as though the first team to lose 4 games (and thus cannot win more than the other team) forfeits the rest of their games. So, we can treat the actual World Series as a binomial experiment with seven trials. If \(W\) is the number of games won by the Reds, the probability that the Reds win the World Series is \(P(W \geq 4)\). Using the techniques from the last example, we get \(P(\text{Reds win the series}) = 0.8002\).
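    The series calculation above can be verified directly by summing the Binomial Formula over \(W = 4, 5, 6, 7\) (equivalently, one minus the CDF at 3). A minimal Python check:

    ```python
    from math import comb

    # Treat the World Series as 7 fixed games; Reds win any game with p = 0.65.
    # P(Reds win series) = P(W >= 4) = sum of P(W = k) for k = 4..7.
    p = 0.65
    prob = sum(comb(7, k) * p**k * (1 - p)**(7 - k) for k in range(4, 8))
    print(round(prob, 4))  # ≈ 0.8002
    ```

    As promised, playing a best-of-seven series boosts the stronger team's winning chances from 65% for a single game to about 80% for the series.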

    People in Mathematics: Abraham de Moivre

    Abraham de Moivre was born in 1667 in France to a Protestant family. Though he was educated in Catholic schools, he remained true to his faith; in 1687, he fled with his brother to London to escape persecution under the reign of King Louis XIV. Once he arrived in England, he supported himself as a freelance math tutor while he conducted his own research. Among his interests was probability; in 1711, he published the first edition of The Doctrine of Chances: A Method of Calculating the Probabilities of Events in Play. This book was the second textbook on probability (after Cardano’s Liber de ludo aleae). De Moivre discovered an important connection between the binomial distribution and the normal distribution (an important concept in statistics; we’ll explore that distribution and its connection to the binomial distribution in Chapter 8). De Moivre also discovered some properties of a new probability distribution that later became known as the Poisson distribution.

    Check Your Understanding

    You are rolling a 6-sided die with 3 orange faces, 2 green faces, and 1 blue face.

    If you roll the die 5 times and note the color showing on each roll, is this a binomial experiment?

    If you roll the die 5 times and count the number times you roll a green face, is this a binomial experiment?

    If you count how many times you roll the die until you get a blue face, is this a binomial experiment?

    Suppose you’re rolling the same colored 6-sided die 10 times. Let \(O\), \(G\), and \(B\) represent the number of times the die lands with an orange, green, and blue side up, respectively. Find these probabilities (round to 4 decimal places):

    \(P(O \leq 7)\)

    \(P(G = 5)\)

    \(P(1 \leq B \leq 4)\)


    This page titled 7.10: The Binomial Distribution is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform.
