
2.2: Monte Carlo Simulation

    • Franz S. Hover & Michael S. Triantafyllou
    • Massachusetts Institute of Technology via MIT OpenCourseWare

    Suppose that we make \(N\) simulations, each time drawing the needed random parameters \(x_i\) from a random number "black box" (for our purposes, that black box will be the random number generator in Python's random module). We define the high-level output of our system \(S\), that is, the result of our simulation, to be \(g(\vec{x})\). For simplicity, we take \(g(\vec{x})\) to be a scalar; it can be virtually any output of interest, for example the value of one state at a given time after an impulsive input, or the integral over time of the trajectory of one of the outputs under a given input. In what follows, we drop the vector notation on \(x\) for clarity.

    Let the estimator \(G\) of \(g(x)\) be defined as

    \[ G = \dfrac{1}{N} \sum_{j=1}^{N} g(x_j). \]

    You may recognize this as a straight average. Indeed, taking the expectation on both sides,

    \[ E(G) = \dfrac{1}{N} \sum_{j=1}^{N} E(g(x_j)), \]

    it is clear that \(E(G) = E(g)\). At the same time, however, we do not know \(E(g)\); we calculate \(G\) understanding that with a very large number of trials, \(G\) should approach \(E(g)\).
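
    To make the estimator concrete, here is a minimal sketch (not from the original text) that estimates \(E(g)\) for the illustrative choice \(g(x) = x^2\), with \(x\) drawn uniformly from \([0, 1)\); the exact answer in that case is \(1/3\):

    import random
    
    N = 10000                     # number of Monte Carlo trials
    total = 0.0
    for j in range(N):
        x = random.random()       # draw x uniformly from [0, 1)
        total += x**2             # accumulate g(x) = x^2
    G = total / N                 # estimator G = (1/N) * sum of g(x_j)
    print(G)                      # should be near E(g) = 1/3

    With more trials, \(G\) clusters more tightly around \(E(g)\); this is the behavior exploited in the examples below.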

    Example: Dice

    In the previous section, we used a Monte Carlo simulation to approximate the probability that a flipped coin lands on heads. We can do a similar simulation to approximate the probability that a die lands on a 3:

    import random
    
    tries = 10000                      # number of simulated rolls
    success = 0
    for a in range(1,tries+1):
      roll = random.randrange(1,7)     # random integer from 1 to 6, like rolling one die
      if roll == 3:
        success += 1
    print(success/tries)               # fraction of rolls that landed on 3
    

    output:     0.1604

    This is relatively close to our expected probability of 1/6. Suppose instead that we want to roll two dice and add them up. We wish to find the probability that the sum of the two dice is 5. In this case, we call for two random numbers from 1 to 6, add them up, and call it a success if the sum is 5.

    import random
    
    tries = 10000
    success = 0
    for a in range(1,tries+1):
      roll1 = random.randrange(1,7)    # first die
      roll2 = random.randrange(1,7)    # second die
      if roll1 + roll2 == 5:           # success if the two dice sum to 5
        success += 1
    print(success/tries)
    

    output:     0.1108

    Again, this is close to the expected probability of 4/36.
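
    Because this problem is so small, we can also enumerate all 36 equally likely outcomes and count the ones that sum to 5; the following quick check (not part of the original text) confirms the exact value:

    # Count, over all 36 equally likely outcomes, the pairs that sum to 5.
    count = 0
    for d1 in range(1, 7):
        for d2 in range(1, 7):
            if d1 + d2 == 5:
                count += 1
    print(count / 36)    # 4/36, about 0.111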

    Example: Cards

    Finding the probability of a certain hand of cards is a bit trickier, mainly because we have to find a way to generate the entire deck, and then draw cards from the deck. Using arrays, though, this task can be completed with relative ease. 

    import numpy as np
    
    card_values = np.array(['A',2,3,4,5,6,7,8,9,10,'J','Q','K'])
    deck = np.array([])
    
    # Append each value four times (once per suit) to build a 52-card deck.
    for i in card_values:
        for j in range(1,5):
            deck=np.append(deck,i)
    print(deck)
    

    output:     ['A', 'A', 'A', 'A', 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5,
                5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10,
                10, 10, 10, 'J', 'J', 'J', 'J', 'Q', 'Q', 'Q', 'Q', 'K', 'K',
                'K', 'K']

    The above does not keep track of suits; however, it will be good enough for our purposes. (Note that NumPy stores every entry of this mixed array as a string, which is why the comparisons below test against strings such as "A".) Suppose now that we wish to draw a card from the deck and determine the probability that the card is an Ace.

    import random
    import numpy as np
    
    card_values = np.array(['A',2,3,4,5,6,7,8,9,10,'J','Q','K'])
    
    deck = np.array([])
    # Build the 52-card deck: each value appears four times (once per suit).
    for i in card_values:
        for j in range(1,5):
            deck=np.append(deck,i)
    
    tries = 10000
    success = 0
    for a in range(1,tries+1):
      card1 = deck[random.randint(0,51)]   # pick one of the 52 cards at random
      if card1=="A":                       # success if the card drawn is an Ace
        success += 1
    print(success/tries)
    

    output:     0.0785

    This is close to the expected probability of 4/52.

    What is the probability that we draw two cards and both are face cards (J, Q, or K)? Since the second draw is affected by the first draw, we have to do a few things differently. First, we need to delete the first card from the deck before drawing the second (line 9 below). Next, we need to change the range of the second random index to account for the fact that only 51 cards remain (line 10 below). Finally, we need to start with a full deck each time (line 6 below). All of this is done here, where we omit the deck generation:

    #deck generation omitted: see previous example.
    
    tries = 10000
    success = 0
    for a in range(1,tries+1):
      testdeck = deck                      # start each trial from the full deck
      index1=random.randint(0,51)
      card1 = testdeck[index1]
      testdeck=np.delete(testdeck,index1)  # remove the first card drawn (np.delete returns a new array)
      index2=random.randint(0,50)          # only 51 cards remain
      card2 = testdeck[index2]
      if (card1=="J" or card1=="Q" or card1=="K") and \
          (card2=="J" or card2=="Q" or card2=="K"):
        success += 1
    print(success/tries)
    

    output:     0.049

    This is close to our expected value of (12/52)*(11/51). Note that in the code above, we use a backslash on line 12 to tell Python that the current statement continues on the next line.
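
    As a side note (this is not part of the original example), Python's random.sample function draws items from a sequence without replacement, so the same experiment can be written without deleting cards by hand. A sketch under that assumption, reusing the deck array built earlier:

    import random
    
    #deck generation omitted: see previous example.
    
    tries = 10000
    success = 0
    for a in range(1,tries+1):
      # random.sample picks two distinct positions, i.e., two different cards
      card1, card2 = random.sample(list(deck), 2)
      if card1 in ("J","Q","K") and card2 in ("J","Q","K"):
        success += 1
    print(success/tries)

    Both versions estimate the same probability; the manual-deletion version simply makes the "without replacement" bookkeeping explicit.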


    This page titled 2.2: Monte Carlo Simulation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Franz S. Hover & Michael S. Triantafyllou (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.