# 5.6: Numerical Integration


- How do we accurately evaluate a definite integral such as \(\int_0^1 e^{-x^2} \, dx\) when we cannot use the First Fundamental Theorem of Calculus because the integrand lacks an elementary algebraic antiderivative? Are there ways to generate accurate estimates without using extremely large values of \(n\) in Riemann sums?
- What is the Trapezoid Rule, and how is it related to left, right, and middle Riemann sums?
- How are the errors in the Trapezoid Rule and Midpoint Rule related, and how can they be used to develop an even more accurate rule?

When we first explored finding the net signed area bounded by a curve, we developed the concept of a Riemann sum as a helpful estimation tool and a key step in the definition of the definite integral. Recall that the left, right, and middle Riemann sums of a function \(f\) on an interval \([a,b]\) are given by

\begin{align} L_n = f(x_0) \Delta x + f(x_1) \Delta x + \cdots + f(x_{n-1}) \Delta x &= \sum_{i = 0}^{n-1} f(x_i) \Delta x,\label{E-Left}\\[4pt] R_n = f(x_1) \Delta x + f(x_2) \Delta x + \cdots + f(x_{n}) \Delta x &= \sum_{i = 1}^{n} f(x_i) \Delta x,\label{E-Right}\\[4pt] M_n = f(\overline{x}_1) \Delta x + f(\overline{x}_2) \Delta x + \cdots + f(\overline{x}_{n}) \Delta x &= \sum_{i = 1}^{n} f(\overline{x}_i) \Delta x\text{,}\label{E-Mid} \end{align}

where \(x_0 = a\text{,}\) \(x_i = a + i\Delta x\text{,}\) \(x_n = b\text{,}\) and \(\Delta x = \frac{b-a}{n}\text{.}\) For the middle sum, we defined \(\overline{x}_{i} = (x_{i-1} + x_i)/2\text{.}\)
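These three sums are straightforward to compute with technology. The following short Python sketch (our own illustration; the function name `riemann_sums` is not part of the text) evaluates \(L_n\text{,}\) \(R_n\text{,}\) and \(M_n\) directly from the formulas above, using \(f(x) = \frac{1}{20}(x-4)^3 + 7\) on \([1,8]\) with \(n = 5\text{:}\)

```python
def riemann_sums(f, a, b, n):
    """Return the left, right, and middle Riemann sums of f on [a, b]."""
    dx = (b - a) / n
    x = [a + i * dx for i in range(n + 1)]  # partition points x_0, ..., x_n
    left = sum(f(x[i]) * dx for i in range(n))                       # x_0 .. x_{n-1}
    right = sum(f(x[i]) * dx for i in range(1, n + 1))               # x_1 .. x_n
    middle = sum(f((x[i - 1] + x[i]) / 2) * dx for i in range(1, n + 1))  # midpoints
    return left, right, middle

# Five rectangles for f(x) = (1/20)(x-4)^3 + 7 on [1, 8]:
L5, R5, M5 = riemann_sums(lambda x: (x - 4)**3 / 20 + 7, 1, 8, 5)
```

Because this \(f\) is increasing on \([1,8]\text{,}\) we expect \(L_5\) to underestimate and \(R_5\) to overestimate the exact value \(\frac{819}{16} = 51.1875\text{,}\) with \(M_5\) much closer.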

A Riemann sum is a sum of (possibly signed) areas of rectangles. The value of \(n\) determines the number of rectangles, and our choice of left endpoints, right endpoints, or midpoints determines the heights of the rectangles. We can see the similarities and differences among these three options in Figure \(\PageIndex{1}\), where we consider the function \(f(x) = \frac{1}{20}(x-4)^3 + 7\) on the interval \([1,8]\text{,}\) and use 5 rectangles for each of the Riemann sums.

While it is a good exercise to compute a few Riemann sums by hand, just to ensure that we understand how they work and how varying the function, the number of subintervals, and the choice of endpoints or midpoints affects the result, using computing technology is the best way to determine \(L_n\text{,}\) \(R_n\text{,}\) and \(M_n\text{.}\) Any computer algebra system will offer this capability; as we saw in Preview Activity 4.3.1, a straightforward option that is freely available online is the applet at http://gvsu.edu/s/a9. Note that we can adjust the formula for \(f(x)\text{,}\) the window of \(x\)- and \(y\)-values of interest, the number of subintervals, and the method. (See Preview Activity 4.3.1 for any needed reminders on how the applet works.)

In this section we explore several different alternatives for estimating definite integrals. Our main goal is to develop formulas to estimate definite integrals accurately without using a large number of rectangles.

As we begin to investigate ways to approximate definite integrals, it will be insightful to compare results to integrals whose exact values we know. To that end, the following sequence of questions centers on \(\int_0^3 x^2 \, dx\text{.}\)

- Use the applet at http://gvsu.edu/s/a9 with the function \(f(x) = x^2\) on the window of \(x\) values from \(0\) to \(3\) to compute \(L_3\text{,}\) the left Riemann sum with three subintervals.
- Likewise, use the applet to compute \(R_3\) and \(M_3\text{,}\) the right and middle Riemann sums with three subintervals, respectively.
- Use the Fundamental Theorem of Calculus to compute the exact value of \(I = \int_0^3 x^2 \, dx\text{.}\)
- We define the *error* that results from an approximation of a definite integral to be the approximation's value minus the integral's exact value. What is the error that results from using \(L_3\text{?}\) From \(R_3\text{?}\) From \(M_3\text{?}\)
- In what follows in this section, we will learn a new approach to estimating the value of a definite integral known as the Trapezoid Rule. The basic idea is to use trapezoids, rather than rectangles, to estimate the area under a curve. What is the formula for the area of a trapezoid with bases of length \(b_1\) and \(b_2\) and height \(h\text{?}\)
- Working by hand, estimate the area under \(f(x) = x^2\) on \([0,3]\) using three subintervals and three corresponding trapezoids. What is the error in this approximation? How does it compare to the errors you calculated in (d)?
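If you would like to check your hand computations from this preview activity, a few lines of Python (a sketch of our own, not part of the text) reproduce all four estimates for \(f(x) = x^2\) on \([0,3]\text{:}\)

```python
def f(x):
    return x**2

a, b, n = 0, 3, 3
dx = (b - a) / n
x = [a + i * dx for i in range(n + 1)]  # subinterval endpoints 0, 1, 2, 3

L3 = sum(f(x[i]) * dx for i in range(n))                             # left sum
R3 = sum(f(x[i]) * dx for i in range(1, n + 1))                      # right sum
M3 = sum(f((x[i - 1] + x[i]) / 2) * dx for i in range(1, n + 1))     # middle sum
T3 = sum((f(x[i - 1]) + f(x[i])) / 2 * dx for i in range(1, n + 1))  # trapezoids
I = 9  # exact value by the First FTC: x^3/3 evaluated from 0 to 3
```

Comparing each estimate to \(I = 9\) gives the errors requested in the preview activity.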

## The Trapezoid Rule

So far, we have used the simplest possible quadrilaterals (that is, rectangles) to estimate areas. It is natural, however, to wonder if other familiar shapes might serve us even better.

An alternative to \(L_n\text{,}\) \(R_n\text{,}\) and \(M_n\) is called the *Trapezoid Rule*. Rather than using a rectangle to estimate the (signed) area bounded by \(y = f(x)\) on a small interval, we use a trapezoid. For example, in Figure \(\PageIndex{2}\), we estimate the area under the curve using three subintervals and the trapezoids that result from connecting the corresponding points on the curve with straight lines.

The biggest difference between the Trapezoid Rule and a Riemann sum is that on each subinterval, the Trapezoid Rule uses two function values, rather than one, to estimate the (signed) area bounded by the curve. For instance, to compute \(D_1\text{,}\) the area of the trapezoid on \([x_0, x_1]\text{,}\) we observe that the left base has length \(f(x_0)\text{,}\) while the right base has length \(f(x_1)\text{.}\) The height of the trapezoid is \(x_1 - x_0 = \Delta x = \frac{b-a}{3}\text{.}\) The area of a trapezoid is the average of the bases times the height, so we have

\[ D_1 = \frac{f(x_0) + f(x_1)}{2} \Delta x\text{.} \nonumber \]

Using similar computations for \(D_2\) and \(D_3\text{,}\) we find that \(T_3\text{,}\) the trapezoidal approximation to \(\int_a^b f(x) \, dx\text{,}\) is given by

\[ T_3 = \left[ \frac{f(x_0) + f(x_1)}{2} + \frac{f(x_1) + f(x_2)}{2} + \frac{f(x_2) + f(x_3)}{2} \right] \Delta x\text{.} \nonumber \]

Because both left and right endpoints are being used, we recognize within the trapezoidal approximation the use of both left and right Riemann sums. Rearranging the expression for \(T_3\) by factoring out \(\frac{1}{2}\) and grouping the left endpoint and right endpoint evaluations of \(f\text{,}\) we see that

\begin{align} T_3 = \frac{1}{2} \left[ \left( f(x_0) + f(x_1) + f(x_2) \right) \Delta x + \left( f(x_1) + f(x_2) + f(x_3) \right) \Delta x \right]\text{.}\label{xGO} \end{align}

We now observe that two familiar sums have arisen. The left Riemann sum \(L_3\) is \(L_3 = f(x_0) \Delta x + f(x_1) \Delta x + f(x_2) \Delta x\text{,}\) and the right Riemann sum is \(R_3 = f(x_1) \Delta x + f(x_2) \Delta x + f(x_3) \Delta x\text{.}\) Substituting \(L_3\) and \(R_3\) for the corresponding expressions in Equation \ref{xGO}, it follows that \(T_3 = \frac{1}{2} \left[ L_3 + R_3 \right]\text{.}\) We have thus seen a very important result: using trapezoids to estimate the (signed) area bounded by a curve is the same as averaging the estimates generated by using left and right endpoints.

The trapezoidal approximation, \(T_n\text{,}\) of the definite integral \(\int_a^b f(x) \, dx\) using \(n\) subintervals is given by the rule

\[ T_n = \left[ \frac{f(x_0) + f(x_1)}{2} + \frac{f(x_1) + f(x_2)}{2} + \cdots + \frac{f(x_{n-1}) + f(x_n)}{2} \right] \Delta x\text{.} \nonumber \]

Moreover, \(T_n = \frac{1}{2} \left[ L_n + R_n \right]\text{.}\)
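The identity \(T_n = \frac{1}{2}\left[ L_n + R_n \right]\) is easy to verify numerically. In the sketch below (our own illustration), we compute \(T_4\) for \(f(x) = \frac{1}{x^2}\) on \([1,2]\) both directly from the trapezoid formula and as the average of the left and right sums:

```python
def trapezoid(f, a, b, n):
    """Composite Trapezoid Rule with n subintervals."""
    dx = (b - a) / n
    x = [a + i * dx for i in range(n + 1)]
    return sum((f(x[i - 1]) + f(x[i])) / 2 * dx for i in range(1, n + 1))

def left_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

def right_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: 1 / x**2
T4 = trapezoid(f, 1, 2, 4)
avg = (left_sum(f, 1, 2, 4) + right_sum(f, 1, 2, 4)) / 2
# T4 and avg agree up to roundoff
```

The same value of \(T_4\) appears again in the error tables later in this section.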

In this activity (Activity \(\PageIndex{2}\)), we explore the relationships among the errors generated by left, right, midpoint, and trapezoid approximations to the definite integral \(\int_1^2 \frac{1}{x^2} \, dx\text{.}\)

- Use the First FTC to evaluate \(\int_1^2 \frac{1}{x^2} \, dx\) exactly.
- Use appropriate computing technology to compute the following approximations for \(\int_1^2 \frac{1}{x^2} \, dx\text{:}\) \(T_4\text{,}\) \(M_4\text{,}\) \(T_8\text{,}\) and \(M_8\text{.}\)
- Let the *error* that results from an approximation be the approximation's value minus the exact value of the definite integral. For instance, if we let \(E_{T,4}\) represent the error that results from using the trapezoid rule with 4 subintervals to estimate the integral, we have
\[ E_{T,4} = T_4 - \int_1^2 \frac{1}{x^2} \, dx \text{.} \nonumber \]
Similarly, we compute the error of the midpoint rule approximation with 8 subintervals by the formula
\[ E_{M,8} = M_8 - \int_1^2 \frac{1}{x^2} \, dx\text{.} \nonumber \]
Based on your work in (a) and (b) above, compute \(E_{T,4}\text{,}\) \(E_{T,8}\text{,}\) \(E_{M,4}\text{,}\) and \(E_{M,8}\text{.}\)

- Which rule consistently over-estimates the exact value of the definite integral? Which rule consistently under-estimates the definite integral?
- What behavior(s) of the function \(f(x) = \frac{1}{x^2}\) lead to your observations in (d)?

## Comparing the Midpoint and Trapezoid Rules

We know from the definition of the definite integral that if we let \(n\) be large enough, we can make any of the approximations \(L_n\text{,}\) \(R_n\text{,}\) and \(M_n\) as close as we'd like (in theory) to the exact value of \(\int_a^b f(x) \, dx\text{.}\) Thus, it may be natural to wonder why we ever use any rule other than \(L_n\) or \(R_n\) (with a sufficiently large \(n\) value) to estimate a definite integral. One of the primary reasons is that as \(n \to \infty\text{,}\) \(\Delta x = \frac{b-a}{n} \to 0\text{,}\) and thus in a Riemann sum calculation with a large \(n\) value, we end up multiplying by a number that is very close to zero. Doing so often generates roundoff error, because representing numbers close to zero accurately is a persistent challenge for computers.

Hence, we explore ways to estimate definite integrals to high levels of precision, but without using extremely large values of \(n\text{.}\) Paying close attention to patterns in errors, such as those observed in Activity \(\PageIndex{2}\), is one way to begin to see some alternate approaches.

To begin, we compare the errors in the Midpoint and Trapezoid rules. First, consider a function that is concave up on a given interval, and picture approximating the area bounded on that interval by both the Midpoint and Trapezoid rules using a single subinterval.

As seen in Figure \(\PageIndex{3}\), it is evident that whenever the function is concave up on an interval, the Trapezoid Rule with one subinterval, \(T_1\text{,}\) will overestimate the exact value of the definite integral on that interval. From a careful analysis of the line that bounds the top of the rectangle for the Midpoint Rule (shown in magenta), we see that if we rotate this line segment until it is tangent to the curve at the midpoint of the interval (as shown at right in Figure \(\PageIndex{3}\)), the resulting trapezoid has the same area as \(M_1\text{,}\) and this value is less than the exact value of the definite integral. Thus, when the function is concave up on the interval, \(M_1\) underestimates the integral's true value.

These observations extend easily to the situation where the function's concavity remains consistent but we use larger values of \(n\) in the Midpoint and Trapezoid Rules. Hence, whenever \(f\) is concave up on \([a,b]\text{,}\) \(T_n\) will overestimate the value of \(\int_a^b f(x) \, dx\text{,}\) while \(M_n\) will underestimate \(\int_a^b f(x) \, dx\text{.}\) The reverse observations are true in the situation where \(f\) is concave down.
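We can test these concavity observations numerically. The sketch below (our own illustration) uses \(f(x) = e^x\text{,}\) which is concave up on \([0,1]\text{,}\) so the Trapezoid Rule should overestimate and the Midpoint Rule should underestimate the exact value \(e - 1\text{:}\)

```python
import math

def midpoint(f, a, b, n):
    """Composite Midpoint Rule with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

def trapezoid(f, a, b, n):
    """Composite Trapezoid Rule with n subintervals."""
    dx = (b - a) / n
    return sum((f(a + i * dx) + f(a + (i + 1) * dx)) / 2 * dx for i in range(n))

exact = math.e - 1              # by the FTC, since e^x is its own antiderivative
Tn = trapezoid(math.exp, 0, 1, 10)
Mn = midpoint(math.exp, 0, 1, 10)
# concave up on [0, 1], so Mn < exact < Tn
```

Replacing `math.exp` with a concave-down function reverses the two inequalities.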

Next, we compare the size of the errors between \(M_n\) and \(T_n\text{.}\) Again, we focus on \(M_1\) and \(T_1\) on an interval where the concavity of \(f\) is consistent. In Figure \(\PageIndex{4}\), where the error of the Trapezoid Rule is shaded in red, while the error of the Midpoint Rule is shaded lighter red, it is visually apparent that the error in the Trapezoid Rule is more significant. To see how much more significant, let's consider two examples and some particular computations.

If we let \(f(x) = 1-x^2\) and consider \(\int_0^1 f(x) \,dx\text{,}\) we know by the First FTC that the exact value of the integral is

\[ \int_0^1 (1-x^2) \,dx = \left[ x - \frac{x^3}{3} \right]_0^1 = \frac{2}{3}\text{.} \nonumber \]

Using appropriate technology to compute \(M_4\text{,}\) \(M_8\text{,}\) \(T_4\text{,}\) and \(T_8\text{,}\) as well as the corresponding errors \(E_{M,4}\text{,}\) \(E_{M,8}\text{,}\) \(E_{T,4}\text{,}\) and \(E_{T,8}\text{,}\) as we did in Activity \(\PageIndex{2}\), we find the results summarized in Table \(\PageIndex{5}\). We also include the approximations and their errors for the example \(\int_1^2 \frac{1}{x^2} \, dx\) from Activity \(\PageIndex{2}\).

| Rule | \(\int_0^1 (1-x^2) \,dx = 0.\overline{6}\) | error | \(\int_1^2 \frac{1}{x^2} \, dx = 0.5\) | error |
| --- | --- | --- | --- | --- |
| \(T_4\) | \(0.65625\) | \(-0.0104166667\) | \(0.5089937642\) | \(0.0089937642\) |
| \(M_4\) | \(0.671875\) | \(0.0052083333\) | \(0.4955479365\) | \(-0.0044520635\) |
| \(T_8\) | \(0.6640625\) | \(-0.0026041667\) | \(0.5022708502\) | \(0.0022708502\) |
| \(M_8\) | \(0.66796875\) | \(0.0013020833\) | \(0.4988674899\) | \(-0.0011325101\) |

For a given function \(f\) and interval \([a,b]\text{,}\) \(E_{T,4} = T_4 - \int_a^b f(x) \,dx\) calculates the difference between the approximation generated by the Trapezoid Rule with \(n = 4\) and the exact value of the definite integral. If we look at not only \(E_{T,4}\text{,}\) but also the other errors generated by using \(T_n\) and \(M_n\) with \(n = 4\) and \(n = 8\) in the two examples noted in Table \(\PageIndex{5}\), we see an evident pattern. Not only is the sign of the error (which measures whether the rule generates an over- or under-estimate) tied to the rule used and the function's concavity, but the magnitude of the errors generated by \(T_n\) and \(M_n\) seems closely connected. In particular, the errors generated by the Midpoint Rule seem to be about half the size (in absolute value) of those generated by the Trapezoid Rule.

That is, we can observe in both examples that \(E_{M,4} \approx -\frac{1}{2} E_{T,4}\) and \(E_{M,8} \approx -\frac{1}{2}E_{T,8}\text{.}\) This property of the Midpoint and Trapezoid Rules turns out to hold in general: for a function of consistent concavity, the error in the Midpoint Rule has the opposite sign and approximately half the magnitude of the error of the Trapezoid Rule. Written symbolically,

\[ E_{M,n} \approx -\frac{1}{2} E_{T,n}\text{.} \nonumber \]

This important relationship suggests a way to combine the Midpoint and Trapezoid Rules to create an even more accurate approximation to a definite integral.
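A quick computation (our own sketch, not part of the text) confirms the halving relationship for \(\int_1^2 \frac{1}{x^2}\, dx\) with \(n = 8\text{:}\)

```python
def midpoint(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return sum((f(a + i * dx) + f(a + (i + 1) * dx)) / 2 * dx for i in range(n))

f = lambda x: 1 / x**2
E_T8 = trapezoid(f, 1, 2, 8) - 0.5   # exact value of the integral is 1/2
E_M8 = midpoint(f, 1, 2, 8) - 0.5
ratio = E_M8 / E_T8                  # opposite signs, magnitude ratio near 1/2
```

Since \(\frac{1}{x^2}\) is concave up on \([1,2]\text{,}\) the trapezoid error is positive, the midpoint error is negative, and their ratio is close to \(-\frac{1}{2}\text{.}\)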

## Simpson's Rule

When we first developed the Trapezoid Rule, we observed that it is an average of the Left and Right Riemann sums:

\[ T_n = \frac{1}{2} \left[ L_n + R_n \right]\text{.} \nonumber \]

If a function is always increasing or always decreasing on the interval \([a,b]\text{,}\) one of \(L_n\) and \(R_n\) will over-estimate the true value of \(\int_a^b f(x) \, dx\text{,}\) while the other will under-estimate the integral. Thus, the errors that result from \(L_n\) and \(R_n\) will have opposite signs; so averaging \(L_n\) and \(R_n\) eliminates a considerable amount of the error present in the respective approximations. In a similar way, it makes sense to think about averaging \(M_n\) and \(T_n\) in order to generate a still more accurate approximation.

We've just observed that \(M_n\) is typically about twice as accurate as \(T_n\text{,}\) and that the two errors have opposite signs. So we use the weighted average

\begin{align} S_{2n} = \frac{2 M_n + T_n}{3}\text{,}\label{CqH} \end{align}

which weights the more accurate \(M_n\) twice as heavily as \(T_n\) so that the opposite-signed errors approximately cancel.

The rule for \(S_{2n}\) given by Equation \ref{CqH} is usually known as *Simpson's Rule*. Note that we use “\(S_{2n}\)” rather than “\(S_n\)” since the \(n\) midpoints the Midpoint Rule uses are different from the \(n+1\) endpoints the Trapezoid Rule uses, and thus Simpson's Rule draws on function values at more than \(n\) points. We build upon the results in Table \(\PageIndex{5}\) to see the approximations generated by Simpson's Rule. In particular, in Table \(\PageIndex{6}\), we include all of the results in Table \(\PageIndex{5}\), but include additional results for \(S_8 = \frac{2M_4 + T_4}{3}\) and \(S_{16} = \frac{2M_8 + T_8}{3}\text{.}\)

| Rule | \(\int_0^1 (1-x^2) \,dx = 0.\overline{6}\) | error | \(\int_1^2 \frac{1}{x^2} \, dx = 0.5\) | error |
| --- | --- | --- | --- | --- |
| \(T_4\) | \(0.65625\) | \(-0.0104166667\) | \(0.5089937642\) | \(0.0089937642\) |
| \(M_4\) | \(0.671875\) | \(0.0052083333\) | \(0.4955479365\) | \(-0.0044520635\) |
| \(S_8\) | \(0.6666666667\) | \(0\) | \(0.5000298792\) | \(0.0000298792\) |
| \(T_8\) | \(0.6640625\) | \(-0.0026041667\) | \(0.5022708502\) | \(0.0022708502\) |
| \(M_8\) | \(0.66796875\) | \(0.0013020833\) | \(0.4988674899\) | \(-0.0011325101\) |
| \(S_{16}\) | \(0.6666666667\) | \(0\) | \(0.5000019434\) | \(0.0000019434\) |

The results seen in Table \(\PageIndex{6}\) are striking. If we consider the \(S_{16}\) approximation of \(\int_1^2 \frac{1}{x^2} \, dx\text{,}\) the error is only \(E_{S,16} = 0.0000019434\text{.}\) By contrast, \(L_8 = 0.5491458502\text{,}\) so the error of that estimate is \(E_{L,8} = 0.0491458502\text{.}\) Moreover, we observe that generating the approximations for Simpson's Rule is almost no additional work: once we have \(L_n\text{,}\) \(R_n\text{,}\) and \(M_n\) for a given value of \(n\text{,}\) it is a simple exercise to generate \(T_n\text{,}\) and from there to calculate \(S_{2n}\text{.}\) Finally, note that the error in the Simpson's Rule approximations of \(\int_0^1 (1-x^2) \, dx\) is zero! (As we will see shortly, this is because Simpson's Rule is exact for quadratic polynomials, and the integrand here is quadratic.)
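The Simpson's Rule approximations in Table \(\PageIndex{6}\) can be reproduced with a few lines of code. The sketch below (our own illustration) computes \(S_{16}\) for \(\int_1^2 \frac{1}{x^2} \, dx\) as the weighted average of \(M_8\) and \(T_8\text{:}\)

```python
def midpoint(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return sum((f(a + i * dx) + f(a + (i + 1) * dx)) / 2 * dx for i in range(n))

def simpson(f, a, b, n):
    """Simpson's Rule as the weighted average S_{2n} = (2*M_n + T_n) / 3."""
    return (2 * midpoint(f, a, b, n) + trapezoid(f, a, b, n)) / 3

f = lambda x: 1 / x**2
S16 = simpson(f, 1, 2, 8)
E_S16 = S16 - 0.5   # the error reported in Table 6
```

Note that `simpson` here uses only values of \(M_n\) and \(T_n\) that a Riemann-sum computation would already have produced.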

These rules are not only useful for approximating definite integrals such as \(\int_0^1 e^{-x^2} \, dx\text{,}\) for which we cannot find an elementary antiderivative of \(e^{-x^2}\text{,}\) but also for approximating definite integrals when we are given a function through a table of data.
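For tabulated data, the same rules apply with \(\Delta x\) read directly from the table. The sketch below uses made-up evenly spaced measurements (purely illustrative; not the braking-car data in the activity that follows):

```python
# Hypothetical evenly spaced measurements of some quantity f(t)
t = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [4.0, 3.6, 3.0, 2.1, 1.0]

# Trapezoid rule directly on the table, with Delta t read from the data:
dt = t[1] - t[0]
T4 = sum((y[i] + y[i + 1]) / 2 * dt for i in range(len(y) - 1))

# Alternatively, view t = 0, 1, 2 as endpoints of n = 2 subintervals; then
# t = 0.5 and t = 1.5 are midpoints, so the same table also yields M_2, T_2,
# and the Simpson estimate S_4:
M2 = (y[1] + y[3]) * 1.0
T2 = ((y[0] + y[2]) / 2 + (y[2] + y[4]) / 2) * 1.0
S4 = (2 * M2 + T2) / 3
```

The second viewpoint, which reads alternating table entries as endpoints and midpoints, is exactly the strategy the next activity calls for.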

A car traveling along a straight road is braking and its velocity is measured at several different points in time, as given in the following table. Assume that \(v\) is continuous, always decreasing, and always decreasing at a decreasing rate, as is suggested by the data.

| Seconds, \(t\) | Velocity in ft/sec, \(v(t)\) |
| --- | --- |
| \(0\) | \(100\) |
| \(0.3\) | \(99\) |
| \(0.6\) | \(96\) |
| \(0.9\) | \(90\) |
| \(1.2\) | \(80\) |
| \(1.5\) | \(50\) |
| \(1.8\) | \(0\) |

- Plot the given data on the set of axes provided in Figure \(\PageIndex{8}\) with time on the horizontal axis and the velocity on the vertical axis.
- What definite integral will give you the exact distance the car traveled on \([0,1.8]\text{?}\)
- Estimate the total distance traveled on \([0,1.8]\) by computing \(L_3\text{,}\) \(R_3\text{,}\) and \(T_3\text{.}\) Which of these under-estimates the true distance traveled?
- Estimate the total distance traveled on \([0,1.8]\) by computing \(M_3\text{.}\) Is this an over- or under-estimate? Why?
- Using your results from (c) and (d), improve your estimate further by using Simpson's Rule.
- What is your best estimate of the average velocity of the car on \([0,1.8]\text{?}\) Why? What are the units on this quantity?

## Overall observations regarding \(L_n\text{,}\) \(R_n\text{,}\) \(T_n\text{,}\) \(M_n\text{,}\) and \(S_{2n}\)

As we conclude our discussion of numerical approximation of definite integrals, it is important to summarize general trends in how the various rules over- or under-estimate the true value of a definite integral, and by how much. To revisit some past observations and see some new ones, we consider the following activity.

Consider the functions \(f(x) = 2-x^2\text{,}\) \(g(x) = 2-x^3\text{,}\) and \(h(x) = 2-x^4\text{,}\) all on the interval \([0,1]\text{.}\) For each of the questions that require a numerical answer in what follows, write your answer exactly in fraction form.

- On the three sets of axes provided in Figure \(\PageIndex{9}\), sketch a graph of each function on the interval \([0,1]\text{,}\) and compute \(L_1\) and \(R_1\) for each. What do you observe?
- Compute \(M_1\) for each function to approximate \(\int_0^1 f(x) \,dx\text{,}\) \(\int_0^1 g(x) \,dx\text{,}\) and \(\int_0^1 h(x) \,dx\text{,}\) respectively.
- Compute \(T_1\) for each of the three functions, and hence compute \(S_2\) for each of the three functions.
- Evaluate each of the integrals \(\int_0^1 f(x) \,dx\text{,}\) \(\int_0^1 g(x) \,dx\text{,}\) and \(\int_0^1 h(x) \,dx\) exactly using the First FTC.
- For each of the three functions \(f\text{,}\) \(g\text{,}\) and \(h\text{,}\) compare the results of \(L_1\text{,}\) \(R_1\text{,}\) \(M_1\text{,}\) \(T_1\text{,}\) and \(S_2\) to the true value of the corresponding definite integral. What patterns do you observe?

The results seen in Activity \(\PageIndex{4}\) generalize nicely. For instance, if \(f\) is decreasing on \([a,b]\text{,}\) \(L_n\) will over-estimate the exact value of \(\int_a^b f(x) \,dx\text{,}\) and if \(f\) is concave down on \([a,b]\text{,}\) \(M_n\) will over-estimate the exact value of the integral. An excellent exercise is to write a collection of scenarios of possible function behavior, and then categorize whether each of \(L_n\text{,}\) \(R_n\text{,}\) \(T_n\text{,}\) and \(M_n\) is an over- or under-estimate.

Finally, we make two important notes about Simpson's Rule. When T. Simpson first developed this rule, his idea was to replace the function \(f\) on a given interval with a quadratic function that shared three values with the function \(f\text{.}\) In so doing, he guaranteed that this new approximation rule would be exact for the definite integral of any quadratic polynomial. In one of the pleasant surprises of numerical analysis, it turns out that even though it was designed to be exact for quadratic polynomials, Simpson's Rule is exact for any cubic polynomial: that is, if we are interested in an integral such as \(\int_2^5 (5x^3 - 2x^2 + 7x - 4)\, dx\text{,}\) \(S_{2n}\) will always be exact, regardless of the value of \(n\text{.}\) This is just one more piece of evidence that shows how effective Simpson's Rule is as an approximation tool for estimating definite integrals.
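We can verify this surprising exactness for the specific cubic mentioned above. In the sketch below (our own illustration), even \(S_2\text{,}\) which uses only one midpoint and two endpoints, reproduces \(\int_2^5 (5x^3 - 2x^2 + 7x - 4)\,dx\) exactly (up to roundoff):

```python
def f(x):
    return 5 * x**3 - 2 * x**2 + 7 * x - 4

def F(x):
    # an antiderivative of f, used to get the exact value via the FTC
    return 5 * x**4 / 4 - 2 * x**3 / 3 + 7 * x**2 / 2 - 4 * x

a, b = 2, 5
exact = F(b) - F(a)              # 744.75

M1 = f((a + b) / 2) * (b - a)    # Midpoint Rule with one subinterval
T1 = (f(a) + f(b)) / 2 * (b - a) # Trapezoid Rule with one subinterval
S2 = (2 * M1 + T1) / 3           # Simpson's Rule: exact even for this cubic
```

Raising the degree of \(f\) to four breaks the exactness, which is consistent with Simpson's Rule being exact precisely through cubic polynomials.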

One reason that Simpson's Rule is so effective is that \(S_{2n}\) benefits from using \(2n+1\) points of data. Because it combines \(M_n\text{,}\) which uses \(n\) midpoints, and \(T_n\text{,}\) which uses the \(n+1\) endpoints of the chosen subintervals, \(S_{2n}\) takes advantage of the maximum amount of information we have when we know function values at the endpoints and midpoints of \(n\) subintervals.

## Summary

- For a definite integral such as \(\int_0^1 e^{-x^2} \, dx\) when we cannot use the First Fundamental Theorem of Calculus because the integrand lacks an elementary algebraic antiderivative, we can estimate the integral's value by using a sequence of Riemann sum approximations. Typically, we start by computing \(L_n\text{,}\) \(R_n\text{,}\) and \(M_n\) for one or more chosen values of \(n\text{.}\)
- The Trapezoid Rule, which estimates \(\int_a^b f(x) \, dx\) by using trapezoids, rather than rectangles, can also be viewed as the average of Left and Right Riemann sums. That is, \(T_n = \frac{1}{2}(L_n + R_n)\text{.}\)
- The Midpoint Rule is typically twice as accurate as the Trapezoid Rule, and the signs of the respective errors of these rules are opposites. Hence, by taking the weighted average \(S_{2n} = \frac{2M_n + T_n}{3}\text{,}\) we can build a much more accurate approximation to \(\int_a^b f(x) \, dx\) by using approximations we have already computed. The rule for \(S_{2n}\) is known as Simpson's Rule, which can also be developed by approximating a given continuous function with pieces of quadratic polynomials.