# 1.11: Numerical Integration

By now the reader will have come to appreciate that integration is generally quite a bit more difficult than differentiation. There are a great many simple-looking integrals, such as \(\int e^{-x^2}\, d{x}\text{,}\) that are either very difficult or even impossible to express in terms of standard functions. Such integrals are not merely mathematical curiosities, but arise very naturally in many contexts. For example, the error function

\begin{gather*} \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\, d{t} \end{gather*}

is extremely important in many areas of mathematics, and also in many practical applications of statistics.

In such applications we need to be able to evaluate this integral (and many others) at a given numerical value of \(x\text{.}\) In this section we turn to the problem of how to find (approximate) numerical values for integrals, without having to evaluate them algebraically. To develop these methods we return to Riemann sums and our geometric interpretation of the definite integral as the signed area.

We start by describing (and applying) three simple algorithms for generating, numerically, approximate values for the definite integral \(\int_a^b f(x)\,\, d{x}\text{.}\) In each algorithm, we begin in much the same way as we approached Riemann sums.

- We first select an integer \(n \gt 0\text{,}\) called the “number of steps”.
- We then divide the interval of integration, \(a\le x\le b\text{,}\) into \(n\) equal subintervals, each of length \(\Delta x=\frac{b-a}{n}\text{.}\) The first subinterval runs from \(x_0=a\) to \(x_1=a+\Delta x\text{.}\) The second runs from \(x_1\) to \(x_2=a+2\Delta x\text{,}\) and so on. The last runs from \(x_{n-1}=b-\Delta x\) to \(x_n=b\text{.}\)

This splits the original integral into \(n\) pieces:

\[ \int_a^b f(x)\,\, d{x} =\int_{x_0}^{x_1} f(x)\,\, d{x} +\int_{x_1}^{x_2} f(x)\,\, d{x} +\cdots +\int_{x_{n-1}}^{x_n} f(x)\,\, d{x} \nonumber \]

Each subintegral \(\int_{x_{j-1}}^{x_j} f(x)\,\, d{x}\) is approximated by the area of a simple geometric figure. The three algorithms we consider approximate the area by rectangles, trapezoids and parabolas (respectively).

We will explain these rules in detail below, but we give a brief overview here:

- The midpoint rule approximates each subintegral by the area of a rectangle whose height is the value of the function at the midpoint of the subinterval:
\begin{align*} \int_{x_{j-1}}^{x_{j}} f(x) \, d{x} & \approx f\left( \frac{x_{j-1}+x_{j}}{2} \right) \Delta x \end{align*}
This is illustrated in the leftmost figure above.
- The trapezoidal rule approximates each subintegral by the area of a trapezoid with vertices at \((x_{j-1},0), (x_{j-1},f(x_{j-1})), (x_{j},f(x_{j})), (x_{j},0)\text{:}\)
\begin{align*} \int_{x_{j-1}}^{x_{j}} f(x) \, d{x} & \approx \frac{1}{2} \left[f(x_{j-1})+f(x_j) \right] \Delta x \end{align*}
The trapezoid is illustrated in the middle figure above. We shall derive the formula for the area shortly.
- Simpson's rule approximates two adjacent subintegrals by the area under a parabola that passes through the points \((x_{j-1},f(x_{j-1}))\text{,}\) \((x_{j},f(x_{j}))\) and \((x_{j+1},f(x_{j+1}))\text{:}\)
\begin{align*} \int_{x_{j-1}}^{x_{j+1}} f(x) \, d{x} & \approx \frac{1}{3} \left[f(x_{j-1})+4f(x_j)+f(x_{j+1}) \right] \Delta x \end{align*}
The parabola is illustrated in the right hand figure above. We shall derive the formula for the area shortly.

In what follows we need to refer to the midpoint between \(x_{j-1}\) and \(x_j\) very frequently. To save on writing (and typing) we introduce the notation

\[\begin{align*} \bar x_j &= \frac{1}{2} \left(x_{j-1}+x_j \right). \end{align*}\]

## The midpoint rule

The integral \(\int_{x_{j-1}}^{x_j} f(x)\,\, d{x}\) represents the area between the curve \(y=f(x)\) and the \(x\)-axis with \(x\) running from \(x_{j-1}\) to \(x_j\text{.}\) The width of this region is \(x_j-x_{j-1}=\Delta x\text{.}\) The height varies over the different values that \(f(x)\) takes as \(x\) runs from \(x_{j-1}\) to \(x_j\text{.}\)

The midpoint rule approximates this area by the area of a rectangle of width \(x_j-x_{j-1}=\Delta x\) and height \(f(\bar x_j)\) which is the exact height at the midpoint of the range covered by \(x\text{.}\)

The area of the approximating rectangle is \(f(\bar x_j)\Delta x\text{,}\) and the midpoint rule approximates each subintegral by

\[ \int_{x_{j-1}}^{x_j} f(x)\,\, d{x}\approx f(\bar x_j)\Delta x\text{.} \nonumber \]

Applying this approximation to each subinterval and summing gives us the following approximation of the full integral:

\begin{align*} \int_a^b f(x)\,\, d{x}&= \int_{x_0}^{x_1}\!\! f(x)\,\, d{x} + \int_{x_1}^{x_2}\!\! f(x)\,\, d{x} +\cdots + \int_{x_{n-1}}^{x_n} f(x)\,\, d{x}\\ &\approx f(\bar x_1)\Delta x + f(\bar x_2)\Delta x + \cdots + f(\bar x_n)\Delta x \end{align*}

So notice that the approximation is the sum of the function evaluated at the midpoint of each interval and then multiplied by \(\Delta x\text{.}\) Our other approximations will have similar forms.

In summary:

The midpoint rule approximation is

\begin{gather*} \int_a^b f(x)\,\, d{x}\approx\Big[f(\bar x_1)+f(\bar x_2)+\cdots +f(\bar x_n)\Big]\Delta x \end{gather*}

where \(\Delta x = \tfrac{b-a}{n}\) and

\begin{align*} x_0&=a& x_1&=a+\Delta x& x_2&=a+2\Delta x& &\cdots& x_{n-1}&=b-\Delta x& x_n&=b\\ & & \bar x_1&=\tfrac{x_0+x_1}{2}& \bar x_2&=\tfrac{x_1+x_2}{2}& &\cdots& \bar x_{n-1}&=\tfrac{x_{n-2}+x_{n-1}}{2}& \bar x_n&=\tfrac{x_{n-1}+x_n}{2} \end{align*}
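The midpoint rule is simple to implement on a computer. Below is a minimal Python sketch (the function name `midpoint_rule` is our own choice, not from the text); applied with \(n=8\) to \(\int_0^1 \frac{4}{1+x^2}\,\, d{x}\) it reproduces the value \(3.1429\) obtained by hand in the example below.

```python
import math

def midpoint_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] with n midpoint steps."""
    dx = (b - a) / n
    # The midpoint of the j-th subinterval is a + (j - 1/2)*dx.
    return dx * sum(f(a + (j - 0.5) * dx) for j in range(1, n + 1))

# Example: f(x) = 4/(1+x^2) on [0, 1]; the exact value of the integral is pi.
approx = midpoint_rule(lambda x: 4 / (1 + x**2), 0, 1, 8)
print(approx)  # about 3.1429
```

Each step costs one evaluation of \(f\text{,}\) so the work grows linearly with \(n\text{.}\)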

We approximate the integral \(\int_0^1 \frac{4}{1+x^2}\,\, d{x}\) using the midpoint rule with \(n=8\) steps.

**Solution**

- First we set up all the \(x\)-values that we will need. Note that \(a=0\text{,}\) \(b=1\text{,}\) \(\Delta x=\tfrac{1}{8}\) and
\begin{align*} x_0&=0 & x_1&=\tfrac{1}{8} & x_2&=\tfrac{2}{8} && \cdots & x_7&=\tfrac{7}{8}& x_8&=\tfrac{8}{8}=1 \end{align*}

Consequently\begin{align*} \bar x_1&= \tfrac{1}{16} & \bar x_2&= \tfrac{3}{16} & \bar x_3&= \tfrac{5}{16} & \cdots&& \bar x_8 &= \tfrac{15}{16} \end{align*}

- We now apply Equation 1.11.2 to the integrand \(f(x)=\frac{4}{1+x^2}\text{:}\)
\begin{align*} &\int_0^1 \frac{4}{1+x^2}\,\, d{x} \approx \bigg[\overbrace{\frac{4}{1+\bar x_1^2}}^{f(\bar x_1)} +\overbrace{\frac{4}{1+\bar x_2^2}}^{f(\bar x_2)} +\!\cdots\! +\overbrace{\frac{4}{1+\bar x_7^2}}^{f(\bar x_{n-1})} +\overbrace{\frac{4}{1+\bar x_8^2}}^{f(\bar x_n)} \bigg]\Delta x\\ &=\bigg[\frac{4}{1+\tfrac{1}{16^2}}+ \frac{4}{1+\tfrac{3^2}{16^2}}+ \frac{4}{1+\tfrac{5^2}{16^2}}+ \frac{4}{1+\tfrac{7^2}{16^2}} +\frac{4}{1+\tfrac{9^2}{16^2}}\\ &\hskip2in +\frac{4}{1+\tfrac{11^2}{16^2}}+ \frac{4}{1+\tfrac{13^2}{16^2}}+ \frac{4}{1+\tfrac{15^2}{16^2}}\bigg]\frac{1}{8}\\ &=\big[ 3.98444 + 3.86415 + 3.64413 + 3.35738 + 3.03858 +\\ &\hskip2in 2.71618 + 2.40941 + 2.12890 \big]\frac{1}{8}\\ &= 3.1429 \end{align*}

where we have rounded to four decimal places.
- In this case we can compute the integral exactly (which is one of the reasons it was chosen as a first example):
\begin{gather*} \int_0^1\frac{4}{1+x^2}\, d{x} =4\arctan x\Big|_0^1 = \pi \end{gather*}

- So the error in the approximation generated by eight steps of the midpoint rule is
\begin{align*} |3.1429-\pi| &=0.0013 \end{align*}

- The relative error is then
\begin{align*} \frac{|\text{approximate}-\text{exact}|}{\text{exact}} &= \frac{|3.1429-\pi|}{\pi}=0.0004 \end{align*}

That is, the error is \(0.0004\) times the actual value of the integral.
- We can write this as a percentage error by multiplying it by 100:
\begin{align*} \text{percentage error} &= 100 \times \frac{|\text{approximate}-\text{exact}|}{\text{exact}} = 0.04 \% \end{align*}

That is, the error is about \(0.04\%\) of the exact value.

The midpoint rule gives us quite good estimates of the integral without too much work — though it is perhaps a little tedious to do by hand. Of course, it would be very helpful to quantify what we mean by “good” in this context and that requires us to discuss errors.

Suppose that \(\alpha\) is an approximation to \(A\text{.}\) This approximation has

- absolute error \(|A-\alpha|\) and
- relative error \(\frac{|A-\alpha|}{A}\) and
- percentage error \(100\frac{|A-\alpha|}{A}\)
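These three error measures are easy to compute. A quick Python sketch (the function name `errors` is ours), applied to the eight-step midpoint estimate above:

```python
import math

def errors(approx, exact):
    """Return the (absolute, relative, percentage) errors of approx against exact."""
    absolute = abs(exact - approx)
    relative = absolute / exact
    return absolute, relative, 100 * relative

# The eight-step midpoint estimate was 3.1429; the exact value is pi.
abs_err, rel_err, pct_err = errors(3.1429, math.pi)
print(abs_err, rel_err, pct_err)  # about 0.0013, 0.0004, 0.04%
```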

We will discuss errors further in Section 1.11.4 below.

As a second example, we apply the midpoint rule with \(n=8\) steps to the integral \(\int_0^\pi \sin x\,\, d{x}\text{.}\)

- We again start by setting up all the \(x\)-values that we will need. So \(a=0\text{,}\) \(b=\pi\text{,}\) \(\Delta x=\tfrac{\pi}{8}\) and
\begin{align*} x_0&=0& x_1&=\tfrac{\pi}{8}& x_2&=\tfrac{2\pi}{8}& \cdots&& x_7&=\tfrac{7\pi}{8}& x_8&=\tfrac{8\pi}{8}=\pi \end{align*}

Consequently,\begin{align*} \bar x_1&=\tfrac{\pi}{16}& \bar x_2&=\tfrac{3\pi}{16} & \cdots&& \bar x_7&=\tfrac{13\pi}{16} & \bar x_8&=\tfrac{15\pi}{16} \end{align*}

- Now apply Equation 1.11.2 to the integrand \(f(x)=\sin x\text{:}\)
\begin{align*} &\int_0^\pi\sin x\,\, d{x} \approx\Big[\sin(\bar x_1)+\sin(\bar x_2)+\cdots+\sin(\bar x_8)\Big]\Delta x\\ &=\Big[\sin(\tfrac{\pi}{16})+ \sin(\tfrac{3\pi}{16})+ \sin(\tfrac{5\pi}{16})+ \sin(\tfrac{7\pi}{16})+ \sin(\tfrac{9\pi}{16})+\\ &\hskip2in \sin(\tfrac{11\pi}{16})+ \sin(\tfrac{13\pi}{16})+ \sin(\tfrac{15\pi}{16})\Big]\tfrac{\pi}{8}\\ &=\Big[0.1951+ 0.5556+ 0.8315+ 0.9808+ 0.9808+\\ &\hskip2in 0.8315+ 0.5556+ 0.1951\Big]\times 0.3927\\ &=5.1260\times 0.3927 =2.013 \end{align*}

- Again, we have chosen this example so that we can compare it against the exact value:
\begin{align*} \int_0^\pi \sin x \, d{x} &= \big[ -\cos x \big]_0^\pi = -\cos\pi + \cos 0 = 2. \end{align*}

- So with eight steps of the midpoint rule we achieved
\begin{align*} \text{absolute error} &= |2.013-2|=0.013\\ \text{relative error} &= \frac{|2.013-2|}{2} = 0.0065\\ \text{percentage error} &= 100 \times \frac{|2.013-2|}{2} = 0.65 \% \end{align*}

With little work we have managed to estimate the integral to within \(1\%\) of its true value.

## The trapezoidal rule

Consider again the area represented by the integral \(\int_{x_{j-1}}^{x_j} f(x)\,\, d{x}\text{.}\) The trapezoidal rule (unsurprisingly) approximates this area by a trapezoid whose vertices lie at

\begin{gather*} (x_{j-1},0), (x_{j-1},f(x_{j-1})), (x_{j},f(x_{j})) \text{ and } (x_{j},0). \end{gather*}

The trapezoidal approximation of the integral \(\int_{x_{j-1}}^{x_j} f(x)\,\, d{x}\) is the shaded region in the figure on the right above. It has width \(x_j-x_{j-1}=\Delta x\text{.}\) Its left hand side has height \(f(x_{j-1})\) and its right hand side has height \(f(x_j)\text{.}\)

As the figure below shows, the area of a trapezoid is its width times its average height.

So the trapezoidal rule approximates each subintegral by

\[ \int_{x_{j-1}}^{x_j} f(x)\,\, d{x}\approx \tfrac{f(x_{j-1})+f(x_j)}{2}\Delta x \nonumber \]

Applying this approximation to each subinterval and then summing the result gives us the following approximation of the full integral

\begin{align*} \int_a^b f(x)\,\, d{x}&= \int_{x_0}^{x_1} f(x)\,\, d{x} + \int_{x_1}^{x_2} f(x)\,\, d{x} +\cdots+\int_{x_{n-1}}^{x_n} f(x)\,\, d{x}\\ & \approx \tfrac{f(x_0)+f(x_1)}{2}\Delta x + \tfrac{f(x_1)+f(x_2)}{2}\Delta x + \cdots + \tfrac{f(x_{n-1})+f(x_n)}{2}\Delta x\\ &= \Big[\frac{1}{2} f(x_0)+f(x_1)+f(x_2)+\cdots+ f(x_{n-1})+\frac{1}{2} f(x_n)\Big]\Delta x \end{align*}

So notice that the approximation has a very similar form to the midpoint rule, excepting that

- we evaluate the function at the \(x_j\)'s rather than at the midpoints, and
- we multiply the value of the function at the endpoints \(x_0,x_n\) by \(\frac12\text{.}\)

In summary:

The trapezoidal rule approximation is

\begin{align*} \int_a^b f(x)\,\, d{x} &\approx\Big[\frac{1}{2} f(x_0)+f(x_1)+f(x_2)+\cdots+ f(x_{n-1})+\frac{1}{2} f(x_n)\Big]\Delta x \end{align*}

where

\begin{gather*} \Delta x = \tfrac{b-a}{n},\quad x_0=a,\quad x_1=a+\Delta x,\quad x_2=a+2\Delta x,\quad \cdots,\quad x_{n-1}=b-\Delta x,\quad x_n=b \end{gather*}
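The weighting of the endpoints by \(\frac{1}{2}\) is the only way this differs from a plain Riemann sum, so the rule is again a few lines of code. A minimal Python sketch (the name `trapezoidal_rule` is ours):

```python
def trapezoidal_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoidal steps."""
    dx = (b - a) / n
    # Endpoints get weight 1/2, interior grid points get weight 1.
    total = 0.5 * f(a) + 0.5 * f(b)
    total += sum(f(a + j * dx) for j in range(1, n))
    return dx * total

# Example: f(x) = 4/(1+x^2) on [0, 1]; the exact value of the integral is pi.
approx = trapezoidal_rule(lambda x: 4 / (1 + x**2), 0, 1, 8)
print(approx)  # about 3.139
```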

To compare and contrast we apply the trapezoidal rule to the examples we did above with the midpoint rule.

**Solution**

We proceed very similarly to Example 1.11.3 and again use \(n=8\) steps.

- We again have \(f(x)=\frac{4}{1+x^2}\text{,}\) \(a=0\text{,}\) \(b=1\text{,}\) \(\Delta x=\tfrac{1}{8}\) and
\begin{align*} x_0&=0 & x_1&=\tfrac{1}{8} & x_2&=\tfrac{2}{8} && \cdots & x_7&=\tfrac{7}{8}& x_8&=\tfrac{8}{8}=1 \end{align*}

- Applying the trapezoidal rule, Equation 1.11.6, gives
\begin{align*} &\int_0^1 \frac{4}{1+x^2}\,\, d{x} \approx \bigg[\frac{1}{2}\overbrace{\frac{4}{1\!+\!x_0^2}}^{f(x_0)} +\overbrace{\frac{4}{1\!+\!x_1^2}}^{f(x_1)} +\!\cdots\! +\overbrace{\frac{4}{1\!+\!x_7^2}}^{f(x_{n-1})} +\frac{1}{2}\overbrace{\frac{4}{1\!+\!x_8^2}}^{f(x_n)} \bigg]\Delta x\\ &\hskip0.25in=\bigg[\frac{1}{2}\frac{4}{1+0^2}+ \frac{4}{1+\tfrac{1}{8^2}}+ \frac{4}{1+\tfrac{2^2}{8^2}}+ \frac{4}{1+\tfrac{3^2}{8^2}}\\ &\hskip0.5in +\frac{4}{1+\tfrac{4^2}{8^2}}+ \frac{4}{1+\tfrac{5^2}{8^2}}+ \frac{4}{1+\tfrac{6^2}{8^2}}+ \frac{4}{1+\tfrac{7^2}{8^2}}+ \frac{1}{2}\frac{4}{1+\tfrac{8^2}{8^2}}\bigg]\frac{1}{8}\\ &\hskip0.25in=\Big[\frac{1}{2}\times 4+ 3.939+ 3.765+ 3.507\\ &\hskip0.5in +3.2+ 2.876+ 2.56+ 2.266+ \frac{1}{2}\times 2\Big]\frac{1}{8}\\ &\hskip0.25in =3.139 \end{align*}

to three decimal places.
- The exact value of the integral is still \(\pi\text{.}\) So the error in the approximation generated by eight steps of the trapezoidal rule is \(|3.139-\pi|=0.0026\text{,}\) which is \(100\tfrac{|3.139-\pi|}{\pi}\% =0.08\%\) of the exact answer. Notice that this is roughly twice the error that we achieved using the midpoint rule in Example 1.11.3.

Let us also redo Example 1.11.5 using the trapezoidal rule.

**Solution**

We proceed very similarly to Example 1.11.5 and again use \(n=8\) steps.

- We again have \(a=0\text{,}\) \(b=\pi\text{,}\) \(\Delta x=\tfrac{\pi}{8}\) and
\begin{align*} x_0&=0& x_1&=\tfrac{\pi}{8}& x_2&=\tfrac{2\pi}{8}& \cdots&& x_7&=\tfrac{7\pi}{8}& x_8&=\tfrac{8\pi}{8}=\pi \end{align*}

- Applying the trapezoidal rule, Equation 1.11.6, gives
\begin{align*} &\int_0^\pi\sin x\,\, d{x} \approx \Big[\frac{1}{2}\sin(x_0)+\sin(x_1)+\cdots+\sin(x_7)+\frac{1}{2}\sin(x_8)\Big]\Delta x\\ &=\Big[\frac{1}{2}\sin0 + \sin\tfrac{\pi}{8}+ \sin\tfrac{2\pi}{8}+ \sin\tfrac{3\pi}{8}+ \sin\tfrac{4\pi}{8}+ \sin\tfrac{5\pi}{8}\\ &\hskip0.5in+\sin\tfrac{6\pi}{8}+ \sin\tfrac{7\pi}{8}+ \frac{1}{2}\sin\tfrac{8\pi}{8}\Big]\tfrac{\pi}{8}\\ &=\Big[\frac{1}{2}\!\times\! 0+ 0.3827+ 0.7071+ 0.9239+ 1.0000+ 0.9239+\\ &\hskip0.5in 0.7071+ 0.3827+ \frac{1}{2}\!\times\! 0\Big]\times 0.3927\\ &=5.0274\times 0.3927 =1.974 \end{align*}

- The exact answer is \(\int_0^\pi\sin x\,\, d{x}=-\cos x\Big|_0^\pi=2\text{.}\) So with eight steps of the trapezoidal rule we achieved \(100\tfrac{|1.974-2|}{2}=1.3\%\) accuracy. Again this is approximately twice the error we achieved in Example 1.11.5 using the midpoint rule.

These two examples suggest that the midpoint rule is more accurate than the trapezoidal rule. Indeed, this observation is borne out by a rigorous analysis of the error — see Section 1.11.4.

## Simpson's Rule

When we use the trapezoidal rule we approximate the area \(\int_{x_{j-1}}^{x_j}f(x)\, d{x}\) by the area between the \(x\)-axis and a straight line that runs from \((x_{j-1},f(x_{j-1}))\) to \((x_j, f(x_j))\) — that is, we approximate the function \(f(x)\) on this interval by a linear function that agrees with the function at each endpoint. An obvious way to extend this — just as we did when extending linear approximations to quadratic approximations in our differential calculus course — is to approximate the function with a quadratic. This is precisely what Simpson's rule does.

Simpson's rule approximates the integral over two neighbouring subintervals by the area between a parabola and the \(x\)-axis. In order to describe this parabola we need 3 distinct points (which is why we approximate two subintegrals at a time). That is, we approximate

\begin{align*} \int_{x_0}^{x_1} f(x)\,\, d{x} + \int_{x_1}^{x_2} f(x)\,\, d{x} &= \int_{x_0}^{x_2} f(x)\,\, d{x} \end{align*}

by the area bounded by the parabola that passes through the three points \(\big(x_0,f(x_0)\big)\text{,}\) \(\big(x_1,f(x_1)\big)\) and \(\big(x_2,f(x_2)\big)\text{,}\) the \(x\)-axis and the vertical lines \(x=x_0\) and \(x=x_2\text{.}\)

We repeat this on the next pair of subintervals and approximate \(\int_{x_2}^{x_4} f(x)\,\, d{x}\) by the area between the \(x\)-axis and the part of a parabola with \(x_2\le x\le x_4\text{.}\) This parabola passes through the three points \(\big(x_2,f(x_2)\big)\text{,}\) \(\big(x_3,f(x_3)\big)\) and \(\big(x_4,f(x_4)\big)\text{.}\) And so on. Because Simpson's rule does the approximation two slices at a time, \(n\) *must be even*.

To derive Simpson's rule formula, we first find the equation of the parabola that passes through the three points \(\big(x_0,f(x_0)\big)\text{,}\) \(\big(x_1,f(x_1)\big)\) and \(\big(x_2,f(x_2)\big)\text{.}\) Then we find the area between the \(x\)-axis and the part of that parabola with \(x_0\le x\le x_2\text{.}\) To simplify this computation consider a parabola passing through the points \((-h,y_{-1}), (0,y_0)\) and \((h,y_1)\text{.}\)

Write the equation of the parabola as

\begin{align*} y &= Ax^2 + Bx +C \end{align*}

Then the area between it and the \(x\)-axis with \(x\) running from \(-h\) to \(h\) is

\begin{align*} \int_{-h}^h \big[ Ax^2 + Bx +C \big] \, d{x} &= \left[ \frac{A}{3}x^3 + \frac{B}{2}x^2 + Cx \right]_{-h}^h\\ &= \frac{2A}{3}h^3 + 2Ch & \text{it is helpful to write it as}\\ &= \frac{h}{3}\left( 2Ah^2 + 6C \right) \end{align*}

Now, the three points \((-h,y_{-1}), (0,y_0)\) and \((h,y_1)\) lie on this parabola if and only if

\begin{align*} A h^2 - Bh + C &= y_{-1} & \text{at $(-h,y_{-1})$}\\ C &= y_{0} & \text{at $(0,y_{0})$}\\ A h^2 + Bh + C &= y_{1} & \text{at $(h,y_{1})$} \end{align*}

Adding the first and third equations together gives us

\begin{align*} 2Ah^2 + (B-B)h + 2C &= y_{-1}+y_{1} \end{align*}

To this we add four times the middle equation

\begin{align*} 2Ah^2 + 6C &= y_{-1}+4y_0+y_1. \end{align*}

This means that

\begin{align*} \text{area} &= \int_{-h}^h \big[ Ax^2 + Bx +C \big] \, d{x} = \frac{h}{3}\left( 2Ah^2 + 6C \right)\\ &= \frac{h}{3}\left(y_{-1}+4y_0+y_1\right) \end{align*}

Note that here

- \(h\) is one half of the length of the \(x\)-interval under consideration
- \(y_{-1}\) is the height of the parabola at the left hand end of the interval under consideration
- \(y_0\) is the height of the parabola at the middle point of the interval under consideration
- \(y_{1}\) is the height of the parabola at the right hand end of the interval under consideration
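The algebra above is easy to check numerically. The sketch below (plain Python; all names are ours) picks an arbitrary parabola \(y=Ax^2+Bx+C\text{,}\) evaluates the exact integral over \([-h,h]\) from the antiderivative, and confirms that it agrees with \(\frac{h}{3}(y_{-1}+4y_0+y_1)\text{:}\)

```python
def parabola_area_check(A, B, C, h):
    """Compare the exact integral of Ax^2+Bx+C over [-h, h] with h/3*(y_-1 + 4*y_0 + y_1)."""
    # Exact: the antiderivative (A/3)x^3 + (B/2)x^2 + Cx, evaluated from -h to h.
    exact = (2 * A / 3) * h**3 + 2 * C * h
    # Heights of the parabola at x = -h, 0, h.
    y_m1 = A * h**2 - B * h + C
    y_0 = C
    y_p1 = A * h**2 + B * h + C
    formula = (h / 3) * (y_m1 + 4 * y_0 + y_p1)
    return exact, formula

# Any choice of A, B, C, h works; these values are arbitrary.
exact, formula = parabola_area_check(A=1.7, B=-2.3, C=0.9, h=0.25)
print(exact, formula)  # the two values agree
```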

So Simpson's rule approximates

\[ \int_{x_0}^{x_2} f(x)\,\, d{x} \approx \tfrac{1}{3}\Delta x\big[f(x_0)+4f(x_1)+f(x_2)\big] \nonumber \]

and

\[ \int_{x_2}^{x_4} f(x)\,\, d{x} \approx \tfrac{1}{3}\Delta x\big[f(x_2)+4f(x_3)+f(x_4)\big] \nonumber \]

and so on. Summing these all together gives:

\begin{align*} \int_a^b& f(x)\,\, d{x} =\int_{x_0}^{x_2} f(x)\,\, d{x} +\int_{x_2}^{x_4} f(x)\,\, d{x} +\int_{x_4}^{x_6} f(x)\,\, d{x} +\cdots +\int_{x_{n-2}}^{x_n} f(x)\,\, d{x}\\ &\approx\,\tfrac{\Delta x}{3}\big[f(x_0)+4f(x_1)+f(x_2)\big] +\,\tfrac{\Delta x}{3}\big[f(x_2)+4f(x_3)+f(x_4)\big] \cr &\ \ \ +\,\tfrac{\Delta x}{3}\big[f(x_4)+4f(x_5)+f(x_6)\big] +\,\cdots\ +\,\tfrac{\Delta x}{3}\big[f(x_{n-2})+4f(x_{n-1})+f(x_n)\big]\\ &=\Big[f(x_0)\!+4f(x_1)\!+2f(x_2)\!+4f(x_3)\!+2f(x_4)\! +\cdots+ 2f(x_{n-2})\!+4f(x_{n-1})\!+ f(x_n)\Big]\tfrac{\Delta x}{3} \end{align*}

In summary

The Simpson's rule approximation is

\begin{align*} \int_a^b f(x)\,\, d{x} &\approx\Big[f(x_0)\!+4f(x_1)\!+2f(x_2)\!+4f(x_3)\!+2f(x_4)\! +\cdots\\ &\hskip2in \cdots + 2f(x_{n-2})\!+4f(x_{n-1})\!+ f(x_n)\Big]\tfrac{\Delta x}{3} \end{align*}

where \(n\) is even and

\begin{gather*} \Delta x = \tfrac{b-a}{n},\quad x_0=a,\quad x_1=a+\Delta x,\quad x_2=a+2\Delta x,\quad \cdots,\quad x_{n-1}=b-\Delta x,\quad x_n=b \end{gather*}
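In code, Simpson's rule differs from the trapezoidal rule only in the weights \(1,4,2,4,\dots,2,4,1\) and the overall factor \(\frac{\Delta x}{3}\text{.}\) A minimal Python sketch (the name `simpsons_rule` is ours) that also enforces the even-\(n\) requirement:

```python
import math

def simpsons_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] with Simpson's rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of steps")
    dx = (b - a) / n
    total = f(a) + f(b)
    for j in range(1, n):
        # Odd-indexed interior points get weight 4, even-indexed points get weight 2.
        total += (4 if j % 2 == 1 else 2) * f(a + j * dx)
    return total * dx / 3

# Example: f(x) = 4/(1+x^2) on [0, 1]; the result agrees with pi to about six decimals.
approx = simpsons_rule(lambda x: 4 / (1 + x**2), 0, 1, 8)
print(approx)
```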

Notice that Simpson's rule requires essentially no more work than the trapezoidal rule. In both rules we must evaluate \(f(x)\) at \(x=x_0,x_1,\cdots,x_n\text{,}\) but we add those terms multiplied by different constants.

Let's put it to work on our two running examples.

**Solution**

We proceed almost identically to Example 1.11.7 and again use \(n=8\) steps.

- We have the same \(\Delta x,a,b,x_0,\cdots, x_n\) as Example 1.11.7.
- Applying Equation 1.11.9 gives
\begin{align*} &\int_0^1 \frac{4}{1+x^2}\,\, d{x}\\ &\approx \bigg[\frac{4}{1+0^2}+ 4\frac{4}{1+\tfrac{1}{8^2}}+ 2\frac{4}{1+\tfrac{2^2}{8^2}}+ 4\frac{4}{1+\tfrac{3^2}{8^2}}+ 2\frac{4}{1+\tfrac{4^2}{8^2}}\\ &\hskip0.5in +4\frac{4}{1+\tfrac{5^2}{8^2}}+ 2\frac{4}{1+\tfrac{6^2}{8^2}}+ 4\frac{4}{1+\tfrac{7^2}{8^2}}+ \frac{4}{1+\tfrac{8^2}{8^2}}\bigg]\frac{1}{8\times 3}\\ &=\Big[ 4\!+\! 4\times 3.938461538 \!+\! 2\times 3.764705882 \!+\! 4\times 3.506849315 \!+\! 2\times 3.2\\ &\hskip0.5in +4\times 2.876404494 + 2\times 2.56 + 4\times 2.265486726 + 2\Big]\frac{1}{8\times 3}\\ &=3.14159250 \end{align*}

to eight decimal places.
- This agrees with \(\pi\) (the exact value of the integral) to six decimal places. So the error in the approximation generated by eight steps of Simpson's rule is \(|3.14159250-\pi|=1.5\times 10^{-7}\text{,}\) which is \(100\tfrac{|3.14159250-\pi|}{\pi}\% =5\times 10^{-6}\%\) of the exact answer.

It is striking that the absolute error approximating with Simpson's rule is so much smaller than the error from the midpoint and trapezoidal rules.

\begin{align*} &\text{midpoint error}\hskip-0.35in &&= 0.0013\\ &\text{trapezoid error}\hskip-0.35in &&= 0.0026\\ &\text{Simpson error}\hskip-0.35in &&= 0.00000015 \end{align*}

Buoyed by this success, we will also redo Example 1.11.8 using Simpson's rule.

**Solution**

We proceed almost identically to Example 1.11.8 and again use \(n=8\) steps.

- We have the same \(\Delta x,a,b,x_0,\cdots, x_n\) as Example 1.11.8.
- Applying Equation 1.11.9 gives
\begin{align*} &\int_0^\pi\sin x\,\, d{x}\\ &\hskip0.5in\approx \Big[\sin(x_0)+4\sin(x_1)+2\sin(x_2)+\cdots+4\sin(x_7)+\sin(x_8)\Big]\tfrac{\Delta x}{3}\\ &\hskip0.5in=\Big[\sin(0)+ 4\sin(\tfrac{\pi}{8})+ 2\sin(\tfrac{2\pi}{8})+ 4\sin(\tfrac{3\pi}{8})+ 2\sin(\tfrac{4\pi}{8})\cr &\hskip0.5in\phantom{=\Big[\sin(0)\,} +4\sin(\tfrac{5\pi}{8})+ 2\sin(\tfrac{6\pi}{8})+ 4\sin(\tfrac{7\pi}{8})+ \sin(\tfrac{8\pi}{8})\Big]\tfrac{\pi}{8\times 3}\cr &=\hskip0.5in\Big[ 0+ 4\times 0.382683+ 2\times 0.707107+ 4\times 0.923880+ 2\times 1.0\\ &\hskip0.5in\phantom{=\Big[ 0\,}+ 4\times 0.923880+ 2\times 0.707107+ 4\times 0.382683+0\Big]\tfrac{\pi}{8\times 3}\\ &\hskip0.5in=15.280932\times 0.130900\\ &\hskip0.5in=2.00027 \end{align*}

- With only eight steps of Simpson's rule we achieved \(100\tfrac{2.00027-2}{2}=0.014\%\) accuracy.

Again we contrast the error we achieved with the other two rules:

\begin{align*} &\text{midpoint error}\hskip-0.35in &&= 0.013\\ &\text{trapezoid error}\hskip-0.35in &&= 0.026\\ &\text{Simpson error}\hskip-0.35in &&= 0.00027 \end{align*}

This completes our derivation of the midpoint, trapezoidal and Simpson's rules for approximating the values of definite integrals. So far we have not attempted to see how efficient and how accurate the algorithms are in general. That's our next task.

## Three Simple Numerical Integrators — Error Behaviour

Now that we are armed with our three (relatively simple) methods for numerical integration, we should give thought to how practical they might be in the real world. Two obvious considerations when deciding whether or not a given algorithm is of any practical value are

- the amount of computational effort required to execute the algorithm and
- the accuracy that this computational effort yields.

For algorithms like our simple integrators, the bulk of the computational effort usually goes into evaluating the function \(f(x)\text{.}\) The number of evaluations of \(f(x)\) required for \(n\) steps of the midpoint rule is \(n\text{,}\) while the number required for \(n\) steps of the trapezoidal and Simpson's rules is \(n+1\text{.}\) So all three of our rules require essentially the same amount of effort — one evaluation of \(f(x)\) per step.

To get a first impression of the error behaviour of these methods, we apply them to a problem whose answer we know exactly:

\begin{gather*} \int_0^\pi\sin x\,\, d{x}=-\cos x\big|_0^\pi = 2. \end{gather*}

To be a little more precise, we would like to understand how the errors of the three methods change as we increase the effort we put in (as measured by the number of steps \(n\)). The following table lists the error in the approximate value for this number generated by our three rules applied with three different choices of \(n\text{.}\) It also lists the number of evaluations of \(f\) required to compute the approximation.

| \(n\) | Midpoint error | # evals | Trapezoidal error | # evals | Simpson's error | # evals |
| --- | --- | --- | --- | --- | --- | --- |
| 10 | \(8.2\times 10^{-3}\) | 10 | \(1.6\times 10^{-2}\) | 11 | \(1.1\times 10^{-4}\) | 11 |
| 100 | \(8.2\times 10^{-5}\) | 100 | \(1.6\times 10^{-4}\) | 101 | \(1.1\times 10^{-8}\) | 101 |
| 1000 | \(8.2\times 10^{-7}\) | 1000 | \(1.6\times 10^{-6}\) | 1001 | \(1.1\times 10^{-12}\) | 1001 |
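The entries in the table are straightforward to reproduce by implementing the three rules directly. A Python sketch (function names ours), applied to \(\int_0^\pi \sin x\,\, d{x}=2\text{:}\)

```python
import math

def midpoint(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + (j - 0.5) * dx) for j in range(1, n + 1))

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return dx * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + j * dx) for j in range(1, n)))

def simpson(f, a, b, n):
    dx = (b - a) / n
    total = f(a) + f(b) + sum((4 if j % 2 else 2) * f(a + j * dx) for j in range(1, n))
    return total * dx / 3

# Errors against the exact value 2 for the three step counts in the table.
for n in (10, 100, 1000):
    errs = [abs(2 - rule(math.sin, 0, math.pi, n)) for rule in (midpoint, trapezoid, simpson)]
    print(n, errs)  # errors shrink roughly like 1/n^2, 1/n^2 and 1/n^4
```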

Observe that

- Using 101 evaluations of \(f\) worth of Simpson's rule gives an error 75 times smaller than 1000 evaluations of \(f\) worth of the midpoint rule.
- The trapezoidal rule error with \(n\) steps is about twice the midpoint rule error with \(n\) steps.
- With the midpoint rule, increasing the number of steps by a factor of 10 appears to reduce the error by about a factor of \(10^2=100\text{,}\) suggesting that the error is proportional to \(\frac{1}{n^2}\text{.}\)
- With the trapezoidal rule, increasing the number of steps by a factor of 10 likewise appears to reduce the error by about a factor of \(10^2\text{.}\)
- With Simpson's rule, increasing the number of steps by a factor of 10 appears to reduce the error by about a factor of \(10^4\text{,}\) suggesting that the error is proportional to \(\frac{1}{n^4}\text{.}\)

So it looks like

\begin{align*} &\hbox{approx value of $\displaystyle \int_a^b f(x)\,\, d{x}$ given by $n$ midpoint steps } \hskip-0.35in&&\approx \int_a^b f(x)\,\, d{x}+K_M\cdot \frac{1}{n^2}\\ &\hbox{approx value of $\displaystyle\int_a^b f(x)\,\, d{x}$ given by $n$ trapezoidal steps } \hskip-0.35in&&\approx \int_a^b f(x)\,\, d{x}+K_T\cdot \frac{1}{n^2}\\ &\hbox{approx value of $\displaystyle\int_a^b f(x)\,\, d{x}$ given by $n$ Simpson's steps } \hskip-0.35in&&\approx \int_a^b f(x)\,\, d{x}+K_S\cdot \frac{1}{n^4} \end{align*}

with some constants \(K_M,\ K_T\) and \(K_S\text{.}\) It also seems that \(K_T\approx 2 K_M\text{.}\)
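The conjecture \(K_T\approx 2K_M\) can also be tested numerically: if the error behaves like \(K/n^2\text{,}\) then \(n^2 e_n\) should be approximately the constant \(K\text{.}\) A sketch (helper names ours), again for \(\int_0^\pi \sin x\,\, d{x}\text{:}\)

```python
import math

def midpoint(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + (j - 0.5) * dx) for j in range(1, n + 1))

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return dx * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + j * dx) for j in range(1, n)))

n = 100
# If e_n ~ K / n^2, then n^2 * e_n estimates the constant K for each rule.
K_M = n**2 * abs(2 - midpoint(math.sin, 0, math.pi, n))
K_T = n**2 * abs(2 - trapezoid(math.sin, 0, math.pi, n))
print(K_M, K_T, K_T / K_M)  # the ratio is close to 2
```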

To test these conjectures for the behaviour of the errors we apply our three rules with about ten different choices of \(n\) of the form \(n=2^m \) with \(m\) integer. Figure 1.11.12 contains two graphs of the results. The left-hand plot shows the results for the midpoint and trapezoidal rules and the right-hand plot shows the results for Simpson's rule.

For each rule we are expecting (based on our conjectures above) that the error

\begin{align*} e_n &= | \text{exact value } - \text{ approximate value}| \end{align*}

with \(n\) steps is (roughly) of the form

\begin{gather*} e_n=K\frac{1}{n^k} \end{gather*}

for some constants \(K\) and \(k\text{.}\) We would like to test if this is really the case, by graphing \(Y=e_n\) against \(X=n\) and seeing if the graph “looks right”. But it is not easy to tell whether or not a given curve really is \(Y=\frac{K}{X^k}\text{,}\) for some specific \(k\text{,}\) by just looking at it. However, your eye is pretty good at determining whether or not a graph is a straight line. Fortunately, there is a little trick that turns the curve \(Y=\tfrac{K}{X^k}\) into a straight line — no matter what \(k\) is.

Instead of plotting \(Y\) against \(X\text{,}\) we plot \(\log Y\) against \(\log X\text{.}\) This transformation^{ 8} works because when \(Y=\frac{K}{X^k}\)

\begin{align*} \log Y &= \log K - k \log X \end{align*}

So plotting \(y=\log Y\) against \(x=\log X\) gives the straight line \(y=\log K -kx\text{,}\) which has slope \(-k\) and \(y\)-intercept \(\log K\text{.}\)

The three graphs in Figure 1.11.12 plot \(y=\log_2 e_n\) against \(x=\log_2 n\) for our three rules. Note that we have chosen to use logarithms^{ 9} with this “unusual base” because it makes it very clear how much the error improves if we *double* the number of steps used. To be more precise — one unit step along the \(x\)-axis represents changing \(n \mapsto 2n\text{.}\) For example, applying Simpson's rule with \(n=2^4\) steps results in an error of \(0.0000166\text{,}\) so the point \((x=\log_2 2^4=4, y=\log_2 0.0000166 = \frac{\ln 0.0000166}{\ln 2} = -15.8)\) has been included on the graph. Doubling the effort used — that is, doubling the number of steps to \(n=2^5\) — results in an error of \(0.00000103\text{.}\) So, the data point \((x=\log_2 2^5=5\ ,\ y=\log_2 0.00000103 =\frac{\ln 0.00000103}{\ln 2}=-19.9)\) lies on the graph. Note that the \(x\)-coordinates of these points differ by 1 unit.

For each of the three sets of data points, a straight line has also been plotted “through” the data points. A procedure called linear regression^{ 10} has been used to decide precisely which straight line to plot. It provides a formula for the slope and \(y\)-intercept of the straight line which “best fits” any given set of data points. From the three lines, it sure looks like \(k=2\) for the midpoint and trapezoidal rules and \(k=4\) for Simpson's rule. It also looks like the ratio between the value of \(K\) for the trapezoidal rule, namely \(K=2^{0.7253}\text{,}\) and the value of \(K\) for the midpoint rule, namely \(K=2^{-0.2706}\text{,}\) is pretty close to 2: \(2^{0.7253}/2^{-0.2706}=2^{0.9959}\text{.}\)
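The slope of such a fitted line can also be computed directly, without plotting anything. Here is a minimal sketch (our own, using the midpoint rule on \(\int_0^\pi \sin x\,\, d{x}\text{,}\) whose exact value is \(2\)) that fits a least-squares line to the points \((\log_2 n, \log_2 e_n)\text{:}\)

```python
import math

def midpoint(f, a, b, n):
    # n-step midpoint rule on [a, b].
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

# Data points (x, y) = (log2 n, log2 e_n) for n = 2^2, ..., 2^9.
pts = [(m, math.log2(abs(2.0 - midpoint(math.sin, 0, math.pi, 2 ** m))))
       for m in range(2, 10)]

# Least-squares slope of the best-fit line y = (log2 K) - k x.
xbar = sum(x for x, _ in pts) / len(pts)
ybar = sum(y for _, y in pts) / len(pts)
slope = (sum((x - xbar) * (y - ybar) for x, y in pts)
         / sum((x - xbar) ** 2 for x, _ in pts))
print(round(slope, 3))  # close to -2, so k = 2 for the midpoint rule
```

The intercept of the same fit approximates \(\log_2 K\text{,}\) which is how the values \(2^{-0.2706}\) and \(2^{0.7253}\) quoted above were obtained.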

The intuition, about the error behaviour, that we have just developed is in fact correct — provided the integrand \(f(x)\) is reasonably smooth. To be more precise

Assume that \(|f''(x)| \leq M\) for all \(a\leq x \leq b\text{.}\) Then

\begin{align*} &\text{the total error introduced by the midpoint rule is bounded by } & \frac{M}{24} \frac{(b-a)^3}{n^2}\\ &\text{and the total error introduced by the trapezoidal rule is bounded by } & \frac{M}{12} \frac{(b-a)^3}{n^2} \end{align*}

when approximating \(\displaystyle \int_a^b f(x)\, d{x}\text{.}\) Further, if \(|f^{(4)}(x)|\leq L\) for all \(a\leq x \leq b\text{,}\) then

\begin{align*} &\text{the total error introduced by Simpson's rule is bounded by } & \frac{L}{180} \frac{(b-a)^5}{n^4}. \end{align*}

The first of these error bounds is proven in the following (optional) section. Here are some examples which illustrate how they are used. First let us check that the above result is consistent with our data in Figure 1.11.12

- The integral \(\int_0^\pi \sin x\,\, d{x}\) has \(b-a=\pi\text{.}\)
- The second derivative of the integrand satisfies
\begin{align*} \left|\frac{d^{2}}{dx^{2}}\sin x\right| &= |-\sin x| \leq 1 \end{align*}

So we take \(M=1\text{.}\)

- So the error, \(e_n\text{,}\) introduced when \(n\) steps are used is bounded by
\begin{align*} |e_n|&\le\frac{M}{24}\frac{(b-a)^3}{n^2}\\ &=\frac{\pi^3}{24}\frac{1}{n^2}\\ &\approx 1.29\frac{1}{n^2} \end{align*}

- The data in the graph in Figure 1.11.12 gives
\begin{align*} |e_n| &\approx 2^{-.2706}\frac{1}{n^2}=0.83\frac{1}{n^2} \end{align*}

which is consistent with the bound \(|e_n|\le \frac{\pi^3}{24}\frac{1}{n^2}\text{.}\)

In a typical application we would be asked to evaluate a given integral to some specified accuracy. For example, if you are a manufacturer and your machinery can only cut materials to an accuracy of \({\tfrac{1}{10}}^{\rm th}\) of a millimeter, there is no point in making design specifications more accurate than \({\tfrac{1}{10}}^{\rm th}\) of a millimeter.

Suppose, for example, that we wish to use the midpoint rule to evaluate^{ 11}

\begin{gather*} \int_0^1 e^{-x^2}\, d{x} \end{gather*}

to within an accuracy of \(10^{-6}\text{.}\)

**Solution**

- The integral has \(a=0\) and \(b=1\text{.}\)
- The first two derivatives of the integrand are
\begin{align*} \frac{d}{dx}e^{-x^2}&=-2xe^{-x^2} \hskip2in \text{and}\\ \frac{d^{2}}{dx^{2}}e^{-x^2} &=\frac{d}{dx}\big(-2xe^{-x^2}\big) =-2e^{-x^2}+4x^2e^{-x^2}=2(2x^2-1)e^{-x^2} \end{align*}

- As \(x\) runs from 0 to 1, \(2x^2-1\) increases from \(-1\) to \(1\text{,}\) so that
\begin{gather*} 0\le x\le 1\implies |2x^2-1|\le 1,\ e^{-x^2}\le 1\implies \big|2(2x^2-1)e^{-x^2}\big|\le 2 \end{gather*}

So we take \(M=2\text{.}\)

- The error introduced by the \(n\) step midpoint rule is at most
\begin{align*} e_n & \leq \frac{M}{24}\frac{(b-a)^3}{n^2}\\ &\leq \frac{2}{24}\frac{(1-0)^3}{n^2} = \frac{1}{12n^2} \end{align*}

- We need this error to be smaller than \(10^{-6}\) so
\begin{align*} e_n & \leq \frac{1}{12n^2} \leq 10^{-6} & \text{and so }\\ 12n^2 &\geq 10^6 & \text{clean up}\\ n^2 &\geq \frac{10^6}{12} = 83333.3 & \text{square root both sides}\\ n &\geq 288.7 \end{align*}

So \(289\) steps of the midpoint rule will do the job.

- In fact \(n=289\) results in an error of about \(3.7\times 10^{-7}\text{.}\)

That seems like far too much work, and the trapezoidal rule will have roughly twice the error. So we should look at Simpson's rule.
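The claim that \(n=289\) midpoint steps give an error of about \(3.7\times 10^{-7}\) is easy to check numerically, since the exact value can be expressed through the error function: \(\int_0^1 e^{-x^2}\, d{x} = \frac{\sqrt{\pi}}{2}\operatorname{erf}(1)\text{.}\) A quick sketch (our own check, not part of the original text):

```python
import math

def midpoint(f, a, b, n):
    # n-step midpoint rule on [a, b].
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

exact = math.sqrt(math.pi) / 2 * math.erf(1)   # int_0^1 e^{-x^2} dx
approx = midpoint(lambda x: math.exp(-x * x), 0.0, 1.0, 289)
err = abs(exact - approx)
print(err)  # about 3.7e-7, safely within the 1e-6 target
```

Note that the actual error is noticeably smaller than the bound \(\frac{1}{12\cdot 289^2}\approx 10^{-6}\text{;}\) the bound uses the worst-case value of \(|f''|\text{,}\) which is only attained at a single point.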

Suppose now that we wish to evaluate \(\int_0^1 e^{-x^2}\,\, d{x}\) to within an accuracy of \(10^{-6}\) — but now using Simpson's rule. How many steps should we use?

**Solution**

- Again we have \(a=0,b=1\text{.}\)
- We then need to bound \(\frac{d^{4}}{dx^{4}}e^{-x^2}\) on the domain of integration, \(0\leq x\leq 1\text{.}\)
\begin{align*} \frac{d^{3}}{dx^{3}}e^{-x^2} &=\frac{d}{dx}\big\{2(2x^2-1)e^{-x^2}\big\} =8xe^{-x^2}-4x(2x^2-1)e^{-x^2}\\ &=4(-2x^3+3x)e^{-x^2}\\ \frac{d^{4}}{dx^{4}}e^{-x^2} &=\frac{d}{dx}\big\{4(-2x^3+3x)e^{-x^2}\big\}\\ &=4(-6x^2+3)e^{-x^2}\hskip-4pt-8x(-2x^3+3x)e^{-x^2}\\ &=4(4x^4-12x^2+3)e^{-x^2} \end{align*}

- Now, for any \(x\text{,}\) \(e^{-x^2}\le 1\text{.}\) Also, for \(0\le x\le 1\text{,}\)
\begin{align*} 0 & \leq x^2, x^4 \leq 1 & \text{ so}\\ 3 & \leq 4x^4+3 \leq 7 & \text{ and }\\ -12 & \leq -12x^2 \leq 0 & \text{adding these together gives}\\ -9 & \leq 4x^4-12x^2 + 3 \leq 7 \end{align*}

Consequently, \(|4x^4-12x^2+3|\) is bounded by \(9\) and so

\begin{gather*} \left|\frac{d^{4}}{dx^{4}}e^{-x^2}\right| \leq 4\times 9=36 \end{gather*}

So take \(L=36\text{.}\)

- The error introduced by the \(n\) step Simpson's rule is at most
\begin{align*} e_n & \leq \frac{L}{180}\frac{(b-a)^5}{n^4}\\ & \leq \frac{36}{180} \frac{(1-0)^5}{n^4} = \frac{1}{5n^4} \end{align*}

- In order for this error to be no more than \(10^{-6}\) we require \(n\) to satisfy
\begin{align*} e_n &\leq \frac{1}{5n^4} \leq 10^{-6} & \text{and so}\\ 5n^4 & \geq 10^6\\ n^4 &\geq 200000 & \text{take fourth root}\\ n & \geq 21.15 \end{align*}

So \(22\) steps of Simpson's rule will do the job.

- \(n=22\) steps actually results in an error of \(3.5\times 10^{-8}\text{.}\) The reason that we get an error so much smaller than we need is that we have overestimated the number of steps required. This, in turn, occurred because we made quite a rough bound of \(\left|\frac{d^{4}}{dx^{4}}f(x)\right|\leq 36\text{.}\) If we are more careful then we will get a slightly smaller \(n\text{.}\) It actually turns out^{ 12} that you only need \(n=10\) to approximate within \(10^{-6}\text{.}\)
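Both the \(n=22\) error of \(3.5\times 10^{-8}\) and the claim that \(n=10\) already achieves \(10^{-6}\) accuracy can be confirmed with a few lines of Python (our own check, using \(\int_0^1 e^{-x^2}\, d{x}=\frac{\sqrt{\pi}}{2}\operatorname{erf}(1)\) as the exact value):

```python
import math

def simpson(f, a, b, n):
    # n-step (n even) composite Simpson's rule, weights 1, 4, 2, ..., 2, 4, 1.
    dx = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * dx)
                          for i in range(1, n))
    return dx * s / 3

exact = math.sqrt(math.pi) / 2 * math.erf(1)   # int_0^1 e^{-x^2} dx
for n in (10, 22):
    print(n, abs(exact - simpson(lambda x: math.exp(-x * x), 0.0, 1.0, n)))
```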

## Optional — An error bound for the midpoint rule

We now try to develop some understanding as to why we got the above experimental results. We start with the error generated by a single step of the midpoint rule. That is, the error introduced by the approximation

\[ \int_{x_0}^{x_1}f(x)\,\, d{x}\approx f(\bar x_1)\Delta x \qquad\hbox{ where } \Delta x=x_1-x_0,\ \bar x_1=\tfrac{x_0+x_1}{2} \nonumber \]

To do this we are going to need to apply integration by parts in a sneaky way. Let us start by considering^{ 13} a subinterval \(\alpha \leq x \leq \beta\) and let's call the width of the subinterval \(2q\) so that \(\beta=\alpha+2q\text{.}\) If we were to now apply the midpoint rule to this subinterval, then we would write

\begin{align*} \int_\alpha^\beta f(x) \, d{x} & \approx 2q \cdot f(\alpha+q) = q f(\alpha+q) + q f(\beta-q) \end{align*}

since the interval has width \(2q\) and the midpoint is \(\alpha+q=\beta-q\text{.}\)

The sneaky trick we will employ is to write

\begin{align*} \int_\alpha^\beta f(x) \, d{x} &= \int_\alpha^{\alpha+q}f(x)\, d{x} + \int_{\beta-q}^\beta f(x)\, d{x} \end{align*}

and then examine each of the integrals on the right-hand side (using integration by parts) and show that they are each of the form

\begin{align*} \int_\alpha^{\alpha+q}f(x)\, d{x} & \approx q f(\alpha+q) + \text{small error term}\\ \int_{\beta-q}^\beta f(x)\, d{x} &\approx q f(\beta-q) + \text{small error term} \end{align*}

Let us apply integration by parts to \(\int_\alpha^{\alpha+q} f(x)\, d{x}\) — with \(u=f(x), \, d{v}=\, d{x}\) so \(\, d{u}=f'(x)\, d{x}\) and we will make the slightly non-standard choice of \(v=x-\alpha\text{:}\)

\begin{align*} \int_\alpha^{\alpha+q} f(x)\, d{x} &= \big[ (x-\alpha)f(x)\big]_\alpha^{\alpha+q} - \int_\alpha^{\alpha+q} (x-\alpha) f'(x) \, d{x}\\ &= q f(\alpha+q) - \int_\alpha^{\alpha+q} (x-\alpha) f'(x) \, d{x} \end{align*}

Notice that the first term on the right-hand side is the term we need, and that our non-standard choice of \(v\) allowed us to avoid introducing an \(f(\alpha)\) term.

Now integrate by parts again using \(u=f'(x), \, d{v}=(x-\alpha)\, d{x}\text{,}\) so \(\, d{u}=f''(x)\, d{x},\ v = \frac{(x-\alpha)^2}{2}\text{:}\)

\begin{align*} \int_\alpha^{\alpha+q} f(x)\, d{x} &= q f(\alpha+q) - \int_\alpha^{\alpha+q} (x-\alpha)f'(x)\, d{x}\\ &= q f(\alpha+q) - \left[ \frac{(x-\alpha)^2}{2} f'(x) \right]_\alpha^{\alpha+q} + \int_\alpha^{\alpha+q} \frac{(x-\alpha)^2}{2}f''(x)\, d{x}\\ &= q f(\alpha+q) - \frac{q^2}{2}f'(\alpha+q) + \int_\alpha^{\alpha+q} \frac{(x-\alpha)^2}{2}f''(x)\, d{x} \end{align*}

To obtain a similar expression for the other integral, we repeat the above steps and obtain:

\begin{align*} \int_{\beta-q}^\beta f(x)\, d{x} &= q f(\beta-q) + \frac{q^2}{2}f'(\beta-q) + \int_{\beta-q}^\beta \frac{(x-\beta)^2}{2}f''(x)\, d{x} \end{align*}

Now add together these two expressions

\[\begin{align*} \int_\alpha^{\alpha+q} f(x)\, d{x} + \int_{\beta-q}^\beta f(x)\, d{x} &= q f(\alpha+q) + q f(\beta-q) + \frac{q^2}{2}\left( f'(\beta-q)-f'(\alpha+q) \right)\\ & + \int_\alpha^{\alpha+q} \frac{(x-\alpha)^2}{2}f''(x)\, d{x} + \int_{\beta-q}^\beta \frac{(x-\beta)^2}{2}f''(x)\, d{x}\\ \end{align*}\]

Then since \(\alpha+q=\beta-q\) we can combine the integrals on the left-hand side and eliminate some terms from the right-hand side:

\begin{align*} \int_\alpha^\beta f(x)\, d{x} &= 2q f(\alpha+q) + \int_\alpha^{\alpha+q} \frac{(x-\alpha)^2}{2}f''(x)\, d{x} + \int_{\beta-q}^\beta \frac{(x-\beta)^2}{2}f''(x)\, d{x} \end{align*}

Rearrange this expression a little and take absolute values

\begin{align*} \left| \int_\alpha^\beta f(x)\, d{x} - 2q f(\alpha+q) \right| &\leq \left| \int_\alpha^{\alpha+q} \frac{(x-\alpha)^2}{2}f''(x)\, d{x} \right| + \left|\int_{\beta-q}^\beta \frac{(x-\beta)^2}{2}f''(x)\, d{x}\right| \end{align*}

where we have also made use of the triangle inequality^{ 14}. By assumption \(|f''(x)| \leq M\) on the interval \(\alpha \leq x \leq \beta\text{,}\) so

\begin{align*} \left| \int_\alpha^\beta f(x)\, d{x} - 2q f(\alpha+q) \right| & \leq M \int_\alpha^{\alpha+q} \frac{(x-\alpha)^2}{2} \, d{x} + M \int_{\beta-q}^\beta \frac{(x-\beta)^2}{2} \, d{x}\\ &= \frac{Mq^3}{3} = \frac{M(\beta-\alpha)^3}{24} \end{align*}

where we have used \(q = \frac{\beta-\alpha}{2}\) in the last step.

Thus on any interval \(x_i \leq x \leq x_{i+1}=x_i+\Delta x\)

\begin{align*} \left| \int_{x_i}^{x_{i+1}} f(x)\, d{x} - \Delta x f\left(\frac{x_i+x_{i+1}}{2}\right) \right| & \leq \frac{M}{24} ( \Delta x)^3 \end{align*}

Putting everything together we see that the error using the midpoint rule is bounded by

\begin{gather*} \left| \int_a^b f(x) \, d{x} - \left[ f(\bar x_1)+f(\bar x_2)+\cdots +f(\bar x_n) \right]\Delta x \right|\\ \leq \left| \int_{x_0}^{x_1} f(x)\, d{x} - \Delta x f(\bar x_1) \right| +\cdots+ \left| \int_{x_{n-1}}^{x_n} f(x)\, d{x} - \Delta x f(\bar x_n) \right|\\ \leq n \times \frac{M}{24} ( \Delta x)^3 = n \times \frac{M}{24} \left( \frac{b-a}{n}\right)^3 = \frac{M(b-a)^3}{24 n^2} \end{gather*}

as required.

A very similar analysis shows that, as was stated in Theorem 1.11.13 above,

- the total error introduced by the trapezoidal rule is bounded by \(\displaystyle \frac{M}{12}\frac{(b-a)^3}{n^2}\text{,}\)
- the total error introduced by Simpson's rule is bounded by \(\displaystyle \frac{L}{180}\frac{(b-a)^5}{n^4}\text{,}\) where \(L\) is a bound on \(|f^{(4)}(x)|\) for \(a\le x\le b\text{.}\)

## Exercises

Recall that we are using \(\log x\) to denote the logarithm of \(x\) with base \(e\text{.}\) In other courses it is often denoted \(\ln x\text{.}\)

Stage 1

Suppose we approximate an object to have volume \(1.5 \mathrm{m}^3\text{,}\) when its exact volume is \(1.387 \mathrm{m}^3\text{.}\) Give the relative error, absolute error, and percent error of our approximation.

Consider approximating \(\displaystyle\int_2^{10} f(x) \, d{x}\text{,}\) where \(f(x)\) is the function in the graph below.

- Draw the rectangles associated with the midpoint rule approximation and \(n=4\text{.}\)
- Draw the trapezoids associated with the trapezoidal rule approximation and \(n=4\text{.}\)

You don't have to give an approximation.

Let \(f(x) = -\dfrac{1}{12}x^4+\dfrac{7}{6}x^3-3x^2\text{.}\)

- Find a reasonable value \(M\) such that \(|f''(x)| \leq M\) for all \(1 \leq x \leq 6\text{.}\)
- Find a reasonable value \(L\) such that \(|f^{(4)}(x)| \leq L\) for all \(1 \leq x \leq 6\text{.}\)

Let \(f(x) = x\sin x+2\cos x\text{.}\) Find a reasonable value \(M\) such that \(|f''(x)| \leq M\) for all \(-3 \leq x \leq 2\text{.}\)

Consider the quantity \(A=\displaystyle\int_{-\pi}^{\pi} \cos x \, d{x}\text{.}\)

- Find the upper bound on the error using Simpson's rule with \(n=4\) to approximate \(A\) using Theorem 1.11.13 in the text.
- Find the Simpson's rule approximation of \(A\) using \(n=4\text{.}\)
- What is the (actual) absolute error in the Simpson's rule approximation of \(A\) with \(n=4\text{?}\)

Give a function \(f(x)\) such that:

- \(f''(x) \leq 3\) for every \(x\) in \([0,1]\text{,}\) and
- the error using the trapezoidal rule approximating \(\displaystyle\int_0^1 f(x) \, d{x}\) with \(n=2\) intervals is exactly \(\dfrac{1}{16}\text{.}\)

Suppose my mother is under 100 years old, and I am under 200 years old. (We're going somewhere with this.) Who is older?

- True or False: for fixed positive constants \(M\text{,}\) \(n\text{,}\) \(a\text{,}\) and \(b\text{,}\) with \(b \gt a\text{,}\)
\[ \dfrac{M}{24}\dfrac{(b-a)^3}{n^2}\leq \dfrac{M}{12}\dfrac{(b-a)^3}{n^2} \nonumber \]

- True or False: for a function \(f(x)\) and fixed constants \(n\text{,}\) \(a\text{,}\) and \(b\text{,}\) with \(b \gt a\text{,}\) the \(n\)-interval midpoint approximation of \(\displaystyle\int_a^b f(x) \, d{x}\) is more accurate than the \(n\)-interval trapezoidal approximation.

Decide whether the following statement is true or false. If false, provide a counterexample. If true, provide a brief justification.

When \(f(x)\) is positive and concave up, any trapezoidal rule approximation for \(\displaystyle\int_{a}^{b} f(x) \,\, d{x}\) will be an upper estimate for \(\displaystyle\int_{a}^{b} f(x) \,\, d{x}\text{.}\)

Give a polynomial \(f(x)\) with the property that the Simpson's rule approximation of \(\displaystyle\int_a^b f(x) \, d{x}\) is exact for all \(a\text{,}\) \(b\text{,}\) and \(n\text{.}\)

Stage 2

Questions 11 and 12 ask you to approximate a given integral using the formulas in Equations 1.11.2, 1.11.6, and 1.11.9 in the text.

Questions 13 through 17 ask you to approximate a quantity based on observed data.

In Questions 18 through 24, we practice finding error bounds for our approximations.

Write out all three approximations of \(\displaystyle\int_0^{30} \frac{1}{x^3+1} \, d{x}\) with \(n=6\text{.}\) (That is: midpoint, trapezoidal, and Simpson's.) You do not need to simplify your answers.

Find the midpoint rule approximation to \(\displaystyle\int_0^\pi \sin x\, d{x}\) with \(n = 3\text{.}\)

The solid \(V\) is 40 cm high and the horizontal cross sections are circular disks. The table below gives the diameters of the cross sections in centimeters at 10 cm intervals. Use the trapezoidal rule to estimate the volume of \(V\text{.}\)

height | 0 | 10 | 20 | 30 | 40 |

diameter | 24 | 16 | 10 | 6 | 4 |

A \(6\) metre long cedar log has cross sections that are approximately circular. The diameters of the log, measured at one metre intervals, are given below:

metres from left end of log | 0 | 1 | 2 | 3 | 4 | 5 | 6 |

diameter in metres | 1.2 | 1 | 0.8 | 0.8 | 1 | 1 | 1.2 |

Use Simpson's Rule to estimate the volume of the log.

The circumference of an 8 metre high tree at different heights above the ground is given in the table below. Assume that all horizontal cross-sections of the tree are circular disks.

height (metres) | 0 | 2 | 4 | 6 | 8 |

circumference (metres) | 1.2 | 1.1 | 1.3 | 0.9 | 0.2 |

Use Simpson's rule to approximate the volume of the tree.

By measuring the areas enclosed by contours on a topographic map, a geologist determines the cross sectional areas \(A\) in \(\mathrm{m}^2\) of a \(60\) m high hill. The table below gives the cross sectional area \(A(h)\) at various heights \(h\text{.}\) The volume of the hill is \(V=\int_0^{60} A(h)\,\, d{h}\text{.}\)

\(h\) | 0 | 10 | 20 | 30 | 40 | 50 | 60 |

\(A\) | 10,200 | 9,200 | 8,000 | 7,100 | 4,500 | 2,400 | 100 |

- If the geologist uses the Trapezoidal Rule to estimate the volume of the hill, what will be their estimate, to the nearest 1,000\(\mathrm{m}^3\text{?}\)
- What will be the geologist's estimate of the volume of the hill if they use Simpson's Rule instead of the Trapezoidal Rule?

The graph below applies to both parts (a) and (b).

- Use the Trapezoidal Rule, with \(n = 4\text{,}\) to estimate the area under the graph between \(x = 2\) and \(x = 6\text{.}\) Simplify your answer completely.
- Use Simpson's Rule, with \(n = 4\text{,}\) to estimate the area under the graph between \(x = 2\) and \(x = 6\text{.}\)

The integral \(\displaystyle\int_{-1}^{1} \sin(x^2) \, \, d{x}\) is estimated using the Midpoint Rule with \(1000\) intervals. Show that the absolute error in this approximation is at most \(2\cdot 10^{-6}\text{.}\)

You may use the fact that when approximating \(\int_a^b f(x) \, \, d{x}\) with the Midpoint Rule using \(n\) points, the absolute value of the error is at most \(M(b-a)^3/24n^2\) when \(\left|f''(x)\right|\leq M\) for all \(x\in[a,b]\text{.}\)

The total error using the midpoint rule with \(n\) subintervals to approximate the integral of \(f(x)\) over \([a,b]\) is bounded by \(\dfrac{M (b-a)^3}{(24n^2)}\text{,}\) if \(|f''(x)| \le M\) for all \(a \le x \le b\text{.}\)

Using this bound, if the integral \(\displaystyle\int_{-2}^{1} 2x^4 \,\, d{x}\) is approximated using the midpoint rule with \(60\) subintervals, what is the largest possible error between the approximation \(M_{60}\) and the true value of the integral?

Both parts of this question concern the integral \(I = \displaystyle\int_{0}^{2} (x-3)^5\,\, d{x}\text{.}\)

- Write down the Simpson's Rule approximation to \(I\) with \(n=6\text{.}\) Leave your answer in calculator-ready form.
- Which method of approximating \(I\) results in a smaller error bound: the Midpoint Rule with \(n=100\) intervals, or Simpson's Rule with \(n=10\) intervals? You may use the formulas
\begin{gather*} |E_M| \le \frac{M(b-a)^3}{24n^2} \qquad\text{and}\qquad |E_S| \le \frac{L(b-a)^5}{180n^4}, \end{gather*}

where \(M\) is an upper bound for \(|f''(x)|\) and \(L\) is an upper bound for \(|f^{(4)}(x)|\text{,}\) and \(E_M\) and \(E_S\) are the absolute errors arising from the midpoint rule and Simpson's rule, respectively.

Find a bound for the error in approximating \(\displaystyle\int_1^5 \frac{1}{x}\,\, d{x}\) using Simpson's rule with \(n = 4\text{.}\) Do not write down the Simpson's rule approximation \(S_4\text{.}\)

In general the error in approximating \(\int_a^b f(x)\, d{x}\) using Simpson's rule with \(n\) steps is bounded by \(\dfrac{L(b-a)}{180}(\Delta x)^4\) where \(\Delta x=\dfrac{b-a}{n}\) and \(L\ge |f^{(4)}(x)|\) for all \(a\le x\le b\text{.}\)

Find a bound for the error in approximating

\[ \int_0^1 \big(e^{-2x}+3x^3\big)\,\, d{x} \nonumber \]

using Simpson's rule with \(n = 6\text{.}\) Do not write down the Simpson's rule approximation \(S_n\text{.}\)

In general, the error in approximating \(\int_a^b f(x)\, d{x}\) using Simpson's rule with \(n\) steps is bounded by \(\dfrac{ L(b-a)}{180}(\Delta x)^4\) where \(\Delta x=\dfrac{b-a}{n}\) and \(L\ge |f^{(4)}(x)|\) for all \(a\le x\le b\text{.}\)

Let \(I=\displaystyle\int_1^2 (1/x)\,\, d{x}\text{.}\)

- Write down the trapezoidal approximation \(T_4\) for \(I\text{.}\)
*You do not need to simplify your answer.* - Write down the Simpson's approximation \(S_4\) for \(I\text{.}\)
*You do not need to simplify your answer.* - Without computing \(I\text{,}\) find an upper bound for \(|I - S_4|\text{.}\) You may use the fact that if \(\big|f^{(4)}(x)\big|\le L\) on the interval \([a, b]\text{,}\) then the error in using \(S_n\) to approximate \(\int_a^b f(x)\,\, d{x}\) has absolute value less than or equal to \(L(b-a)^5/180n^4\text{.}\)

A function \(s(x)\) satisfies \(s(0)=1.00664\text{,}\) \(s(2)=1.00543\text{,}\) \(s(4)=1.00435\text{,}\) \(s(6)=1.00331\text{,}\) \(s(8)=1.00233\text{.}\) Also, it is known to satisfy \(\big|s^{(k)}(x)\big|\le \dfrac{k}{1000}\) for \(0\le x\le 8\) and all positive integers \(k\text{.}\)

- Find the best Trapezoidal Rule and Simpson's Rule approximations that you can for \(\displaystyle I=\int_0^8 s(x)\, d{x}\text{.}\)
- Determine the maximum possible sizes of errors in the approximations you gave in part (a). Recall that if a function \(f(x)\) satisfies \(\big|f^{(k)}(x)\big|\le K_k\) on \([a,b]\text{,}\) then
\[ \bigg|\int_a^b f(x)\, d{x} -T_n\bigg|\le \frac{K_2(b-a)^3}{12n^2} \quad\hbox{and}\quad \bigg|\int_a^b f(x)\, d{x} -S_n\bigg|\le \frac{K_4(b-a)^5}{180n^4} \nonumber \]

Consider the trapezoidal rule for making numerical approximations to \(\displaystyle\int_a^b f(x)\, d{x}\text{.}\) The error for the trapezoidal rule satisfies \(|E_T| \le \dfrac{ M(b - a)^3}{12n^2}\), where \(|f''(x)| \le M\) for \(a \le x \le b\text{.}\) If \(-2 \lt f''(x) \lt 0\) for \(1 \le x \le 4\text{,}\) find a value of \(n\) to guarantee the trapezoidal rule will give an approximation for \(\displaystyle\int_1^4 f(x)\, d{x}\) with absolute error, \(|E_T|\text{,}\) less than \(0.001\text{.}\)

Stage 3

A swimming pool has the shape shown in the figure below. The vertical cross-sections of the pool are semi-circular disks. The distances in feet across the pool are given in the figure at 2-foot intervals along the sixteen-foot length of the pool. Use Simpson's Rule to estimate the volume of the pool.

A piece of wire 1m long with radius 1mm is made in such a way that the density varies in its cross-section, but is radially symmetric (that is, the local density \(g(r)\) in \({\rm kg/m^3}\) depends only on the distance \(r\) in mm from the center of the wire). Take as given that the total mass \(W\) of the wire in kg is given by

\begin{gather*} W=2\pi\times 10^{-6}\int_0^1 rg(r)\,\, d{r} \end{gather*}

Data from the manufacturer is given below:

\(r\) | 0 | 1/4 | 1/2 | 3/4 | 1 |

\(g(r)\) | 8051 | 8100 | 8144 | 8170 | 8190 |

- Find the best Trapezoidal Rule approximation that you can for \(W\) based on the data in the table.
- Suppose that it is known that \(|g'(r)| \lt 200\) and \(|g''(r)| \lt 150\) for all values of \(r\text{.}\) Determine the maximum possible size of the error in the approximation you gave in part (a). Recall that if a function \(f(x)\) satisfies \(|f''(x)|\le M\) on \([a,b]\text{,}\) then
\begin{gather*} |I-T_n|\le\frac{M(b-a)^3}{12n^2} \end{gather*}

where \(I=\int_a^b f(x)\,\, d{x}\) and \(T_n\) is the Trapezoidal Rule approximation to \(I\) using \(n\) subintervals.

Simpson's rule can be used to approximate \(\log 2\text{,}\) since \(\displaystyle\log 2=\int_1^2\frac{1}{x}\,\, d{x}\text{.}\)

- Use Simpson's rule with 6 subintervals to approximate \(\log 2\text{.}\)
- How many subintervals are required in order to guarantee that the absolute error is less than \(0.00001\text{?}\)

Note that if \(E_n\) is the error using \(n\) subintervals, then \(|E_n|\le\dfrac{L(b-a)^5}{180n^4}\) where \(L\) is the maximum absolute value of the fourth derivative of the function being integrated and \(a\) and \(b\) are the end points of the interval.

Let \(I={\displaystyle\int_0^2}\cos(x^2)\, d{x}\) and let \(S_n\) be the Simpson's rule approximation to \(I\) using \(n\) subintervals.

- Estimate the maximum absolute error in using \(S_8\) to approximate \(I\text{.}\)
- How large should \(n\) be in order to ensure that \(|I-S_n|\le 0.0001\text{?}\)

Note: The graph of \(f''''(x)\text{,}\) where \(f(x)=\cos(x^2)\text{,}\) is shown below. The absolute error in the Simpson's rule approximation is bounded by \(\dfrac{L(b-a)^5}{180n^4}\) when \(|f''''(x)|\le L\) on the interval \([a,b]\text{.}\)

Define a function \(f(x)\) and an integral \(I\) by

\begin{gather*} f(x)=\int_0^{x^2}\sin(\sqrt{t})\,\, d{t},\qquad I=\int_0^1 f(t)\,\, d{t} \end{gather*}

Estimate how many subdivisions are needed to calculate \(I\) to five decimal places of accuracy using the trapezoidal rule.

Note that if \(E_n\) is the error using \(n\) subintervals, then \(|E_n|\le\dfrac{M(b-a)^3}{12n^2\vphantom{\frac{1}{2}}}\text{,}\) where \(M\) is the maximum absolute value of the second derivative of the function being integrated and \(a\) and \(b\) are the limits of integration.

Let \(f(x)\) be a function with \(f''(x) = \dfrac{x^2}{x+1}\text{.}\) (For example, \(f(x)=\frac{1}{6}x^3-\frac{1}{2}x^2+(1+x)\log|x+1|\) will do, but you don't need to know what \(f(x)\) is for this problem.)

- Show that \(|f''(x)| \leq 1\) whenever \(x\) is in the interval \([0,1]\text{.}\)
- Find the maximum value of \(|f''(x)|\) over the interval \([0,1]\text{.}\)
- Assuming \(M=1\text{,}\) how many intervals should you use to approximate \(\displaystyle\int_{0}^{1}f(x) \, d{x}\) to within \(10^{-5}\text{?}\)
- Using the value of \(M\) you found in (b), how many intervals should you use to approximate \(\displaystyle\int_0^1 f(x) \, d{x}\) to within \(10^{-5}\text{?}\)

Approximate the function \(\log x\) with a rational function by approximating the integral \(\displaystyle\int_1^{x\vphantom{\frac{1}{2}}} \frac{1}{t} \, d{t}\) using Simpson's rule. Your rational function \(f(x)\) should approximate \(\log x\) with an error of not more than 0.1 for any \(x\) in the interval \([1,3]\text{.}\)

Using an approximation of the area under the curve \(\dfrac{1}{x^2+1}\text{,}\) show that the constant \(\arctan2\) is in the interval \(\left[\dfrac{\pi}{4}+0.321,\, \dfrac{\pi}{4}+0.323\right]\text{.}\)

You may use without proof the fact that \(\displaystyle\frac{d^{4}}{dx^{4}}\left\{\frac{1}{1+x^2}\right\} = \dfrac{24(5x^4-10x^2+1)}{(x^2+1)^5}\text{.}\) You may use a calculator, but only to add, subtract, multiply, and divide.

- We apologise for being a little sloppy here — but we just want to say that it can be very hard or even impossible to write some integrals as some finite sized expression involving polynomials, exponentials, logarithms and trigonometric functions. We don't want to get into a discussion of computability, though that is a very interesting topic.
- Thankfully it is very easy to write a program to apply the midpoint rule.
- This method is also called the “trapezoid rule” and “trapezium rule”.
- A trapezoid is a four sided polygon, like a rectangle. But, unlike a rectangle, the top and bottom of a trapezoid need not be parallel.
- Simpson's rule is named after the 18th century English mathematician Thomas Simpson, despite its use a century earlier by the German mathematician and astronomer Johannes Kepler. In many German texts the rule is often called Kepler's rule.
- There is an easy generalisation of Simpson's rule that uses cubics instead of parabolas. It is known as Simpson's second rule and Simpson's \(\frac38\) rule. While one can push this approach further (using quartics, quintics etc), it can sometimes lead to larger errors — the interested reader should look up Runge's phenomenon.
- Indeed, even beyond the “real world” of many applications in first year calculus texts, some of the methods we have described are used by actual people (such as ship builders, engineers and surveyors) to estimate areas and volumes of actual objects!
- There is a variant of this trick that works even when you don't know the answer to the integral ahead of time. Suppose that you suspect that the approximation satisfies \(M_n=A+K\tfrac{1}{n^k}\) where \(A\) is the exact value of the integral and suppose that you don't know the values of \(A\text{,}\) \(K\) and \(k\text{.}\) Then \(M_{n}-M_{2n} =K\tfrac{1}{n^k}-K\tfrac{1}{(2n)^k} =K\big(1-\tfrac{1}{2^k}\big)\tfrac{1}{n^k}\) so plotting \(y=\log(M_{n}-M_{2n})\) against \(x=\log n\) gives the straight line \(y=\log \big[K\big(1-\frac{1}{2^k}\big)\big] -kx\text{.}\)
- Now is a good time for a quick revision of logarithms — see “Whirlwind review of logarithms” in Section 2.7 of the CLP-1 text.
- Linear regression is not part of this course as its derivation requires some multivariable calculus. It is a very standard technique in statistics.
- This is our favourite running example of an integral that cannot be evaluated algebraically — we need to use numerical methods.
- The authors tested this empirically.
- We chose this interval so that we didn't have lots of subscripts floating around in the algebra.
- The triangle inequality says that for any real numbers \(x,y\text{,}\) \(|x+y| \leq |x| + |y|\text{.}\)