
C.3: Adaptive Quadrature


    Richardson extrapolation is also used to choose the step size so as to achieve some desired degree of accuracy. “Adaptive quadrature” refers to a family of algorithms that use small step sizes in the part of the domain of integration where it is hard to get good accuracy and large step sizes in the part of the domain of integration where it is easy to get good accuracy.

    We'll illustrate the idea using Simpson's rule applied to the integral \(\int_a^b f(x)\ \, d{x}\text{,}\) and assuming that we want the error to be no more than (approximately) some fixed constant \(\varepsilon\text{.}\) For example, \(\varepsilon\) could be \(10^{-6}\text{.}\) Denote by \(S(a',b'\,;\,h')\text{,}\) the answer given when Simpson's rule is applied to the integral \(\int_{a'}^{b'} f(x)\ \, d{x}\) with step size \(h'\text{.}\)
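    The composite rule \(S(a',b'\,;\,h')\) is straightforward to code. Here is a minimal Python sketch (the function name `simpson` is ours, not from the text):

```python
import math

def simpson(f, a, b, h):
    """Composite Simpson's rule S(a, b; h): approximate the integral of f
    over [a, b] with step size h, where (b - a)/h is an even integer."""
    n = round((b - a) / h)
    assert n % 2 == 0, "the number of steps (b - a)/h must be even"
    total = f(a) + f(b)
    for i in range(1, n):
        # interior points get the alternating Simpson weights 4, 2, 4, ..., 4
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * total

# S(0, 1; 1/2) for f(x) = sqrt(x), as in Example C.3.1 below:
print(round(simpson(math.sqrt, 0, 1, 0.5), 8))  # 0.63807119
```

    With \(h=\tfrac{b-a}{2}\) this reduces to the basic three-point rule \(\tfrac{h}{3}\{f(a)+4f(a+h)+f(b)\}\) used in Step 1 below.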

    • Step 1. We start by applying Simpson's rule, combined with Richardson extrapolation so as to get an error estimate, with the largest possible step size \(h\text{.}\) Namely, set \(h=\tfrac{b-a}{2}\) and compute

      \[ f(a)\quad f\big(a+\tfrac{h}{2}\big)\quad f(a+h)=f\big(\tfrac{a+b}{2}\big)\quad f\big(a+\tfrac{3h}{2}\big)\quad f(a+2h)=f(b) \nonumber \]

      Then

      \[ S\big(a,b\,;h\big) =\tfrac{h}{3}\big\{f(a)+4 f\big(a+h\big)+f(b)\big\} \nonumber \]

      and

      \begin{align*} S\big(a,b\,;\tfrac{h}{2}\big) &=\tfrac{h}{6}\big\{f(a)+4 f\big(a+\tfrac{h}{2}\big) +2f\big(\tfrac{a+b}{2}\big) +4 f\big(a+\tfrac{3h}{2}\big) +f(b)\big\}\\ &=S\big(a,\tfrac{a+b}{2}\,;\tfrac{h}{2}\big) +S\big(\tfrac{a+b}{2},b\,;\tfrac{h}{2}\big) \end{align*}

      with

      \begin{align*} S\big(a,\tfrac{a+b}{2}\,;\,\tfrac{h}{2}\big) &=\tfrac{h}{6}\big\{f(a)+4 f\big(a+\tfrac{h}{2}\big) +f\big(\tfrac{a+b}{2}\big)\big\} \\ S\big(\tfrac{a+b}{2},b\,;\,\tfrac{h}{2}\big) &=\tfrac{h}{6}\big\{f\big(\tfrac{a+b}{2}\big)+4 f\big(a+\tfrac{3h}{2}\big) +f(b)\big\} \end{align*}

      Using the Richardson extrapolation formula (E4a) with \(k=4\) gives that the error in \(S\big(a,b\,;\,\tfrac{h}{2}\big)\) is (approximately)

      \[ \big|K\big(\tfrac{h}{2}\big)^4\big|=\tfrac{1}{15}\Big| S\big(a,b\,;\,\tfrac{h}{2}\big) -S\big(a,b\,;\,h\big)\Big| \tag{E7} \nonumber \]

      If this is smaller than \(\varepsilon\text{,}\) we have (approximately) the desired accuracy and stop. (It is very common to build in a bit of a safety margin and require that, for example, \(\big|K\big(\tfrac{h}{2}\big)^4\big|\) be smaller than \(\tfrac{\varepsilon}{2}\) rather than \(\varepsilon\text{.}\))
    • Step 2. If (E7) is larger than \(\varepsilon\text{,}\) we divide the original integral \(I=\int_a^b f(x)\,\, d{x}\) into two “half-sized” integrals, \(I_1=\int_a^{\tfrac{a+b}{2}} f(x)\,\, d{x}\) and \(I_2=\int_{\tfrac{a+b}{2}}^b f(x)\,\, d{x}\) and repeat the procedure of Step 1 on each of them, but with \(h\) replaced by \(\tfrac{h}{2}\) and \(\varepsilon\) replaced by \(\tfrac{\varepsilon}{2}\) — if we can find an approximation, \(\tilde I_1\text{,}\) to \(I_1\) with an error less than \(\tfrac{\varepsilon}{2}\) and an approximation, \(\tilde I_2\text{,}\) to \(I_2\) with an error less than \(\tfrac{\varepsilon}{2}\text{,}\) then \(\tilde I_1+\tilde I_2\) approximates \(I\) with an error less than \(\varepsilon\text{.}\) Here is more detail.
      • If the error in the approximation \(\tilde I_1\) to \(I_1\) and the error in the approximation \(\tilde I_2\) to \(I_2\) are both acceptable, then we use \(\tilde I_1\) as our final approximation to \(I_1\) and we use \(\tilde I_2\) as our final approximation to \(I_2\text{.}\)
      • If the error in the approximation \(\tilde I_1\) to \(I_1\) is acceptable but the error in the approximation \(\tilde I_2\) to \(I_2\) is not acceptable, then we use \(\tilde I_1\) as our final approximation to \(I_1\) but we subdivide the integral \(I_2\text{.}\)
      • If the error in the approximation \(\tilde I_1\) to \(I_1\) is not acceptable but the error in the approximation \(\tilde I_2\) to \(I_2\) is acceptable, then we use \(\tilde I_2\) as our final approximation to \(I_2\) but we subdivide the integral \(I_1\text{.}\)
      • If the error in the approximation \(\tilde I_1\) to \(I_1\) and the error in the approximation \(\tilde I_2\) to \(I_2\) are both not acceptable, then we subdivide both of the integrals \(I_1\) and \(I_2\text{.}\)

      So we adapt the step size as we go.

    • Steps 3, 4, 5, \(\cdots\) Repeat as required.
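    Steps 1 and 2 translate naturally into a recursive function. The following Python sketch illustrates the idea (the names are ours; a production version would also reuse function evaluations between levels and cap the recursion depth):

```python
import math

def adaptive_simpson(f, a, b, eps):
    """Approximate the integral of f over [a, b] with error roughly
    at most eps, using Simpson's rule and the error estimate (E7)."""
    h = (b - a) / 2
    m = a + h                                           # midpoint of [a, b]
    s_h = h / 3 * (f(a) + 4 * f(m) + f(b))              # S(a, b; h)
    s_l = h / 6 * (f(a) + 4 * f(a + h / 2) + f(m))      # S(a, m; h/2)
    s_r = h / 6 * (f(m) + 4 * f(m + h / 2) + f(b))      # S(m, b; h/2)
    if abs((s_l + s_r) - s_h) / 15 < eps:               # estimate (E7)
        return s_l + s_r                                # accept S(a, b; h/2)
    # error too large: recurse on each half, with tolerance eps/2
    return (adaptive_simpson(f, a, m, eps / 2) +
            adaptive_simpson(f, m, b, eps / 2))

# The integral of Example C.3.1 below, with eps = 0.0005:
print(round(adaptive_simpson(math.sqrt, 0, 1, 0.0005), 8))  # 0.66621525
```

    Note that each recursive call halves both the interval and the tolerance, exactly as in Step 2, so the accepted pieces sum to an approximation of \(I\) with error less than \(\varepsilon\text{.}\)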
    Example C.3.1 Adaptive quadrature

    Let's apply adaptive quadrature using Simpson's rule as above with the goal of computing \(\int _0^1\sqrt{x}\ \, d{x}\) with an error of at most \(\varepsilon=0.0005=5\times 10^{-4}\text{.}\) Observe that \(\frac{d}{dx}\sqrt{x}=\frac{1}{2\sqrt{x}}\) blows up as \(x\) tends to zero. The integrand changes very quickly when \(x\) is small. So we will probably need to make the step size small near the limit of integration \(x=0\text{.}\)

    • Step 1 — the interval \([0,1]\text{.}\) (The notation \([0,1]\) stands for the interval \(0\le x\le 1\text{.}\))

      \begin{align*} S(0,1\,;\tfrac{1}{2})&= 0.63807119 \\ S(0,\tfrac{1}{2}\,;\tfrac{1}{4})&= 0.22559223 \\ S(\tfrac{1}{2},1\,;\tfrac{1}{4})&= 0.43093403 \\ \text{error}&=\tfrac{1}{15}\left|S(0,\tfrac{1}{2}\,;\tfrac{1}{4}) +S(\tfrac{1}{2},1\,;\tfrac{1}{4}) -S(0,1\,;\tfrac{1}{2})\right| =0.0012 \\ &\gt\varepsilon =0.0005 \end{align*}

      This is unacceptably large, so we subdivide the interval \([0,1]\) into the two halves \(\big[0,\tfrac{1}{2}\big]\) and \(\big[\tfrac{1}{2},1\big]\) and apply the procedure separately to each half.
    • Step 2a — the interval \([0,\tfrac{1}{2}]\text{.}\)

      \begin{align*} S(0,\tfrac{1}{2}\,;\tfrac{1}{4})&= 0.22559223 \\ S(0,\tfrac{1}{4}\,;\tfrac{1}{8})&= 0.07975890 \\ S(\tfrac{1}{4},\tfrac{1}{2}\,;\tfrac{1}{8})&= 0.15235819 \\ \text{error}&=\tfrac{1}{15}\left|S(0,\tfrac{1}{4}\,;\tfrac{1}{8}) +S(\tfrac{1}{4},\tfrac{1}{2}\,;\tfrac{1}{8}) -S(0,\tfrac{1}{2}\,;\tfrac{1}{4})\right| = 0.00043 \\ & \gt\tfrac{\varepsilon}{2} = 0.00025 \end{align*}

      This error is unacceptably large.
    • Step 2b — the interval \([\tfrac{1}{2},1]\text{.}\)

      \begin{align*} S(\tfrac{1}{2},1\,;\tfrac{1}{4})&= 0.43093403 \\ S(\tfrac{1}{2},\tfrac{3}{4}\,;\tfrac{1}{8})&= 0.19730874 \\ S(\tfrac{3}{4},1\,;\tfrac{1}{8})&= 0.23365345 \\ \text{error}&=\tfrac{1}{15}\left|S(\tfrac{1}{2},\tfrac{3}{4}\,;\tfrac{1}{8}) +S(\tfrac{3}{4},1\,;\tfrac{1}{8}) -S(\tfrac{1}{2},1\,;\tfrac{1}{4})\right| = 0.0000019 \\ &\lt\tfrac{\varepsilon}{2} = 0.00025 \end{align*}

      This error is acceptable.
    • Step 2 resumé. The error for the interval \([\tfrac{1}{2},1]\) is small enough, so we accept

      \[ S(\tfrac{1}{2},1\,;\tfrac{1}{8}) = S(\tfrac{1}{2},\tfrac{3}{4}\,;\tfrac{1}{8}) + S(\tfrac{3}{4},1\,;\tfrac{1}{8}) = 0.43096219 \nonumber \]

      as the approximate value of \(\int_{1/2}^1\sqrt{x}\,\, d{x}\text{.}\)

      The error for the interval \([0,\tfrac{1}{2}]\) is unacceptably large, so we subdivide the interval \([0,\tfrac{1}{2}]\) into the two halves \([0,\tfrac{1}{4}]\) and \([\tfrac{1}{4},\tfrac{1}{2}]\) and apply the procedure separately to each half.

    • Step 3a — the interval \([0,\tfrac{1}{4}]\text{.}\)

      \begin{align*} S(0,\tfrac{1}{4}\,;\tfrac{1}{8})&= 0.07975890 \\ S(0,\tfrac{1}{8}\,;\tfrac{1}{16})&= 0.02819903 \\ S(\tfrac{1}{8},\tfrac{1}{4}\,;\tfrac{1}{16})&= 0.05386675 \\ \text{error}&=\tfrac{1}{15}\left|S(0,\tfrac{1}{8}\,;\tfrac{1}{16}) +S(\tfrac{1}{8},\tfrac{1}{4}\,;\tfrac{1}{16}) -S(0,\tfrac{1}{4}\,;\tfrac{1}{8})\right|\\ &= 0.000153792 \gt\tfrac{\varepsilon}{4} = 0.000125 \end{align*}

      This error is unacceptably large.
    • Step 3b — the interval \([\tfrac{1}{4},\tfrac{1}{2}]\text{.}\)

      \begin{align*} S(\tfrac{1}{4},\tfrac{1}{2}\,;\tfrac{1}{8})&= 0.15235819 \\ S(\tfrac{1}{4},\tfrac{3}{8}\,;\tfrac{1}{16})&= 0.06975918 \\ S(\tfrac{3}{8},\tfrac{1}{2}\,;\tfrac{1}{16})&= 0.08260897 \\ \text{error}&=\tfrac{1}{15}\left|S(\tfrac{1}{4},\tfrac{3}{8}\,;\tfrac{1}{16}) +S(\tfrac{3}{8},\tfrac{1}{2}\,;\tfrac{1}{16}) -S(\tfrac{1}{4},\tfrac{1}{2}\,;\tfrac{1}{8})\right|\\ & = 0.00000066 \lt\tfrac{\varepsilon}{4} = 0.000125 \end{align*}

      This error is acceptable.
    • Step 3 resumé. The error for the interval \([\tfrac{1}{4},\tfrac{1}{2}]\) is small enough, so we accept

      \[ S(\tfrac{1}{4},\tfrac{1}{2}\,;\tfrac{1}{16}) = S(\tfrac{1}{4},\tfrac{3}{8}\,;\tfrac{1}{16}) + S(\tfrac{3}{8},\tfrac{1}{2}\,;\tfrac{1}{16}) = 0.15236814 \nonumber \]

      as the approximate value of \(\int_{1/4}^{1/2}\sqrt{x}\,\, d{x}\text{.}\)

      The error for the interval \([0,\tfrac{1}{4}]\) is unacceptably large, so we subdivide the interval \([0,\tfrac{1}{4}]\) into the two halves \([0,\tfrac{1}{8}]\) and \([\tfrac{1}{8},\tfrac{1}{4}]\) and apply the procedure separately to each half.

    • Step 4a — the interval \([0,\tfrac{1}{8}]\text{.}\)

      \begin{align*} S(0,\tfrac{1}{8}\,;\tfrac{1}{16})&= 0.02819903 \\ S(0,\tfrac{1}{16}\,;\tfrac{1}{32})&= 0.00996986 \\ S(\tfrac{1}{16},\tfrac{1}{8}\,;\tfrac{1}{32})&= 0.01904477 \\ \text{error}&=\tfrac{1}{15}\left|S(0,\tfrac{1}{16}\,;\tfrac{1}{32}) +S(\tfrac{1}{16},\tfrac{1}{8}\,;\tfrac{1}{32}) -S(0,\tfrac{1}{8}\,;\tfrac{1}{16})\right|\\ & = 0.000054 \lt \tfrac{\varepsilon}{8} = 0.0000625 \end{align*}

      This error is acceptable.
    • Step 4b — the interval \([\tfrac{1}{8},\tfrac{1}{4}]\text{.}\)

      \begin{align*} S(\tfrac{1}{8},\tfrac{1}{4}\,;\tfrac{1}{16})&= 0.05386675 \\ S(\tfrac{1}{8},\tfrac{3}{16}\,;\tfrac{1}{32})&= 0.02466359 \\ S(\tfrac{3}{16},\tfrac{1}{4}\,;\tfrac{1}{32})&= 0.02920668 \\ \text{error}&=\tfrac{1}{15}\left|S(\tfrac{1}{8},\tfrac{3}{16}\,;\tfrac{1}{32}) +S(\tfrac{3}{16},\tfrac{1}{4}\,;\tfrac{1}{32}) -S(\tfrac{1}{8},\tfrac{1}{4}\,;\tfrac{1}{16})\right|\\ & = 0.00000024 \lt \tfrac{\varepsilon}{8} = 0.0000625 \end{align*}

      This error is acceptable.
    • Step 4 resumé. The error for the interval \([0,\tfrac{1}{8}]\) is small enough, so we accept

      \[ S(0,\tfrac{1}{8}\,;\tfrac{1}{32}) = S(0,\tfrac{1}{16}\,;\tfrac{1}{32}) + S(\tfrac{1}{16},\tfrac{1}{8}\,;\tfrac{1}{32}) = 0.02901464 \nonumber \]

      as the approximate value of \(\int_0^{1/8}\sqrt{x}\,\, d{x}\text{.}\)

      The error for the interval \([\tfrac{1}{8},\tfrac{1}{4}]\) is small enough, so we accept

      \[ S(\tfrac{1}{8},\tfrac{1}{4}\,;\tfrac{1}{32}) = S(\tfrac{1}{8},\tfrac{3}{16}\,;\tfrac{1}{32}) + S(\tfrac{3}{16},\tfrac{1}{4}\,;\tfrac{1}{32}) = 0.05387027 \nonumber \]

      as the approximate value of \(\int_{1/8}^{1/4}\sqrt{x}\,\, d{x}\text{.}\)

    • Conclusion. The approximate value for \(\int_0^1\sqrt{x}\ \, d{x}\) is

      \begin{align*} & S(0,\tfrac{1}{8}\,;\tfrac{1}{32}) +S(\tfrac{1}{8},\tfrac{1}{4}\,;\tfrac{1}{32}) +S(\tfrac{1}{4},\tfrac{1}{2}\,;\tfrac{1}{16}) +S(\tfrac{1}{2},1\,;\tfrac{1}{8})\\ &\hskip1in =0.66621525 \tag{E8} \end{align*}

    Of course the exact value of \(\int_0^1\sqrt{x}\ \, d{x}=\tfrac{2}{3}\text{,}\) so the actual error in our approximation is

    \[ \tfrac{2}{3}-0.66621525 = 0.00045 \lt \varepsilon = 0.0005 \nonumber \]

    Here is what Simpson's rule gives us when applied with some fixed step sizes.

    \begin{align*} S(0,1\,;\tfrac{1}{8}) &= 0.66307928 \\ S(0,1\,;\tfrac{1}{16}) &= 0.66539819 \\ S(0,1\,;\tfrac{1}{32}) &= 0.66621818 \\ S(0,1\,;\tfrac{1}{64}) &= 0.66650810 \end{align*}

    So to get an error comparable to that in (E8) from Simpson's rule with a fixed step size, we need to use \(h=\frac{1}{32}\text{.}\) In (E8) the step size \(h=\frac{1}{32}\) was used only on the subinterval \(\big[0,\frac{1}{4}\big]\text{.}\)
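    The fixed-step values above are easy to reproduce. Here is a self-contained Python check (the function name `simpson_fixed` is ours, not from the text):

```python
import math

def simpson_fixed(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n even),
    i.e. with fixed step size h = (b - a)/n."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # interior points get the alternating Simpson weights 4, 2, 4, ..., 4
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * total

# Reproduce the table of S(0, 1; 1/n) values and their true errors:
for n in (8, 16, 32, 64):
    s = simpson_fixed(math.sqrt, 0, 1, n)
    print(f"S(0, 1; 1/{n:2d}) = {s:.8f}   error = {2/3 - s:.8f}")
```

    Because of the singularity of \(\frac{d}{dx}\sqrt{x}\) at \(x=0\text{,}\) halving the fixed step size shrinks the error by only a factor of about \(2^{3/2}\approx 2.8\) here, rather than the factor of \(16\) one expects from Simpson's rule on smooth integrands.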


    This page titled C.3: Adaptive Quadrature is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Joel Feldman, Andrew Rechnitzer and Elyse Yeager via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
