# C.1: Richardson Extrapolation

$$\newcommand{\cA}{\mathcal{A}}$$

There are many approximation procedures in which one first picks a step size $$h$$ and then generates an approximation $$A(h)$$ to some desired quantity $$\cA\text{.}$$ For example, $$\cA$$ might be the value of some integral $$\int_a^b f(x)\,\, d{x}\text{.}$$ For the trapezoidal rule with $$n$$ steps, $$\Delta x=\tfrac{b-a}{n}$$ plays the role of the step size. Often the order of the error generated by the procedure is known. This means

$\cA=A(h)+Kh^k+K_1h^{k+1}+K_2h^{k+2}+\ \cdots \tag{E1} \nonumber$

with $$k$$ being some known constant, called the order of the error, and $$K,\ K_1,\ K_2,\ \cdots$$ being some other (usually unknown) constants. If $$A(h)$$ is the approximation to $$\cA=\int_a^b f(x)\,\, d{x}$$ produced by the trapezoidal rule with $$\Delta x=h\text{,}$$ then $$k=2\text{.}$$ If Simpson's rule is used, $$k=4\text{.}$$

Let's first suppose that $$h$$ is small enough that the terms $$K_1h^{k+1}+K_2h^{k+2}+\cdots$$ in (E1) are small enough that dropping them has essentially no impact. (Typically we don't have access to, and don't care about, the exact error; we only care about its order of magnitude. So if $$h$$ is small enough that $$K_1h^{k+1}+K_2h^{k+2}+\cdots$$ is a factor of at least, for example, one hundred smaller than $$Kh^k\text{,}$$ then dropping it would not bother us at all.) This would give

$\cA=A(h)+Kh^k \tag{E2} \nonumber$

Imagine that we know $$k\text{,}$$ but that we do not know $$\cA$$ or $$K\text{,}$$ and think of (E2) as an equation that the unknowns $$\cA$$ and $$K$$ have to solve. It may look like we have one equation in the two unknowns $$\cA$$ and $$K\text{,}$$ but that is not the case, because (E2) is (essentially) true for all sufficiently small choices of $$h\text{.}$$ If we pick some $$h\text{,}$$ say $$h_1\text{,}$$ and use the algorithm to determine $$A(h_1)\text{,}$$ then (E2), with $$h$$ replaced by $$h_1\text{,}$$ gives one equation in the two unknowns $$\cA$$ and $$K\text{.}$$ If we then pick a different $$h\text{,}$$ say $$h_2\text{,}$$ and use the algorithm a second time to determine $$A(h_2)\text{,}$$ then (E2), with $$h$$ replaced by $$h_2\text{,}$$ gives a second equation. The two equations then determine both $$\cA$$ and $$K\text{.}$$

To be more concrete, suppose that we have picked some specific value of $$h\text{,}$$ and have chosen $$h_1=h$$ and $$h_2=\tfrac{h}{2}\text{,}$$ and that we have evaluated $$A(h)$$ and $$A(h/2)\text{.}$$ Then the two equations are

\begin{align*} \cA&=A(h)+Kh^k \tag{E3a}\\ \cA&=A(h/2)+K\big(\tfrac{h}{2}\big)^k \tag{E3b} \end{align*}

It is now easy to solve for both $$K$$ and $$\cA\text{.}$$ To get $$K\text{,}$$ just subtract (E3b) from (E3a), which gives

$0=A(h)-A(h/2)+\big[1-2^{-k}\big]Kh^k \implies K=\frac{A(h/2)-A(h)}{[1-2^{-k}]h^k} \nonumber$

To get $$\cA\text{,}$$ multiply (E3b) by $$2^k$$ and then subtract (E3a), which gives

$\big[2^k-1\big]\cA=2^kA(h/2)-A(h) \implies \cA=\frac{2^kA(h/2)-A(h)}{2^k-1} \nonumber$

The generation of a “new improved” approximation for $$\cA$$ from two $$A(h)$$'s with different values of $$h$$ is called Richardson extrapolation. (It was introduced by the Englishman Lewis Fry Richardson (1881--1953) in 1911.) Here is a summary.

##### Equation C.1.1 Richardson extrapolation

Let $$A(h)$$ be a step size $$h$$ approximation to $$\cA\text{.}$$ If

$\cA=A(h)+K h^k \nonumber$

then

$K=\frac{A(h/2)-A(h)}{[1-2^{-k}]h^k} \tag{E4a} \nonumber$

$\cA=\frac{2^kA(h/2)-A(h)}{2^k-1} \tag{E4b} \nonumber$

This works very well since, by computing $$A(h)$$ for two different $$h$$'s, we can remove the biggest error term in (E1), and so get a much more precise approximation to $$\cA$$ for little additional work.
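As a quick sanity check, here is a short Python sketch (ours, not from the text) that applies the two formulas of Equation C.1.1 to a hypothetical approximation obeying $$\cA=A(h)+Kh^k$$ exactly; the values of $$\cA\text{,}$$ $$K$$ and $$k$$ below are made up for illustration.

```python
def richardson(A, h, k):
    """Recover (cA, K) from A(h) and A(h/2) using the two formulas above."""
    Ah, Ah2 = A(h), A(h / 2)
    K = (Ah2 - Ah) / ((1 - 2 ** (-k)) * h ** k)
    cA = (2 ** k * Ah2 - Ah) / (2 ** k - 1)
    return cA, K

# Hypothetical illustration values. The model A(h) = cA - K*h^k is exactly
# the assumption cA = A(h) + K*h^k with no higher-order error terms.
cA_true, K_true, k = 3.14159, -2.5, 2
A = lambda h: cA_true - K_true * h ** k

cA_est, K_est = richardson(A, h=0.1, k=k)
```

Because the error model holds exactly here, two evaluations of $$A$$ recover both unknowns to machine precision.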

##### Example C.1.2 Richardson extrapolation with the trapezoidal rule

Applying the trapezoidal rule (1.11.6) to the integral $$\cA=\int_0^1\frac{4}{1+x^2}\, d{x}$$ with step sizes $$\frac{1}{8}$$ and $$\frac{1}{16}$$ (i.e. with $$n=8$$ and $$n=16$$) gives, with $$h=\frac{1}{8}\text{,}$$

$A(h)=3.1389884945\qquad A(h/2)=3.1409416120 \nonumber$

So (E4b), with $$k=2\text{,}$$ gives us the “new improved” approximation

$\frac{2^2\times 3.1409416120 -3.1389884945}{2^2-1}=3.1415926512 \nonumber$

We saw in Example 1.11.3 that $$\int_0^1\frac{4}{1+x^2}\, d{x}=\pi\text{,}$$ so this new approximation really is “improved”:

• $$A(1/8)$$ agrees with $$\pi$$ to two decimal places,
• $$A(1/16)$$ agrees with $$\pi$$ to three decimal places and
• the new approximation agrees with $$\pi$$ to eight decimal places.
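The numbers above are easy to reproduce. Here is a minimal Python sketch (the helper name `trapezoid` is ours, not from the text) implementing the trapezoidal rule and the extrapolation (E4b) with $$k=2\text{:}$$

```python
import math

def trapezoid(f, a, b, n):
    """Trapezoidal rule with n steps of size h = (b-a)/n."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * s

f = lambda x: 4 / (1 + x ** 2)
A_h = trapezoid(f, 0, 1, 8)    # A(1/8)  ≈ 3.1389884945
A_h2 = trapezoid(f, 0, 1, 16)  # A(1/16) ≈ 3.1409416120

k = 2  # the trapezoidal rule has error of order h^2
improved = (2 ** k * A_h2 - A_h) / (2 ** k - 1)  # ≈ 3.1415926512
```

The extrapolated value agrees with $$\pi$$ to eight decimal places, as claimed.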

Beware that (E3b), namely $$\cA=A(h/2)+K\big(\tfrac{h}{2}\big)^k\text{,}$$ is saying that $$K\big(\frac{h}{2}\big)^k$$ is (approximately) the error in $$A(h/2)\text{,}$$ not the error in $$\cA\text{.}$$ You cannot get an “even more improved” approximation by using (E4a) to compute $$K$$ and then adding $$K\big(\frac{h}{2}\big)^k$$ to the “new improved” $$\cA$$ of (E4b) — doing so just gives $$\cA+K\big(\tfrac{h}{2}\big)^k\text{,}$$ not a more accurate $$\cA\text{.}$$

##### Example C.1.3 Example 1.11.16 revisited

Suppose again that we wish to use Simpson's rule (1.11.9) to evaluate $$\int_0^1 e^{-x^2}\,\, d{x}$$ to within an accuracy of $$10^{-6}\text{,}$$ but that we do not need the degree of certainty provided by Example 1.11.16. Observe that we need (approximately) that $$|K|h^4 \lt 10^{-6}\text{,}$$ so if we can estimate $$K$$ (using our Richardson trick) then we can estimate the required $$h\text{.}$$ A commonly used strategy, based on this observation, is to

• first apply Simpson's rule twice with some relatively small number of steps and
• then use (E4a), with $$k=4\text{,}$$ to estimate $$K$$ and
• then use the condition $$|K| h^4\le 10^{-6}$$ to determine, approximately, the number of steps required
• and finally apply Simpson's rule with the number of steps just determined.
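Here is a minimal Python sketch of this four-step strategy (the helper `simpson` and the rounding of the step count are our own choices, not from the text), using $$h'=\tfrac{1}{4}$$ and tolerance $$10^{-6}$$ as in the worked computation:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) steps of size h = (b-a)/n."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3

f = lambda x: math.exp(-x * x)
tol, k = 1e-6, 4

# Step 1: two coarse runs with h' = 1/4 and h'/2 = 1/8.
h = 1 / 4
A1, A2 = simpson(f, 0, 1, 4), simpson(f, 0, 1, 8)

# Step 2: estimate K from (E4a).
K = (A2 - A1) / ((1 - 2 ** (-k)) * h ** k)   # ≈ -7.99e-3

# Step 3: smallest n with |K| (1/n)^4 ≤ tol, rounded up to be even.
n = math.ceil((abs(K) / tol) ** 0.25)
n += n % 2                                    # Simpson's rule needs n even

# Step 4: the final run; here n = 10, matching A(1/10) in the text.
A_final = simpson(f, 0, 1, n)
```

Note that the number of steps is chosen from an estimate of $$K\text{,}$$ not a rigorous bound, so the final error should still be checked when one is available.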

Let's implement this strategy. First we estimate $$K$$ by applying Simpson's rule with step sizes $$\tfrac{1}{4}$$ and $$\tfrac{1}{8}\text{.}$$ Writing $$\tfrac{1}{4}=h'\text{,}$$ we get

$A(h')=0.74685538 \qquad A(h'/2)=0.74682612 \nonumber$

so that (E4a), with $$k=4$$ and $$h$$ replaced by $$h'\text{,}$$ yields

$K=\frac{0.74682612 - 0.74685538}{[1-2^{-4}](1/4)^4} =-7.990\times 10^{-3} \nonumber$

We want to use a step size $$h$$ obeying

$|K|h^4\le 10^{-6} \iff 7.990\times 10^{-3}\, h^4\le 10^{-6} \iff h \le\sqrt[4]{\frac{1}{7990}} =\frac{1}{9.45} \nonumber$

like, for example, $$h=\tfrac{1}{10}\text{.}$$ Applying Simpson's rule with $$h=\tfrac{1}{10}$$ gives

$A(1/10) = 0.74682495 \nonumber$

The exact answer, to eight decimal places, is $$0.74682413$$ so the error in $$A(1/10)$$ is indeed just under $$10^{-6}\text{.}$$

Suppose now that we change our minds. We want an accuracy of $$10^{-12}\text{,}$$ rather than $$10^{-6}\text{.}$$ We have already estimated $$K\text{.}$$ So now we want to use a step size $$h$$ obeying

\begin{align*} |K|h^4\le 10^{-12} &\iff 7.99\times 10^{-3}\, h^4\le 10^{-12}\\ &\iff h \le\sqrt[4]{\frac{1}{7.99\times 10^9}} =\frac{1}{299.0} \end{align*}

like, for example, $$h=\tfrac{1}{300}\text{.}$$ Applying Simpson's rule with $$h=\tfrac{1}{300}$$ gives, to fourteen decimal places,

$A(1/300) = 0.74682413281344 \nonumber$

The exact answer, to fourteen decimal places, is $$0.74682413281243$$ so the error in $$A(1/300)$$ is indeed just over $$10^{-12}\text{.}$$
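These last numbers are easy to check, since $$\int_0^1 e^{-x^2}\, d{x}=\tfrac{\sqrt{\pi}}{2}\,\mathrm{erf}(1)$$ and Python's `math.erf` gives that value to machine precision. A short sketch of the check (ours, with our own `simpson` helper):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) steps of size h = (b-a)/n."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3

# Exact value of the integral: (sqrt(pi)/2) * erf(1) = 0.74682413281243...
exact = math.sqrt(math.pi) / 2 * math.erf(1)

A300 = simpson(lambda x: math.exp(-x * x), 0, 1, 300)  # h = 1/300
err = abs(A300 - exact)                                # on the order of 1e-12
```

In double precision the roundoff in a 300-term sum is far below $$10^{-12}\text{,}$$ so the measured error is essentially the Simpson truncation error $$|K|h^4\text{.}$$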

This page titled C.1: Richardson Extrapolation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Joel Feldman, Andrew Rechnitzer and Elyse Yeager via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.