2: Root Finding

    The problem is to solve \(f(x)=0\) for \(x\) when an explicit analytical solution is impossible. All methods compute a sequence \(x_{0}, x_{1}, x_{2}, \ldots\) that converges to the root \(x=r\) satisfying \(f(r)=0\).

    Bisection Method

The bisection method is conceptually the simplest method and almost always works. However, it is also the slowest method, and should usually be avoided in favor of the faster methods discussed below.

We first choose \(x_{0}\) and \(x_{1}\) such that \(x_{0}<r<x_{1}\). We say that \(x_{0}\) and \(x_{1}\) bracket the root. With \(f(r)=0\), we want \(f\left(x_{0}\right)\) and \(f\left(x_{1}\right)\) to be of opposite sign, so that \(f\left(x_{0}\right) f\left(x_{1}\right)<0\). We then assign \(x_{2}\) to be the midpoint of \(x_{0}\) and \(x_{1}\), that is, \(x_{2}=\left(x_{0}+x_{1}\right) / 2\), or

    \[x_{2}=x_{1}-\frac{x_{1}-x_{0}}{2} . \nonumber \]

The sign of \(f\left(x_{2}\right)\) is then determined, and the value of \(x_{3}\) is chosen as either the midpoint of \(x_{2}\) and \(x_{0}\) or the midpoint of \(x_{2}\) and \(x_{1}\), depending on whether \(x_{2}\) and \(x_{0}\), or \(x_{2}\) and \(x_{1}\), bracket the root. The root therefore stays bracketed at all times. The algorithm proceeds in this fashion and is typically stopped when the absolute value of the increment to the last best approximation to the root (given above by \(\left|x_{1}-x_{0}\right| / 2\)) is smaller than some required precision.
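A minimal Python sketch of this algorithm (the function name `bisection` and its tolerance and iteration-limit parameters are our own choices, not part of the original text):

```python
def bisection(f, x0, x1, tol=1.0e-8, nmax=100):
    """Find a root of f in the bracket [x0, x1] by repeated halving."""
    if f(x0) * f(x1) >= 0:
        raise ValueError("x0 and x1 must bracket the root")
    for _ in range(nmax):
        x2 = x1 - (x1 - x0) / 2          # midpoint of the current bracket
        if abs(x1 - x0) / 2 < tol:       # increment below required precision
            return x2
        if f(x0) * f(x2) < 0:            # x0 and x2 bracket the root
            x1 = x2
        else:                            # x2 and x1 bracket the root
            x0 = x2
    return x2
```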

    Estimate \(\sqrt{2}=1.41421 \ldots\) using \(x_{0}=1\) and \(x_{1}=2\)

Now \(\sqrt{2}\) is the positive zero of the function \(f(x)=x^{2}-2\). We iterate with \(x_{0}=1\) and \(x_{1}=2\). We have

    \[x_{2}=2-\frac{2-1}{2}=\frac{3}{2}=1.5 \text {. } \nonumber \]

    Now, \(f\left(x_{2}\right)=9 / 4-2=1 / 4>0\) so that \(x_{2}\) and \(x_{0}\) bracket the root. Therefore,

    \[x_{3}=\frac{3}{2}-\frac{\frac{3}{2}-1}{2}=\frac{5}{4}=1.25 \text {. } \nonumber \]

    Now, \(f\left(x_{3}\right)=25 / 16-2=-7 / 16<0\) so that \(x_{3}\) and \(x_{2}\) bracket the root. Therefore,

    \[x_{4}=\frac{5}{4}-\frac{\frac{5}{4}-\frac{3}{2}}{2}=\frac{11}{8}=1.375, \nonumber \]

    and so on.
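These iterates can be reproduced with a few lines of Python (a sketch under the same setup, with \(f(x)=x^{2}-2\)):

```python
# Reproduce the first three bisection iterates for f(x) = x^2 - 2.
f = lambda x: x**2 - 2
x0, x1 = 1.0, 2.0
for _ in range(3):
    x2 = x1 - (x1 - x0) / 2
    print(x2)                        # prints 1.5, 1.25, 1.375
    if f(x0) * f(x2) < 0:            # x0 and x2 bracket the root
        x1 = x2
    else:                            # x2 and x1 bracket the root
        x0 = x2
```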

    Newton’s Method

    This is the fastest method, but requires analytical computation of the derivative of \(f(x)\). If the derivative is known, then this method should be used, although convergence to the desired root is not guaranteed.

    We can derive Newton’s Method from a Taylor series expansion. We again want to construct a sequence \(x_{0}, x_{1}, x_{2}, \ldots\) that converges to the root \(x=r\). Consider the \(x_{n+1}\) member of this sequence, and Taylor series expand \(f\left(x_{n+1}\right)\) about the point \(x_{n}\). We have

    \[f\left(x_{n+1}\right)=f\left(x_{n}\right)+\left(x_{n+1}-x_{n}\right) f^{\prime}\left(x_{n}\right)+\ldots . \nonumber \]

    To determine \(x_{n+1}\), we drop the higher-order terms in the Taylor series, and assume \(f\left(x_{n+1}\right)=0\). Solving for \(x_{n+1}\), we have

    \[x_{n+1}=x_{n}-\frac{f\left(x_{n}\right)}{f^{\prime}\left(x_{n}\right)} . \nonumber \]

    Starting Newton’s Method requires a guess for \(x_{0}\), to be chosen as close as possible to the root \(x=r\).
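A minimal Python sketch of the iteration (the function and parameter names are our own, not from the text; `fprime` is the analytically computed derivative):

```python
def newton(f, fprime, x0, tol=1.0e-12, nmax=100):
    """Newton's Method: iterate x <- x - f(x)/f'(x) from the guess x0."""
    x = x0
    for _ in range(nmax):
        dx = f(x) / fprime(x)        # increment from the Taylor-series step
        x -= dx
        if abs(dx) < tol:            # stop when the increment is small enough
            return x
    return x                         # may not have converged; no guarantee
```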

    Estimate \(\sqrt{2}\) using \(x_{0}=1\)

    Again, we solve \(f(x)=0\), where \(f(x)=x^{2}-2\). To implement Newton’s Method, we use \(f^{\prime}(x)=2 x\). Therefore, Newton’s Method is the iteration

    \[x_{n+1}=x_{n}-\frac{x_{n}^{2}-2}{2 x_{n}} . \nonumber \]

    With our initial guess \(x_{0}=1\), we have

\[\begin{aligned} &x_{1}=1-\frac{-1}{2}=\frac{3}{2}=1.5, \\ &x_{2}=\frac{3}{2}-\frac{\frac{9}{4}-2}{3}=\frac{17}{12}=1.416667, \\ &x_{3}=\frac{17}{12}-\frac{\frac{17^{2}}{12^{2}}-2}{\frac{17}{6}}=\frac{577}{408}=1.414216 . \end{aligned} \nonumber \]
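The exact fractions above can be checked in Python with rational arithmetic (a sketch using the standard `fractions` module):

```python
from fractions import Fraction

x = Fraction(1)                      # initial guess x0 = 1, kept exact
for _ in range(3):
    x = x - (x**2 - 2) / (2 * x)     # Newton iteration for f(x) = x^2 - 2
    print(x, float(x))               # 3/2, 17/12, 577/408
```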

    Secant Method

The Secant Method is the second fastest of the methods considered here, after Newton’s Method, and is most often used when it is not possible to take an analytical derivative of the function \(f(x)\). In place of \(f^{\prime}\left(x_{n}\right)\) in Newton’s Method, we write the finite-difference approximation

    \[f^{\prime}\left(x_{n}\right) \approx \frac{f\left(x_{n}\right)-f\left(x_{n-1}\right)}{x_{n}-x_{n-1}} . \nonumber \]

    Starting the Secant Method requires a guess for both \(x_{0}\) and \(x_{1}\). These values need not bracket the root, but convergence is not guaranteed.
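A minimal Python sketch (again, the function and parameter names are our own choices):

```python
def secant(f, x0, x1, tol=1.0e-12, nmax=100):
    """Secant Method: Newton's iteration with f'(x_n) replaced by a difference quotient."""
    for _ in range(nmax):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:       # increment below required precision
            return x1
    return x1                        # may not have converged; no guarantee
```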

    Estimate \(\sqrt{2}\) using \(x_{0}=1\) and \(x_{1}=2\)

    Again, we solve \(f(x)=0\), where \(f(x)=x^{2}-2\). The Secant Method iterates

    \[\begin{aligned} x_{n+1} &=x_{n}-\frac{\left(x_{n}^{2}-2\right)\left(x_{n}-x_{n-1}\right)}{x_{n}^{2}-x_{n-1}^{2}} \\ &=x_{n}-\frac{x_{n}^{2}-2}{x_{n}+x_{n-1}} . \end{aligned} \nonumber \]

    With \(x_{0}=1\) and \(x_{1}=2\), we have

\[\begin{aligned} &x_{2}=2-\frac{4-2}{3}=\frac{4}{3}=1.33333, \\ &x_{3}=\frac{4}{3}-\frac{\frac{16}{9}-2}{\frac{4}{3}+2}=\frac{7}{5}=1.4, \\ &x_{4}=\frac{7}{5}-\frac{\left(\frac{7}{5}\right)^{2}-2}{\frac{7}{5}+\frac{4}{3}}=\frac{58}{41}=1.41463 . \end{aligned} \nonumber \]
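As with Newton’s Method, these fractions can be checked with exact rational arithmetic (a sketch):

```python
from fractions import Fraction

x0, x1 = Fraction(1), Fraction(2)
for _ in range(3):
    x0, x1 = x1, x1 - (x1**2 - 2) / (x1 + x0)   # simplified secant iteration
    print(x1, float(x1))                        # 4/3, 7/5, 58/41
```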

    A fractal from Newton’s Method

    Consider the complex roots of the equation \(f(z)=0\), where

    \[f(z)=z^{3}-1 . \nonumber \]

These roots are the three cube roots of unity. With

    \[e^{i 2 \pi n}=1, \quad n=0,1,2, \ldots \nonumber \]

the three distinct cube roots of unity are given by

    \[1, e^{i 2 \pi / 3}, e^{i 4 \pi / 3} . \nonumber \]

    With

    \[e^{i \theta}=\cos \theta+i \sin \theta, \nonumber \]

and \(\cos (2 \pi / 3)=-1 / 2, \sin (2 \pi / 3)=\sqrt{3} / 2\), the three cube roots of unity are

    \[r_{1}=1, \quad r_{2}=-\frac{1}{2}+\frac{\sqrt{3}}{2} i, \quad r_{3}=-\frac{1}{2}-\frac{\sqrt{3}}{2} i . \nonumber \]

The interesting idea here is to determine which initial values \(z_{0}\) in the complex plane converge under Newton’s method to each of the three cube roots of unity.

    Newton’s method generalized to the complex plane is

    \[z_{n+1}=z_{n}-\frac{z_{n}^{3}-1}{3 z_{n}^{2}} . \nonumber \]

If the iteration converges to \(r_{1}\), we color \(z_{0}\) red; if to \(r_{2}\), blue; and if to \(r_{3}\), green. The boundaries between the three basins of attraction form a fractal, as the sketch below illustrates.
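One way to generate this picture is to iterate Newton’s method on a whole grid of starting points at once. The sketch below is our own construction using NumPy and Matplotlib, not the author’s code; it colors each \(z_{0}\) by the root it ends up nearest to.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Grid of starting points z0; an even number of points avoids z = 0 exactly.
n = 500
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y

# Apply the Newton iteration for f(z) = z^3 - 1 to every grid point at once.
for _ in range(40):
    Z = Z - (Z**3 - 1) / (3 * Z**2)

# Color each z0 by the nearest cube root of unity: r1 red, r2 blue, r3 green.
roots = np.array([1, np.exp(2j * np.pi / 3), np.exp(4j * np.pi / 3)])
basin = np.argmin(np.abs(Z[..., None] - roots), axis=-1)
plt.imshow(basin, cmap=ListedColormap(["red", "blue", "green"]),
           extent=[-2, 2, -2, 2], origin="lower")
plt.show()
```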

    Order of convergence

    Let \(r\) be the root and \(x_{n}\) be the \(n\)th approximation to the root. Define the error as

    \[\epsilon_{n}=r-x_{n} . \nonumber \]

    If for large \(n\) we have the approximate relationship

    \[\left|\epsilon_{n+1}\right|=k\left|\epsilon_{n}\right|^{p}, \nonumber \]

    with \(k\) a positive constant and \(p \geq 1\), then we say the root-finding numerical method is of order \(p\). Larger values of \(p\) correspond to faster convergence to the root. The order of convergence of bisection is one: the error is reduced by approximately a factor of 2 with each iteration so that

    \[\left|\epsilon_{n+1}\right|=\frac{1}{2}\left|\epsilon_{n}\right| \text {. } \nonumber \]
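This halving can be seen directly in the \(\sqrt{2}\) example above: the width of the bracket, which bounds the error, halves with every step (a minimal sketch):

```python
f = lambda x: x**2 - 2
x0, x1 = 1.0, 2.0
for _ in range(5):
    x2 = x1 - (x1 - x0) / 2
    x0, x1 = (x0, x2) if f(x0) * f(x2) < 0 else (x2, x1)
    print(x1 - x0)                   # 0.5, 0.25, 0.125, 0.0625, 0.03125
```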

    We now find the order of convergence for Newton’s Method and for the Secant Method.

    Newton’s Method

    We start with Newton’s Method

    \[x_{n+1}=x_{n}-\frac{f\left(x_{n}\right)}{f^{\prime}\left(x_{n}\right)}, \nonumber \]

    where the root \(r\) satisfies \(f(r)=0\). Subtracting both sides from \(r\), we have

    \[r-x_{n+1}=r-x_{n}+\frac{f\left(x_{n}\right)}{f^{\prime}\left(x_{n}\right)}, \nonumber \]

    or

    \[\epsilon_{n+1}=\epsilon_{n}+\frac{f\left(x_{n}\right)}{f^{\prime}\left(x_{n}\right)} . \nonumber \]

    We use Taylor series to expand the functions \(f\left(x_{n}\right)\) and \(f^{\prime}\left(x_{n}\right)\) about the root \(r\), using \(f(r)=0\). We have

\[\begin{aligned} f\left(x_{n}\right) &=f(r)+\left(x_{n}-r\right) f^{\prime}(r)+\frac{1}{2}\left(x_{n}-r\right)^{2} f^{\prime \prime}(r)+\ldots \\ &=-\epsilon_{n} f^{\prime}(r)+\frac{1}{2} \epsilon_{n}^{2} f^{\prime \prime}(r)+\ldots ; \\ f^{\prime}\left(x_{n}\right) &=f^{\prime}(r)+\left(x_{n}-r\right) f^{\prime \prime}(r)+\frac{1}{2}\left(x_{n}-r\right)^{2} f^{\prime \prime \prime}(r)+\ldots \\ &=f^{\prime}(r)-\epsilon_{n} f^{\prime \prime}(r)+\frac{1}{2} \epsilon_{n}^{2} f^{\prime \prime \prime}(r)+\ldots . \end{aligned} \nonumber \]

    To make further progress, we will make use of the following standard Taylor series:

    \[\frac{1}{1-\epsilon}=1+\epsilon+\epsilon^{2}+\ldots, \nonumber \]

which converges for \(|\epsilon|<1\). Substituting the Taylor expansions of \(f\left(x_{n}\right)\) and \(f^{\prime}\left(x_{n}\right)\) into the expression for \(\epsilon_{n+1}\), and using the geometric series above, yields

\[\begin{aligned} \epsilon_{n+1} &=\epsilon_{n}+\frac{f\left(x_{n}\right)}{f^{\prime}\left(x_{n}\right)} \\ &=\epsilon_{n}+\frac{-\epsilon_{n} f^{\prime}(r)+\frac{1}{2} \epsilon_{n}^{2} f^{\prime \prime}(r)+\ldots}{f^{\prime}(r)-\epsilon_{n} f^{\prime \prime}(r)+\frac{1}{2} \epsilon_{n}^{2} f^{\prime \prime \prime}(r)+\ldots} \\ &=\epsilon_{n}+\frac{-\epsilon_{n}+\frac{1}{2} \epsilon_{n}^{2} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots}{1-\epsilon_{n} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots} \\ &=\epsilon_{n}+\left(-\epsilon_{n}+\frac{1}{2} \epsilon_{n}^{2} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots\right)\left(1+\epsilon_{n} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots\right) \\ &=\epsilon_{n}+\left(-\epsilon_{n}+\epsilon_{n}^{2}\left(\frac{1}{2} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}-\frac{f^{\prime \prime}(r)}{f^{\prime}(r)}\right)+\ldots\right) \\ &=-\frac{1}{2} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)} \epsilon_{n}^{2}+\ldots \end{aligned} \nonumber \]

    Therefore, we have shown that

    \[\left|\epsilon_{n+1}\right|=k\left|\epsilon_{n}\right|^{2} \nonumber \]

    as \(n \rightarrow \infty\), with

    \[k=\frac{1}{2}\left|\frac{f^{\prime \prime}(r)}{f^{\prime}(r)}\right|, \nonumber \]

    provided \(f^{\prime}(r) \neq 0\). Newton’s method is thus of order 2 at simple roots.
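This quadratic convergence can be checked numerically. For \(f(x)=x^{2}-2\) the constant is \(k=\frac{1}{2}\left|f^{\prime \prime}(r) / f^{\prime}(r)\right|=1 /(2 \sqrt{2}) \approx 0.354\), and the error ratios \(\left|\epsilon_{n+1}\right| /\left|\epsilon_{n}\right|^{2}\) approach it (a sketch):

```python
from math import sqrt

r, x = sqrt(2.0), 1.0
errors = []
for _ in range(5):
    errors.append(abs(r - x))
    x = x - (x**2 - 2) / (2 * x)     # Newton step
for e_n, e_n1 in zip(errors, errors[1:]):
    print(e_n1 / e_n**2)             # approaches k = 1/(2*sqrt(2)) ≈ 0.354
```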

    Secant Method

    Determining the order of the Secant Method proceeds in a similar fashion. We start with

    \[x_{n+1}=x_{n}-\frac{\left(x_{n}-x_{n-1}\right) f\left(x_{n}\right)}{f\left(x_{n}\right)-f\left(x_{n-1}\right)} . \nonumber \]

    We subtract both sides from \(r\) and make use of

    \[\begin{aligned} x_{n}-x_{n-1} &=\left(r-x_{n-1}\right)-\left(r-x_{n}\right) \\ &=\epsilon_{n-1}-\epsilon_{n} \end{aligned} \nonumber \]

    and the Taylor series

    \[\begin{aligned} f\left(x_{n}\right) &=-\epsilon_{n} f^{\prime}(r)+\frac{1}{2} \epsilon_{n}^{2} f^{\prime \prime}(r)+\ldots, \\ f\left(x_{n-1}\right) &=-\epsilon_{n-1} f^{\prime}(r)+\frac{1}{2} \epsilon_{n-1}^{2} f^{\prime \prime}(r)+\ldots, \end{aligned} \nonumber \]

    so that

    \[\begin{aligned} f\left(x_{n}\right)-f\left(x_{n-1}\right) &=\left(\epsilon_{n-1}-\epsilon_{n}\right) f^{\prime}(r)+\frac{1}{2}\left(\epsilon_{n}^{2}-\epsilon_{n-1}^{2}\right) f^{\prime \prime}(r)+\ldots \\ &=\left(\epsilon_{n-1}-\epsilon_{n}\right)\left(f^{\prime}(r)-\frac{1}{2}\left(\epsilon_{n-1}+\epsilon_{n}\right) f^{\prime \prime}(r)+\ldots\right) \end{aligned} \nonumber \]

    We therefore have

\[\begin{aligned} \epsilon_{n+1} &=\epsilon_{n}+\frac{-\epsilon_{n} f^{\prime}(r)+\frac{1}{2} \epsilon_{n}^{2} f^{\prime \prime}(r)+\ldots}{f^{\prime}(r)-\frac{1}{2}\left(\epsilon_{n-1}+\epsilon_{n}\right) f^{\prime \prime}(r)+\ldots} \\ &=\epsilon_{n}-\epsilon_{n} \frac{1-\frac{1}{2} \epsilon_{n} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots}{1-\frac{1}{2}\left(\epsilon_{n-1}+\epsilon_{n}\right) \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots} \\ &=\epsilon_{n}-\epsilon_{n}\left(1-\frac{1}{2} \epsilon_{n} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots\right)\left(1+\frac{1}{2}\left(\epsilon_{n-1}+\epsilon_{n}\right) \frac{f^{\prime \prime}(r)}{f^{\prime}(r)}+\ldots\right) \\ &=-\frac{1}{2} \frac{f^{\prime \prime}(r)}{f^{\prime}(r)} \epsilon_{n-1} \epsilon_{n}+\ldots, \end{aligned} \nonumber \]

    or to leading order

    \[\left|\epsilon_{n+1}\right|=\frac{1}{2}\left|\frac{f^{\prime \prime}(r)}{f^{\prime}(r)}\right|\left|\epsilon_{n-1}\right|\left|\epsilon_{n}\right| . \nonumber \]

    The order of convergence is not yet obvious from this equation, but should be less than quadratic because \(\left|\epsilon_{n-1}\right|>\left|\epsilon_{n}\right|\). To determine the scaling law we look for a solution of the form

    \[\left|\epsilon_{n+1}\right|=k\left|\epsilon_{n}\right|^{p} . \nonumber \]

    From this ansatz, we also have

    \[\left|\epsilon_{n}\right|=k\left|\epsilon_{n-1}\right|^{p}, \nonumber \]

    and therefore

    \[\left|\epsilon_{n+1}\right|=k^{p+1}\left|\epsilon_{n-1}\right|^{p^{2}} . \nonumber \]

Substitution into the leading-order error relation above results in

    \[k^{p+1}\left|\epsilon_{n-1}\right|^{p^{2}}=\frac{k}{2}\left|\frac{f^{\prime \prime}(r)}{f^{\prime}(r)}\right|\left|\epsilon_{n-1}\right|^{p+1} . \nonumber \]

    Equating the coefficient and the power of \(\epsilon_{n-1}\) results in

    \[k^{p}=\frac{1}{2}\left|\frac{f^{\prime \prime}(r)}{f^{\prime}(r)}\right|, \nonumber \]

    and

    \[p^{2}=p+1 . \nonumber \]

    The order of convergence of the Secant Method, given by \(p\), is therefore the positive root of the quadratic equation \(p^{2}-p-1=0\), or

    \[p=\frac{1+\sqrt{5}}{2} \approx 1.618 \text {, } \nonumber \]

which, coincidentally, is the famous irrational number known as the Golden Ratio, denoted by the symbol \(\Phi\). We see that the Secant Method has an order of convergence lying between those of the Bisection Method and Newton’s Method.
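The exponent \(p\) can be estimated numerically from three successive errors via \(p \approx \log \left(\left|\epsilon_{n+1}\right| /\left|\epsilon_{n}\right|\right) / \log \left(\left|\epsilon_{n}\right| /\left|\epsilon_{n-1}\right|\right)\). In the \(\sqrt{2}\) example the estimates are noisy at first but drift toward \(\Phi \approx 1.618\) (a sketch):

```python
from math import sqrt, log

r = sqrt(2.0)
x0, x1 = 1.0, 2.0
errors = [abs(r - x0), abs(r - x1)]
for _ in range(5):
    x0, x1 = x1, x1 - (x1**2 - 2) / (x1 + x0)   # secant step
    errors.append(abs(r - x1))
for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    print(log(e2 / e1) / log(e1 / e0))          # drifts toward p ≈ 1.618
```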

