
C.5 The Error Behaviour of the Secant Method


    Let \(f(x)\) have two continuous derivatives, and let \(r\) be any solution of \(f(x)=0\text{.}\) We will now get a pretty good handle on the error behaviour of the secant method near \(r\text{.}\)

    Denote by \(\tilde\varepsilon_n=x_n-r\) the (signed) error in \(x_n\) and by \(\varepsilon_n=|x_n-r|\) the (absolute) error in \(x_n\text{.}\) Then, \(x_n =r+\tilde\varepsilon_n\text{,}\) and, by (C.4.1),

    \begin{align*} \tilde\varepsilon_{n+1} & = \frac{x_{n-1} f(x_n) - x_n f(x_{n-1}) }{f(x_n)-f(x_{n-1})} -r \\ & = \frac{[r+\tilde\varepsilon_{n-1}] f(x_n) - [r+\tilde\varepsilon_n] f(x_{n-1}) } {f(x_n)-f(x_{n-1})} -r \\ & = \frac{\tilde\varepsilon_{n-1} f(x_n) - \tilde\varepsilon_n f(x_{n-1}) } {f(x_n)-f(x_{n-1})} \end{align*}

    By the Taylor expansion (3.4.32) and the mean value theorem (Theorem 2.13.5), and recalling that \(f(r)=0\text{,}\)

    \begin{align*} f(x_n)& = f(r) + f'(r)\tilde\varepsilon_n +\frac{1}{2} f''(c_1) \tilde\varepsilon_n^2 \\ & = f'(r)\tilde\varepsilon_n +\frac{1}{2} f''(c_1) \tilde\varepsilon_n^2 \\ f(x_n)-f(x_{n-1})& = f'(c_2)[x_n-x_{n-1}] \\ & = f'(c_2)[\tilde\varepsilon_n-\tilde\varepsilon_{n-1}] \end{align*}

    for some \(c_1\) between \(r\) and \(x_n\) and some \(c_2\) between \(x_{n-1}\) and \(x_n\text{.}\) So, for \(x_{n-1}\) and \(x_n\) near \(r\text{,}\) \(c_1\) and \(c_2\) also have to be near \(r\) and

    \begin{align*} f(x_n)& \approx f'(r)\tilde\varepsilon_n +\frac{1}{2} f''(r) \tilde\varepsilon_n^2 \\ f(x_{n-1})& \approx f'(r)\tilde\varepsilon_{n-1} +\frac{1}{2} f''(r) \tilde\varepsilon_{n-1}^2 \\ f(x_n)-f(x_{n-1})& \approx f'(r)[\tilde\varepsilon_n-\tilde\varepsilon_{n-1}] \end{align*}

    and

    \begin{align*} \tilde\varepsilon_{n+1} & = \frac{\tilde\varepsilon_{n-1} f(x_n) - \tilde\varepsilon_n f(x_{n-1}) } {f(x_n)-f(x_{n-1})} \\ & \approx \frac{ \tilde\varepsilon_{n-1} [f'(r)\tilde\varepsilon_n +\frac{1}{2} f''(r) \tilde\varepsilon_n^2] - \tilde\varepsilon_n [f'(r)\tilde\varepsilon_{n-1} +\frac{1}{2} f''(r) \tilde\varepsilon_{n-1}^2] } { f'(r)[\tilde\varepsilon_n-\tilde\varepsilon_{n-1}] } \\ & = \frac{\frac{1}{2}\tilde\varepsilon_{n-1}\tilde\varepsilon_nf''(r)[\tilde\varepsilon_n-\tilde\varepsilon_{n-1}]} { f'(r)[\tilde\varepsilon_n-\tilde\varepsilon_{n-1}] } \\ & = \frac{f''(r)} {2f'(r)} \tilde\varepsilon_{n-1}\tilde\varepsilon_n \end{align*}

    Taking absolute values, we have

    \[ \varepsilon_{n+1}\approx K \varepsilon_{n-1}\varepsilon_n\qquad \text{with }K = \left|\frac{f''(r)} {2f'(r)} \right| \tag{E7} \nonumber \]
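    To see (E7) in action, here is a minimal Python sketch. It is not part of the text; the sample equation \(f(x)=x^2-2\) (with root \(r=\sqrt{2}\)) and the starting guesses are our own choices. For this \(f\text{,}\) \(K=\left|\frac{f''(r)}{2f'(r)}\right|=\frac{1}{2\sqrt{2}}\approx 0.3536\text{.}\)

```python
import math

# Secant iteration (C.4.1) on the sample equation f(x) = x^2 - 2,
# whose root is r = sqrt(2).  The function and the starting guesses
# are our own choices, made purely for illustration.
def f(x):
    return x * x - 2.0

r = math.sqrt(2.0)
K = abs(2.0 / (2.0 * 2.0 * r))   # K = |f''(r) / (2 f'(r))|, with f'' = 2, f' = 2r

x = [1.0, 2.0]                   # the starting guesses x_0, x_1
for n in range(1, 6):
    x.append((x[n - 1] * f(x[n]) - x[n] * f(x[n - 1])) / (f(x[n]) - f(x[n - 1])))

# (E7) predicts eps_{n+1} / (eps_{n-1} eps_n) ~ K ~ 0.3536.
eps = [abs(xn - r) for xn in x]
for n in range(1, len(x) - 1):
    print(n, eps[n + 1] / (eps[n - 1] * eps[n]))
```

    The printed ratios hover around \(0.35\text{,}\) as (E7) predicts.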

    We have seen that Newton's method obeys a similar formula — (E3) says that, when \(x_n\) is near \(r\text{,}\) Newton's method obeys \(\varepsilon_{n+1}\approx K\varepsilon_n^2\text{,}\) also with \(K = \left|\frac{f''(r)} {2f'(r)} \right|\text{.}\) As we shall now see, the change from \(\varepsilon_n^2\text{,}\) in \(\varepsilon_{n+1}\approx K\varepsilon_n^2\text{,}\) to \(\varepsilon_{n-1}\varepsilon_n\text{,}\) in \(\varepsilon_{n+1}\approx K\varepsilon_{n-1}\varepsilon_n\text{,}\) does have a substantial impact on the behaviour of \(\varepsilon_n\) for large \(n\text{.}\)

    To see the large \(n\) behaviour, we now iterate (E7). The formulae will look simpler if we multiply (E7) by \(K\) and write \(\delta_n=K\varepsilon_n\text{.}\) Then (E7) becomes \(\delta_{n+1}\approx\delta_{n-1}\delta_n\) (and we have eliminated \(K\)). The first iterations are

    \begin{alignat*}{2} \delta_2& & & \approx \delta_0\delta_1 \\ \delta_3& \approx \delta_1\delta_2 & & \approx \delta_0\delta_1^2 \\ \delta_4& \approx \delta_2\delta_3 & & \approx \delta_0^2\delta_1^3 \\ \delta_5& \approx \delta_3\delta_4 & & \approx \delta_0^3\delta_1^5 \\ \delta_6& \approx \delta_4\delta_5 & & \approx \delta_0^5\delta_1^8 \\ \delta_7& \approx \delta_5\delta_6 & & \approx \delta_0^8\delta_1^{13} \end{alignat*}

    Notice that every \(\delta_n\) is of the form \(\delta_0^{\alpha_n}\delta_1^{\beta_n}\text{.}\) Substituting \(\delta_n=\delta_0^{\alpha_n}\delta_1^{\beta_n}\) into \(\delta_{n+1}\approx\delta_{n-1}\delta_n\) gives

    \[ \delta_0^{\alpha_{n+1}}\delta_1^{\beta_{n+1}} \approx \delta_0^{\alpha_{n-1}}\delta_1^{\beta_{n-1}} \delta_0^{\alpha_n}\delta_1^{\beta_n} \nonumber \]

    and we have

    \[ \alpha_{n+1}=\alpha_{n-1}+\alpha_{n} \qquad \beta_{n+1}=\beta_{n-1}+\beta_{n} \tag{E8} \nonumber \]

    The recursion rule in (E8) is famous. The Fibonacci sequence (which is \(0\text{,}\) \(1\text{,}\) \(1\text{,}\) \(2\text{,}\) \(3\text{,}\) \(5\text{,}\) \(8\text{,}\) \(13\text{,}\) \(\cdots\)) is defined by

    \begin{align*} F_0& =0 \\ F_1& =1 \\ F_n& =F_{n-1}+F_{n-2}\qquad\text{for }n>1 \end{align*}

    So, for \(n\ge 2\text{,}\) \(\alpha_n = F_{n-1}\) and \(\beta_n=F_n\) and

    \[ \delta_n \approx \delta_0^{\alpha_n}\delta_1^{\beta_n} = \delta_0^{F_{n-1}}\delta_1^{F_n} \nonumber \]
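    The closed form for \(\delta_n\) is easy to check numerically. The following sketch (ours, with the arbitrary choices \(\delta_0=0.5\) and \(\delta_1=0.3\)) iterates \(\delta_{n+1}=\delta_{n-1}\delta_n\) directly and compares the result against \(\delta_0^{F_{n-1}}\delta_1^{F_n}\text{.}\)

```python
d0, d1 = 0.5, 0.3                # arbitrary starting values in (0, 1)

# Fibonacci numbers F_0, ..., F_10, built with the recursion in (E8).
F = [0, 1]
for n in range(2, 11):
    F.append(F[n - 1] + F[n - 2])

# Iterate delta_{n+1} = delta_{n-1} delta_n directly.
delta = [d0, d1]
for n in range(1, 10):
    delta.append(delta[n - 1] * delta[n])

# The two columns agree: delta_n = d0^{F_{n-1}} * d1^{F_n}.
for n in range(2, 11):
    print(n, delta[n], d0 ** F[n - 1] * d1 ** F[n])
```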

    One of the known properties of the Fibonacci sequence is that, for large \(n\text{,}\)

    \[ F_n\approx\frac{\varphi^n}{\sqrt{5}}\qquad\text{where } \varphi=\frac{1+\sqrt{5}}{2} \approx 1.61803 \nonumber \]
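    This approximation is very good even for modest \(n\text{:}\) in fact, rounding \(\varphi^n/\sqrt{5}\) to the nearest integer gives \(F_n\) exactly. A quick check (ours):

```python
import math

phi = (1.0 + math.sqrt(5.0)) / 2.0   # the golden ratio

F = [0, 1]
for n in range(2, 15):
    F.append(F[n - 1] + F[n - 2])

# round(phi^n / sqrt(5)) reproduces F_n exactly for each n here.
for n in range(15):
    print(n, F[n], round(phi ** n / math.sqrt(5.0)))
```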

    This \(\varphi\) is the golden ratio. So, for large \(n\text{,}\)

    \begin{align*} K\varepsilon_n & = \delta_n \approx \delta_0^{F_{n-1}}\delta_1^{F_n} \approx \delta_0^{\frac{\varphi^{n-1}}{\sqrt{5}}} \delta_1^{\frac{\varphi^n}{\sqrt{5}}} = \delta_0^{\frac{1}{\sqrt{5}\varphi}\times\varphi^n} \delta_1^{\frac{1}{\sqrt{5}}\times\varphi^n}\\ & = d^{\varphi^n}\qquad\text{where}\quad d=\delta_0^{\frac{1}{\sqrt{5}\,\varphi}}\delta_1^{\frac{1}{\sqrt{5}}}\\ & \approx d^{1.6^n} \end{align*}

    Assuming that \(0\lt \delta_0=K\varepsilon_0\lt 1\) and \(0\lt \delta_1=K\varepsilon_1\lt 1\text{,}\) we will have \(0\lt d\lt 1\text{.}\)

    By way of contrast, for Newton's method, for large \(n\text{,}\)

    \begin{gather*} K\varepsilon_n\approx d^{2^n}\qquad\text{where}\quad d=(K\varepsilon_1)^{1/2} \end{gather*}

    As \(2^n\) grows quite a bit more quickly than \(1.6^n\) (for example, when \(n=5\text{,}\) \(2^n=32\) while \(1.6^n\approx 10.5\text{,}\) and when \(n=10\text{,}\) \(2^n=1024\) while \(1.6^n\approx 110\)), Newton's method homes in on the root quite a bit faster than the secant method, assuming that you start reasonably close to the root.
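    To make the comparison concrete, here is one more sketch (ours, reusing the sample equation \(f(x)=x^2-2\) from above) that prints the error sequences of the two methods side by side.

```python
import math

def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

r = math.sqrt(2.0)

# Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n), from x_0 = 2.
x, newton_errs = 2.0, []
for _ in range(5):
    x = x - f(x) / fprime(x)
    newton_errs.append(abs(x - r))

# Secant method, from x_0 = 1, x_1 = 2.
a, b, secant_errs = 1.0, 2.0, []
for _ in range(6):
    a, b = b, (a * f(b) - b * f(a)) / (f(b) - f(a))
    secant_errs.append(abs(b - r))

print("Newton:", newton_errs)    # error roughly squares at each step
print("secant:", secant_errs)    # error also shrinks fast, but less so
```

    In a typical run the Newton errors reach machine precision a step or two sooner, matching the \(d^{2^n}\) versus \(d^{1.6^n}\) comparison above.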


    This page titled C.5 The Error Behaviour of the Secant Method is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Joel Feldman, Andrew Rechnitzer and Elyse Yeager via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
