
1.8: Review and summary



    Here are the important ideas discussed and points made in this booklet.

    • Many different ODE solvers of varying complexity and accuracy were presented. A summary of the methods appears in Table 8.1 below.
    • ODE solvers are broadly split into two categories: explicit and implicit methods. In an explicit method the step forward in time (i.e. the future) is computed using only known values of the function \(y\) (i.e. the present and the past). In an implicit method the step forward is computed using the past, present, and future simultaneously, so the unknown \(y_{n+1}\) appears on both sides of the update formula. Implicit methods therefore require running a rootfinder to make a step; explicit methods do not (see the sketch following this list).
    • Different ODE solvers have different levels of accuracy. The accuracy of a particular method depends upon the stepsize \(h\). Typically, a solver produces more accurate solutions for smaller \(h\), but at the expense of increased runtime. The solver error usually scales as \(O(h^p)\), where \(p\) is the "order" of the solver; higher \(p\) implies better accuracy (a convergence check illustrating this appears after Table 8.1).
    • A different way to categorize ODE solvers is one-step vs. multistep methods. A one-step method takes the step to \(t_{n+1}\) using information available only at time \(t_n\) (and possibly some trial intermediate points). The Runge-Kutta methods are examples of one-step methods. A multistep method takes the step using information from previous times \(t_n\), \(t_{n-1}\), \(t_{n-2}\), etc., and therefore needs a few starting values before its main loop can run (see the start-up sketch after Table 8.1). The Adams-Bashforth methods are examples of explicit linear multistep methods, and the Adams-Moulton methods are implicit linear multistep methods.
    • The stability of a method refers to whether a small perturbation of the solution grows or shrinks as \(t \rightarrow \infty\). Each method has its own "stability domain": the region of the complex \(h\lambda\) plane, for the linear test equation \(y' = \lambda y\), in which the numerical solution remains bounded. A plot of the domain shows which regions of the complex plane correspond to stability or instability (a plotting sketch appears after Table 8.1).
    • Adaptive step size methods estimate the error incurred at each step and adjust the stepsize up or down to maintain a given error tolerance. Matlab’s ode45 is a famous adaptive step size solver. It is based on the Dormand-Prince 4/5 algorithm, a member of the Runge-Kutta family (a minimal usage sketch follows the guide to Matlab codes below).
    • Symplectic integrators are a subset of ODE solvers used in situations where the ODE is known to conserve energy; they keep the computed energy bounded over long integration times instead of letting it drift (see the harmonic-oscillator sketch after Table 8.1).
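    To make the explicit/implicit distinction concrete, here is a minimal sketch of a single forward Euler step and a single backward Euler step. The test problem \(y' = -5y\), the stepsize, and the use of Matlab's fzero as the rootfinder are illustrative choices, not taken from the booklet's accompanying programs.

    % Minimal sketch: one explicit step vs. one implicit step for the
    % illustrative test problem y' = -5*y (not from the booklet's codes).
    f  = @(t, y) -5*y;     % right-hand side of the ODE
    h  = 0.1;              % stepsize
    t0 = 0;  y0 = 1;       % initial condition

    % Explicit (forward Euler): uses only the known value y0 at t0.
    y_explicit = y0 + h*f(t0, y0);

    % Implicit (backward Euler): the unknown appears on both sides, so a
    % rootfinder must solve  y - y0 - h*f(t0+h, y) = 0  for y.
    residual   = @(y) y - y0 - h*f(t0 + h, y);
    y_implicit = fzero(residual, y0);      % y0 serves as the initial guess

    fprintf('forward Euler step:  %.6f\n', y_explicit);
    fprintf('backward Euler step: %.6f\n', y_implicit);

    For this linear problem the implicit step could be solved by hand, but for a general nonlinear \(f\) the rootfinder call is unavoidable.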
    Table 8.1: Summary of the ODE solvers discussed in this booklet.

    Name | Type | Accuracy | Iteration
    Forward Euler | Explicit | \(O(h)\) | \(y_{n+1} = y_n + h f(t_n, y_n)\)
    Backward Euler | Implicit | \(O(h)\) | \(y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})\)
    Heun's method | Explicit, one-step | \(O(h^2)\) | \begin{aligned} s_1 &= h f(t_n, y_n) \\ \tilde{y} &= y_n + s_1 \\ s_2 &= h f(t_n + h, \tilde{y}) \\ y_{n+1} &= y_n + \tfrac{1}{2}(s_1 + s_2) \end{aligned}
    Midpoint rule | Explicit, one-step | \(O(h^2)\) | \begin{aligned} k_1 &= h f(t_n, y_n) \\ k_2 &= h f(t_n + h/2, y_n + k_1/2) \\ y_{n+1} &= y_n + k_2 \end{aligned}
    Fourth-order Runge-Kutta | Explicit, one-step | \(O(h^4)\) | \begin{aligned} k_1 &= h f(t_n, y_n) \\ k_2 &= h f(t_n + h/2, y_n + k_1/2) \\ k_3 &= h f(t_n + h/2, y_n + k_2/2) \\ k_4 &= h f(t_n + h, y_n + k_3) \\ y_{n+1} &= y_n + (k_1 + 2 k_2 + 2 k_3 + k_4)/6 \end{aligned}
    Trapezoidal rule | Implicit | \(O(h^2)\) | \(y_{n+1} = y_n + \frac{h}{2} \left( f_{n+1} + f_n \right)\)
    Second-order Adams-Bashforth | Explicit, multistep | \(O(h^2)\) | \(y_{n+2} = y_{n+1} + h \left( \frac{3}{2} f_{n+1} - \frac{1}{2} f_{n} \right)\)
    Third-order Adams-Bashforth | Explicit, multistep | \(O(h^3)\) | \(y_{n+3} = y_{n+2} + h \left( \frac{23}{12} f_{n+2} - \frac{4}{3} f_{n+1} + \frac{5}{12} f_{n} \right)\)
    Third-order Adams-Moulton | Implicit, multistep | \(O(h^3)\) | \(y_{n+2} = y_{n+1} + h \left( \frac{5}{12} f_{n+2} + \frac{2}{3} f_{n+1} - \frac{1}{12} f_{n} \right)\)
    Symplectic Euler | Semi-implicit (for 2nd-order ODEs) | \(O(h)\) | \begin{aligned} v_{n+1} &= v_n + h g(t_n, u_n) \\ u_{n+1} &= u_n + h f(t_n, v_{n+1}) \end{aligned}
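    To illustrate the \(O(h^p)\) error scaling discussed above, the following sketch integrates the assumed test problem \(y' = -y\), \(y(0) = 1\) (exact solution \(e^{-t}\)) to \(t = 1\) with forward Euler and with fourth-order Runge-Kutta from Table 8.1, halving the stepsize each time. The problem and the stepsizes are illustrative choices; halving \(h\) should roughly halve the Euler error and cut the Runge-Kutta error by a factor of about 16.

    % Sketch: measure the global error at t = 1 as the stepsize h is halved.
    f = @(t, y) -y;                        % assumed test problem y' = -y
    yexact = exp(-1);                      % exact solution at t = 1
    for h = [0.1, 0.05, 0.025]
        N = round(1/h);  yE = 1;  yR = 1;  t = 0;
        for n = 1:N
            % Forward Euler (order 1)
            yE = yE + h*f(t, yE);
            % Classical fourth-order Runge-Kutta (order 4)
            k1 = h*f(t,       yR);
            k2 = h*f(t + h/2, yR + k1/2);
            k3 = h*f(t + h/2, yR + k2/2);
            k4 = h*f(t + h,   yR + k3);
            yR = yR + (k1 + 2*k2 + 2*k3 + k4)/6;
            t  = t + h;
        end
        fprintf('h = %.3f   Euler error = %.2e   RK4 error = %.2e\n', ...
                h, abs(yE - yexact), abs(yR - yexact));
    end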
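    One practical consequence of the one-step/multistep distinction is that a multistep method needs starting values before its main loop can run. The sketch below, again using the assumed test problem \(y' = -y\), bootstraps the second-order Adams-Bashforth iteration from Table 8.1 with a single Heun step.

    % Sketch: second-order Adams-Bashforth needs two starting values, so the
    % first step is taken with a one-step method (Heun's method).
    f = @(t, y) -y;                        % assumed test problem y' = -y
    h = 0.1;  t = 0:h:5;
    y = zeros(size(t));  y(1) = 1;

    % Start-up: one Heun step supplies y(2).
    s1 = h*f(t(1), y(1));
    s2 = h*f(t(1) + h, y(1) + s1);
    y(2) = y(1) + (s1 + s2)/2;

    % Main loop: second-order Adams-Bashforth from Table 8.1.
    for n = 1:numel(t)-2
        y(n+2) = y(n+1) + h*( (3/2)*f(t(n+1), y(n+1)) - (1/2)*f(t(n), y(n)) );
    end

    plot(t, y, '.-', t, exp(-t), '--');    % compare against the exact solution
    legend('AB2', 'exact'); xlabel('t'); ylabel('y(t)');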
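    The stability domain of a method can also be plotted directly. For the linear test equation \(y' = \lambda y\), one forward Euler step multiplies the solution by \(1 + h\lambda\), so the method is stable where \(|1 + h\lambda| \le 1\). The short sketch below shades that region of the complex \(h\lambda\) plane; it is an illustration of the idea, not one of the accompanying programs.

    % Sketch: stability region of forward Euler, the set |1 + h*lambda| <= 1.
    [x, y] = meshgrid(linspace(-3, 1, 400), linspace(-2, 2, 400));
    z = x + 1i*y;                          % z stands for h*lambda
    stable = double(abs(1 + z) <= 1);      % 1 inside the stable region, 0 outside
    contourf(x, y, stable, [0.5 0.5]);     % shade the stable region
    axis equal; grid on;
    xlabel('Re(h\lambda)'); ylabel('Im(h\lambda)');
    title('Stability region of forward Euler');

    Replacing the amplification factor \(1 + z\) with that of another method, for example \((1 + z/2)/(1 - z/2)\) for the trapezoidal rule, produces the corresponding stability plot.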
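    Finally, the symplectic Euler row of Table 8.1 can be exercised on the undamped harmonic oscillator \(u' = v\), \(v' = -u\), whose energy \((u^2 + v^2)/2\) is conserved exactly by the true solution. The oscillator and the stepsize below are illustrative choices; the point of the sketch is that the computed energy stays bounded instead of drifting.

    % Sketch: symplectic Euler for the harmonic oscillator u' = v, v' = -u.
    % In the notation of Table 8.1, f(t, v) = v and g(t, u) = -u.
    h = 0.1;  N = 1000;
    u = zeros(1, N+1);  v = zeros(1, N+1);
    u(1) = 1;  v(1) = 0;                   % start at maximum displacement
    for n = 1:N
        v(n+1) = v(n) + h*(-u(n));         % update v using the old u
        u(n+1) = u(n) + h*v(n+1);          % update u using the *new* v
    end
    E = (u.^2 + v.^2)/2;                   % energy of the computed solution
    plot(0:h:N*h, E); xlabel('t'); ylabel('energy');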

    Guide to Matlab codes

    A set of Matlab programs accompanies this booklet. Each program demonstrates one of the solvers covered in the text. Plots of solutions shown in the text were generated using these Matlab programs. The programs are available online for students to study at https://github.com/brorson/DifferentialEquationsBook.
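    For comparison with the hand-written solvers in the repository, a minimal call to Matlab's built-in adaptive solver ode45, mentioned in the summary above, might look like the following. The test problem \(y' = -y\) and the tolerances are illustrative choices, not taken from the accompanying programs.

    % Sketch: solving y' = -y, y(0) = 1 on [0, 5] with the adaptive solver ode45.
    f = @(t, y) -y;
    opts = odeset('RelTol', 1e-6, 'AbsTol', 1e-9);   % requested error tolerances
    [t, y] = ode45(f, [0 5], 1, opts);
    plot(t, y, '.-');                                % note the nonuniform steps
    xlabel('t'); ylabel('y(t)');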



    This page titled 1.8: Review and summary is shared under a CC BY-SA 3.0 license and was authored, remixed, and/or curated by Stuart Brorson.
