
7.2: Numerical Methods - Initial Value Problem


    We begin with the simple Euler method, then discuss the more sophisticated Runge-Kutta methods, and conclude with adaptive Runge-Kutta methods of Fehlberg type, such as the Dormand-Prince method implemented in the MATLAB function ode45.m. Our differential equations are for \(x=x(t)\), where the time \(t\) is the independent variable, and we will make use of the notation \(\dot{x}=d x / d t\). This notation is still widely used by physicists and descends directly from the notation originally used by Newton.

    7.2.1. Euler method

    The Euler method is the most straightforward method to integrate a differential equation. Consider the first-order differential equation

    \[\dot{x}=f(t, x), \nonumber \]

    with the initial condition \(x(0)=x_{0}\). Define \(t_{n}=n \Delta t\) and \(x_{n}=x\left(t_{n}\right)\). A Taylor series expansion of \(x_{n+1}\) results in

    \[\begin{aligned} x_{n+1} &=x\left(t_{n}+\Delta t\right) \\ &=x\left(t_{n}\right)+\Delta t \dot{x}\left(t_{n}\right)+\mathrm{O}\left(\Delta t^{2}\right) \\ &=x\left(t_{n}\right)+\Delta t f\left(t_{n}, x_{n}\right)+\mathrm{O}\left(\Delta t^{2}\right) \end{aligned} \nonumber \]

    The Euler Method is therefore written as

    \[x_{n+1}=x_{n}+\Delta t f\left(t_{n}, x_{n}\right) \nonumber \]

    We say that the Euler method steps forward in time using a time-step \(\Delta t\), starting from the initial value \(x_{0}=x(0)\). The local error of the Euler Method is \(\mathrm{O}\left(\Delta t^{2}\right)\). The global error, however, incurred when integrating to a time \(T\), is a factor of \(1 / \Delta t\) larger and is given by \(\mathrm{O}(\Delta t)\). It is therefore customary to call the Euler Method a first-order method.
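    As a concrete sketch (in Python rather than MATLAB, with function and variable names of my own choosing), the Euler method and its first-order convergence can be checked on the test problem \(\dot{x}=x\), \(x(0)=1\), whose exact solution at \(t=1\) is \(e\):

```python
import math

def euler(f, t0, x0, dt, nsteps):
    """Integrate x' = f(t, x) forward in time with the Euler method."""
    t, x = t0, x0
    for _ in range(nsteps):
        x = x + dt * f(t, x)   # x_{n+1} = x_n + dt * f(t_n, x_n)
        t = t + dt
    return x

f = lambda t, x: x             # test problem: x' = x, x(0) = 1

x_coarse = euler(f, 0.0, 1.0, 0.01, 100)     # dt = 0.01, integrate to t = 1
x_fine   = euler(f, 0.0, 1.0, 0.001, 1000)   # dt = 0.001, integrate to t = 1

err_coarse = abs(x_coarse - math.e)
err_fine   = abs(x_fine - math.e)
# First-order global error: shrinking dt by a factor of 10
# shrinks the error by roughly a factor of 10.
```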

    7.2.2. Modified Euler method

    This method is of a type called a predictor-corrector method. It is also the simplest of the Runge-Kutta methods. As before, we want to solve (7.3). The idea is to average the value of \(\dot{x}\) at the beginning and end of the time step. That is, we would like to modify the Euler method and write

    \[x_{n+1}=x_{n}+\frac{1}{2} \Delta t\left(f\left(t_{n}, x_{n}\right)+f\left(t_{n}+\Delta t, x_{n+1}\right)\right) \nonumber \]

    The obvious problem with this formula is that the unknown value \(x_{n+1}\) appears on the right-hand-side. We can, however, estimate this value, in what is called the predictor step. For the predictor step, we use the Euler method to find

    \[x_{n+1}^{p}=x_{n}+\Delta t f\left(t_{n}, x_{n}\right) \nonumber \]

    The corrector step then becomes

    \[x_{n+1}=x_{n}+\frac{1}{2} \Delta t\left(f\left(t_{n}, x_{n}\right)+f\left(t_{n}+\Delta t, x_{n+1}^{p}\right)\right) \nonumber \]

    The Modified Euler Method can be rewritten in the following form that we will later identify as a Runge-Kutta method:

    \[\begin{aligned} k_{1} &=\Delta t f\left(t_{n}, x_{n}\right) \\ k_{2} &=\Delta t f\left(t_{n}+\Delta t, x_{n}+k_{1}\right) \\ x_{n+1} &=x_{n}+\frac{1}{2}\left(k_{1}+k_{2}\right) \end{aligned} \nonumber \]
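    A minimal sketch of this \(k_{1}\)/\(k_{2}\) form in Python (the function name and the test problem \(\dot{x}=x\) are my additions):

```python
import math

def modified_euler_step(f, t, x, dt):
    """One predictor-corrector (modified Euler) step in k1/k2 form."""
    k1 = dt * f(t, x)            # slope at the start of the step (predictor)
    k2 = dt * f(t + dt, x + k1)  # slope at the predicted endpoint
    return x + 0.5 * (k1 + k2)   # average the two slopes (corrector)

# Integrate x' = x, x(0) = 1 up to t = 1; the exact answer is e.
t, x_me, dt = 0.0, 1.0, 0.01
for _ in range(100):
    x_me = modified_euler_step(lambda t, x: x, t, x_me, dt)
    t += dt
# Second-order accurate: the error is far smaller than Euler's at the same dt.
```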

    7.2.3. Second-order Runge-Kutta methods

    We now derive all second-order Runge-Kutta methods. Higher-order methods can be similarly derived, but require substantially more algebra.

    We consider the differential equation given by (7.3). A general second-order Runge-Kutta method may be written in the form

    \[\begin{aligned} k_{1} &=\Delta t f\left(t_{n}, x_{n}\right) \\ k_{2} &=\Delta t f\left(t_{n}+\alpha \Delta t, x_{n}+\beta k_{1}\right) \\ x_{n+1} &=x_{n}+a k_{1}+b k_{2} \end{aligned} \nonumber \]

    with \(\alpha, \beta, a\) and \(b\) constants that define the particular second-order Runge-Kutta method. These constants are to be constrained by setting the local error of the second-order Runge-Kutta method to be \(\mathrm{O}\left(\Delta t^{3}\right)\). Intuitively, we might guess that two of the constraints will be \(a+b=1\) and \(\alpha=\beta\).

    We compute the Taylor series of \(x_{n+1}\) directly, and from the Runge-Kutta method, and require them to be the same to order \(\Delta t^{2}\). First, we compute the Taylor series of \(x_{n+1}\). We have

    \[\begin{aligned} x_{n+1} &=x\left(t_{n}+\Delta t\right) \\ &=x\left(t_{n}\right)+\Delta t \dot{x}\left(t_{n}\right)+\frac{1}{2}(\Delta t)^{2} \ddot{x}\left(t_{n}\right)+\mathrm{O}\left(\Delta t^{3}\right) \end{aligned} \nonumber \]

    Now,

    \[\dot{x}\left(t_{n}\right)=f\left(t_{n}, x_{n}\right) . \nonumber \]

    The second derivative is more complicated and requires partial derivatives. We have

    \[\begin{aligned} \ddot{x}\left(t_{n}\right) &=\left.\frac{d}{d t} f(t, x(t))\right|_{t=t_{n}} \\ &=f_{t}\left(t_{n}, x_{n}\right)+\dot{x}\left(t_{n}\right) f_{x}\left(t_{n}, x_{n}\right) \\ &=f_{t}\left(t_{n}, x_{n}\right)+f\left(t_{n}, x_{n}\right) f_{x}\left(t_{n}, x_{n}\right) \end{aligned} \nonumber \]

    Therefore,

    \[x_{n+1}=x_{n}+\Delta t f\left(t_{n}, x_{n}\right)+\frac{1}{2}(\Delta t)^{2}\left(f_{t}\left(t_{n}, x_{n}\right)+f\left(t_{n}, x_{n}\right) f_{x}\left(t_{n}, x_{n}\right)\right) \nonumber \]

    Second, we compute \(x_{n+1}\) from the Runge-Kutta method given by (7.5). Substituting in \(k_{1}\) and \(k_{2}\), we have

    \[x_{n+1}=x_{n}+a \Delta t f\left(t_{n}, x_{n}\right)+b \Delta t f\left(t_{n}+\alpha \Delta t, x_{n}+\beta \Delta t f\left(t_{n}, x_{n}\right)\right) . \nonumber \]

    We Taylor series expand using

    \[\begin{aligned} f\left(t_{n}+\alpha \Delta t, x_{n}+\beta \Delta t f\left(t_{n}, x_{n}\right)\right) & \\ =f\left(t_{n}, x_{n}\right)+\alpha \Delta t f_{t}\left(t_{n}, x_{n}\right)+\beta \Delta t f\left(t_{n}, x_{n}\right) f_{x}\left(t_{n}, x_{n}\right)+\mathrm{O}\left(\Delta t^{2}\right) \end{aligned} \nonumber \]

    The Runge-Kutta formula is therefore

    \[\begin{aligned} x_{n+1}=x_{n}+(a+b) & \Delta t f\left(t_{n}, x_{n}\right) \\ &+(\Delta t)^{2}\left(\alpha b f_{t}\left(t_{n}, x_{n}\right)+\beta b f\left(t_{n}, x_{n}\right) f_{x}\left(t_{n}, x_{n}\right)\right)+\mathrm{O}\left(\Delta t^{3}\right) \end{aligned} \nonumber \]

    Comparing (7.6) and (7.7), we find

    \[\begin{aligned} a+b &=1 \\ \alpha b &=1 / 2 \\ \beta b &=1 / 2 \end{aligned} \nonumber \]

    There are three equations for four parameters, so there exists a one-parameter family of second-order Runge-Kutta methods.

    The Modified Euler Method given by (7.4) corresponds to \(\alpha=\beta=1\) and \(a=\) \(b=1 / 2\). Another second-order Runge-Kutta method, called the Midpoint Method, corresponds to \(\alpha=\beta=1 / 2, a=0\) and \(b=1 .\) This method is written as

    \[\begin{aligned} k_{1} &=\Delta t f\left(t_{n}, x_{n}\right) \\ k_{2} &=\Delta t f\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{1}\right) \\ x_{n+1} &=x_{n}+k_{2} \end{aligned} \nonumber \]
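    The one-parameter family can be sketched by choosing \(\alpha\) and deriving the remaining constants from the constraints \(\beta=\alpha\), \(b=1/(2\alpha)\), \(a=1-b\). In this Python sketch (the helper names are my own), \(\alpha=1\) recovers the modified Euler method and \(\alpha=1/2\) the midpoint method; the test problem \(\dot{x}=t x\), with exact solution \(x(t)=e^{t^{2}/2}\), exercises the \(f_{t}\) term:

```python
import math

def rk2_step(f, t, x, dt, alpha):
    """One step of the general second-order Runge-Kutta family.
    The constraints a + b = 1 and alpha*b = beta*b = 1/2 give
    beta = alpha, b = 1/(2*alpha), a = 1 - b."""
    b = 1.0 / (2.0 * alpha)
    a = 1.0 - b
    k1 = dt * f(t, x)
    k2 = dt * f(t + alpha * dt, x + alpha * k1)
    return x + a * k1 + b * k2

def integrate(f, x0, dt, n, alpha):
    t, x = 0.0, x0
    for _ in range(n):
        x = rk2_step(f, t, x, dt, alpha)
        t += dt
    return x

# x' = t*x, x(0) = 1, so x(1) = exp(1/2).  Every member of the
# family should be second-order accurate.
f = lambda t, x: t * x
errs = [abs(integrate(f, 1.0, 0.01, 100, al) - math.exp(0.5))
        for al in (0.5, 1.0, 0.75)]
```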

    7.2.4. Higher-order Runge-Kutta methods

    The general second-order Runge-Kutta method was given by (7.5). The general form of the third-order method is given by

    \[\begin{aligned} k_{1} &=\Delta t f\left(t_{n}, x_{n}\right) \\ k_{2} &=\Delta t f\left(t_{n}+\alpha \Delta t, x_{n}+\beta k_{1}\right) \\ k_{3} &=\Delta t f\left(t_{n}+\gamma \Delta t, x_{n}+\delta k_{1}+\epsilon k_{2}\right) \\ x_{n+1} &=x_{n}+a k_{1}+b k_{2}+c k_{3} \end{aligned} \nonumber \]

    The following constraints on the constants can be guessed: \(\alpha=\beta, \gamma=\delta+\epsilon\), and \(a+b+c=1\). Remaining constraints need to be derived.

    The fourth-order method has stages \(k_{1}, k_{2}, k_{3}\) and \(k_{4}\). The fifth-order method requires stages up to \(k_{6}\). The table below gives the order of the method and the minimum number of stages required.

    order               2   3   4   5   6   7   8
    minimum # stages    2   3   4   6   7   9   11

    Because of the jump in the number of stages required between the fourth-order and fifth-order method, the fourth-order Runge-Kutta method has some appeal. The general fourth-order method starts with 13 constants, and one then finds 11 constraints. A particularly simple fourth-order method that has been widely used is given by

    \[\begin{aligned} k_{1} &=\Delta t f\left(t_{n}, x_{n}\right) \\ k_{2} &=\Delta t f\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{1}\right) \\ k_{3} &=\Delta t f\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{2}\right) \\ k_{4} &=\Delta t f\left(t_{n}+\Delta t, x_{n}+k_{3}\right) \\ x_{n+1} &=x_{n}+\frac{1}{6}\left(k_{1}+2 k_{2}+2 k_{3}+k_{4}\right) \end{aligned} \nonumber \]
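    A sketch of this classical fourth-order method in Python (names are my own), together with a check that halving the step size reduces the global error by roughly \(2^{4}=16\):

```python
import math

def rk4_step(f, t, x, dt):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = dt * f(t, x)
    k2 = dt * f(t + 0.5 * dt, x + 0.5 * k1)
    k3 = dt * f(t + 0.5 * dt, x + 0.5 * k2)
    k4 = dt * f(t + dt, x + k3)
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def solve(dt, n):
    """Integrate x' = x, x(0) = 1 for n steps of size dt."""
    t, x = 0.0, 1.0
    for _ in range(n):
        x = rk4_step(lambda t, x: x, t, x, dt)
        t += dt
    return x

err1 = abs(solve(0.1, 10) - math.e)    # dt = 0.1
err2 = abs(solve(0.05, 20) - math.e)   # dt = 0.05
# Fourth-order: halving dt should shrink the error by roughly 16.
```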

    7.2.5. Adaptive Runge-Kutta Methods

    As in adaptive integration, it is useful to devise an ODE integrator that automatically finds the appropriate \(\Delta t\). The Dormand-Prince method, which is implemented in MATLAB’s ode45.m, finds the appropriate step size by comparing the results of a fifth-order and a fourth-order method. It requires six function evaluations per time step, and constructs both a fifth-order and a fourth-order method from these same function evaluations.

    Suppose the fifth-order method finds \(x_{n+1}\) with local error \(\mathrm{O}\left(\Delta t^{6}\right)\), and the fourth-order method finds \(x_{n+1}^{\prime}\) with local error \(\mathrm{O}\left(\Delta t^{5}\right)\). Let \(\varepsilon\) be the desired error tolerance of the method, and let \(e\) be the actual error. We can estimate \(e\) from the difference between the fifth-and fourth-order methods; that is,

    \[e=\left|x_{n+1}-x_{n+1}^{\prime}\right| \nonumber \]

    Now \(e\) is of \(\mathrm{O}\left(\Delta t^{5}\right)\), where \(\Delta t\) is the step size taken. Let \(\Delta \tau\) be the estimated step size required to get the desired error \(\varepsilon\). Then we have

    \[e / \varepsilon=(\Delta t)^{5} /(\Delta \tau)^{5} \nonumber \]

    or solving for \(\Delta \tau\),

    \[\Delta \tau=\Delta t\left(\frac{\varepsilon}{e}\right)^{1 / 5} \nonumber \]

    On the one hand, if \(e<\varepsilon\), then we accept \(x_{n+1}\) and do the next time step using the larger value of \(\Delta \tau\). On the other hand, if \(e>\varepsilon\), then we reject the integration step and redo the time step using the smaller value of \(\Delta \tau\). In practice, one usually increases the time step slightly less and decreases the time step slightly more to prevent the waste of too many failed time steps.
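    A toy illustration of this accept/reject logic in Python (not the Dormand-Prince pair: to keep the sketch short it pairs a first-order Euler step against a second-order modified Euler step, so the error estimate is \(\mathrm{O}(\Delta t^{2})\) and the step-size rule uses the exponent \(1/2\) instead of \(1/5\); the safety factor of 0.9 is an assumption):

```python
import math

def adaptive_solve(f, t0, x0, t_end, dt, eps, safety=0.9):
    """Toy adaptive integrator.  A modified Euler step gives x_{n+1};
    a plain Euler step gives the comparison value, so the error
    estimate e is O(dt^2) and the rescaling exponent is 1/2."""
    t, x = t0, x0
    while t < t_end:
        dt = min(dt, t_end - t)        # do not step past t_end
        k1 = dt * f(t, x)
        k2 = dt * f(t + dt, x + k1)
        x_high = x + 0.5 * (k1 + k2)   # second-order result
        x_low = x + k1                 # first-order (Euler) result
        e = abs(x_high - x_low)        # local error estimate
        if e <= eps:                   # accept the step
            t += dt
            x = x_high
        # Accepted or not, rescale dt toward the target error eps,
        # with a safety factor to avoid repeated rejections.
        dt = safety * dt * (eps / e) ** 0.5 if e > 0 else 2 * dt
    return x

# x' = x, x(0) = 1 up to t = 1; the exact answer is e.
x_adapt = adaptive_solve(lambda t, x: x, 0.0, 1.0, 1.0, 0.1, 1e-6)
```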

    7.2.6. System of differential equations

    Our numerical methods can be easily adapted to solve higher-order differential equations, or equivalently, a system of differential equations. First, we show how a second-order differential equation can be reduced to two first-order equations. Consider

    \[\ddot{x}=f(t, x, \dot{x}) . \nonumber \]

    This second-order equation can be rewritten as two first-order equations by defining \(u=\dot{x} .\) We then have the system

    \[\begin{aligned} &\dot{x}=u, \\ &\dot{u}=f(t, x, u) . \end{aligned} \nonumber \]

    This trick also works for higher-order equations. For another example, the third-order equation

    \[\dddot x=f(t, x, \dot{x}, \ddot{x}), \nonumber \]

    can be written as

    \[\begin{aligned} &\dot{x}=u \\ &\dot{u}=v \\ &\dot{v}=f(t, x, u, v) . \end{aligned} \nonumber \]

    Now, we show how to generalize Runge-Kutta methods to a system of differential equations. As an example, consider the following system of two odes,

    \[\begin{aligned} &\dot{x}=f(t, x, y), \\ &\dot{y}=g(t, x, y), \end{aligned} \nonumber \]

    with the initial conditions \(x(0)=x_{0}\) and \(y(0)=y_{0}\). The generalization of the commonly used fourth-order Runge-Kutta method would be

    \[\begin{aligned} k_{1} &=\Delta t f\left(t_{n}, x_{n}, y_{n}\right) \\ l_{1} &=\Delta t g\left(t_{n}, x_{n}, y_{n}\right) \\ k_{2} &=\Delta t f\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{1}, y_{n}+\frac{1}{2} l_{1}\right) \\ l_{2} &=\Delta t g\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{1}, y_{n}+\frac{1}{2} l_{1}\right) \\ k_{3} &=\Delta t f\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{2}, y_{n}+\frac{1}{2} l_{2}\right) \\ l_{3} &=\Delta t g\left(t_{n}+\frac{1}{2} \Delta t, x_{n}+\frac{1}{2} k_{2}, y_{n}+\frac{1}{2} l_{2}\right) \\ k_{4} &=\Delta t f\left(t_{n}+\Delta t, x_{n}+k_{3}, y_{n}+l_{3}\right) \\ l_{4} &=\Delta t g\left(t_{n}+\Delta t, x_{n}+k_{3}, y_{n}+l_{3}\right) \\ x_{n+1} &=x_{n}+\frac{1}{6}\left(k_{1}+2 k_{2}+2 k_{3}+k_{4}\right) \\ y_{n+1} &=y_{n}+\frac{1}{6}\left(l_{1}+2 l_{2}+2 l_{3}+l_{4}\right) \end{aligned} \nonumber \]
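    The fourth-order method for a system of two equations can be sketched in Python (the helper name is my own; the harmonic-oscillator test problem \(\dot{x}=y\), \(\dot{y}=-x\), with \(x(0)=1\), \(y(0)=0\) and exact solution \(x(t)=\cos t\), is my addition):

```python
import math

def rk4_system_step(f, g, t, x, y, dt):
    """One fourth-order Runge-Kutta step for the system
    x' = f(t, x, y), y' = g(t, x, y): the (k, l) stages are
    computed in pairs, each pair using the previous pair."""
    k1 = dt * f(t, x, y)
    l1 = dt * g(t, x, y)
    k2 = dt * f(t + 0.5 * dt, x + 0.5 * k1, y + 0.5 * l1)
    l2 = dt * g(t + 0.5 * dt, x + 0.5 * k1, y + 0.5 * l1)
    k3 = dt * f(t + 0.5 * dt, x + 0.5 * k2, y + 0.5 * l2)
    l3 = dt * g(t + 0.5 * dt, x + 0.5 * k2, y + 0.5 * l2)
    k4 = dt * f(t + dt, x + k3, y + l3)
    l4 = dt * g(t + dt, x + k3, y + l3)
    x_new = x + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    y_new = y + (l1 + 2 * l2 + 2 * l3 + l4) / 6.0
    return x_new, y_new

# Harmonic oscillator: x'' = -x rewritten as x' = y, y' = -x.
f = lambda t, x, y: y
g = lambda t, x, y: -x
t, x, y, dt = 0.0, 1.0, 0.0, 0.01
for _ in range(628):                    # integrate to t = 6.28
    x, y = rk4_system_step(f, g, t, x, y, dt)
    t += dt
# x should track cos(t) closely, and x^2 + y^2 should stay near 1.
```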


    This page titled 7.2: Numerical Methods - Initial Value Problem is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Jeffrey R. Chasnov via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.