1.6: Linear multistep methods


The canonical form of a first order ODE is given in [eq:6.1]. Until now, the ODE solver algorithms we have examined started from approximating the derivative operator in [eq:6.1], then building on that approximation to create a solver. \[\frac{dy}{dt}=f(t,y)\] Now we’ll try a different approach. Instead of starting from the derivative, think about how we might construct an ODE solver by integrating both sides of [eq:6.1] and then creating a stepping method. Specifically, consider the equation \[y_{n+1}=\int_{t_n}^{t_{n+1}} f(t,y(t))\,dt+y_n\] This equation is derived from [eq:6.1] by integration. The problem with doing the integration directly is that the RHS requires knowledge of \(y(t)\) – the thing we are trying to find! But what happens if we choose an approximation for \(f\) and do the integration? It turns out that different multistep methods are created by using different approximations for \(f\). We’ll look at a few methods in this chapter.

Trapezoidal method

A simple approximation to \(f\) in [eq:6.2] is to assume that the points \((t_n,f_n)\) and \((t_{n+1},f_{n+1})\) are connected by a straight line. This is the case of linear interpolation between two points and is shown in Figure 6.1. Linear interpolation between \(t_n\) and \(t_{n+1}\) may be written in this form: \[f(t)=(1-\alpha)f_n+\alpha f_{n+1}\] where \(\alpha\in[0,1]\). Obviously, when \(\alpha=0\) we get \(f(t)=f_n\) and when \(\alpha=1\) we get \(f(t)=f_{n+1}\). Since [eq:6.3] is linear in \(\alpha\), this expression draws a straight line between these two points.

Now consider what happens if we use the interpolation method [eq:6.3] in the integral [eq:6.2]. To make the algebra easier we recognize \(\alpha=(t-t_n)/h\) so that \(h\,d\alpha=dt\).


Figure 6.1: The trapezoidal method. The red dotted line is the linear interpolation. The grey region corresponds to the "area under the curve" which is computed by the integral [eq:6.4]. Note this method is implicit.

In this case, [eq:6.2] becomes \[y_{n+1}=h\int_0^1\left((1-\alpha)f_n+\alpha f_{n+1}\right)d\alpha+y_n\] which integrates to \[y_{n+1}=\frac{h}{2}\left(f_{n+1}+f_n\right)+y_n\] This is an iteration which takes us from \(t_n\) to \(t_{n+1}\). Here are some things to note about [eq:6.5]:

  • This is an implicit method – to get \(y_{n+1}\) on the LHS you need to know \(f_{n+1}\), which depends upon \(y_{n+1}\). Fortunately, we already know how to deal with implicit methods – use a rootfinder like Newton’s method.
  • The equation whose root we must find is \[g(y_{n+1})=y_{n+1}-\frac{h}{2}\left(f_{n+1}+f_n\right)-y_n=0\] where \(f_{n+1}=f(t_{n+1},y(t_{n+1}))\) and \(f_n=f(t_n,y(t_n))\) as usual.
  • This method approximates \(f(t)\) using a straight line between the two end points, and then integrates the approximation. This is exactly the same as the "trapezoidal method" for numerical integration which you may have seen in a previous numerical analysis class. Therefore, this ODE solver is called the "trapezoidal method".
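The stepping loop and the per-step rootfinder solve can be sketched in a few lines. The code below is not the author's program; it is a minimal Python illustration, assuming a scalar ODE, which solves \(g(y_{n+1})=0\) at each step with Newton's method using a finite-difference derivative.

```python
import math

def trapezoidal_solve(f, t0, y0, h, nsteps):
    """Integrate the scalar ODE y' = f(t, y) with the trapezoidal method.
    Each step solves g(y_new) = y_new - (h/2)*(f(t+h, y_new) + f(t, y)) - y = 0
    using Newton's method with a finite-difference derivative."""
    t, y = t0, y0
    for _ in range(nsteps):
        fn = f(t, y)
        t_new = t + h
        y_new = y + h * fn                      # forward-Euler predictor as initial guess
        for _ in range(50):                     # Newton iterations
            g = y_new - 0.5 * h * (f(t_new, y_new) + fn) - y
            if abs(g) < 1e-13:
                break
            eps = 1e-7 * max(1.0, abs(y_new))   # finite-difference step
            g_eps = (y_new + eps) - 0.5 * h * (f(t_new, y_new + eps) + fn) - y
            y_new -= g * eps / (g_eps - g)      # Newton update: y -= g / g'
        t, y = t_new, y_new
    return y

# Exponential growth ODE y' = y, y(0) = 1; the exact answer at t = 1 is e.
approx = trapezoidal_solve(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

For the linear test problem the Newton iteration converges in a single update; for a nonlinear \(f\) it typically takes a few iterations per step.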

Example: the exponential growth ODE


Figure 6.2: The exponential growth ODE integrated using the trapezoidal method. The open circles are the computed results, the lines are the analytic solution. The fit is pretty good, at least on this scale.


Figure 6.3: Error scaling of the trapezoidal method. The error scales as \(O(h^2)\).

Adams-Bashforth method – 2nd order

The trapezoidal method uses linear interpolation between points at \(t_n\) and \(t_{n+1}\). It is an implicit method, meaning that it assumes we know the end value \(y_{n+1}\) in order to compute the step using \(f_{n+1}=f(t_{n+1},y_{n+1})\). However, the step computation is what is used to get the end value \(y_{n+1}\). Therefore, computing the step requires a solve operation to untangle \(y_{n+1}\) from \(f_{n+1}\). A solve operation is generally expensive – solvers like Newton’s method are iterative, and one might need several iterations before the solution is found. Wouldn’t it be better to somehow use the linear interpolation to create an explicit method requiring no solver step?

The explicit 2nd order Adams-Bashforth method is such a method. It uses the known (previously computed) values at times \(t_n\) and \(t_{n+1}\) to compute the value at \(t_{n+2}\). The idea is shown in Figure 6.4. In a sense, Adams-Bashforth takes the interpolation formula [eq:6.3] defined on \(t\in[t_n,t_{n+1}]\) and


Figure 6.4: The 2nd order Adams-Bashforth method. First, linear interpolation is performed between the points \((t_n,y_n)\) and \((t_{n+1},y_{n+1})\). Then the line is extended to \(t_{n+2}\) and the grey region under the curve is integrated per [eq:6.7] to give the new value \(y_{n+2}\).

extrapolates it forward to time \(t_{n+2}\). That is, it computes the integral

\[y_{n+2}=\int_{t_{n+1}}^{t_{n+2}} f_{n,n+1}(t,y(t))\,dt+y_{n+1}\] using the approximation to \(f_{n,n+1}\) based on the linear interpolation formula defined on \(t\in[t_n,t_{n+1}]\), [eq:6.3]. We transform the integral [eq:6.6] via the usual change of variables \(\alpha=(t-t_n)/h\) to get \[y_{n+2}=h\int_1^2\left((1-\alpha)f_n+\alpha f_{n+1}\right)d\alpha+y_{n+1}\] Upon integration we get the second order Adams-Bashforth method,

\[y_{n+2}=h\left(\frac{3}{2}f_{n+1}-\frac{1}{2}f_n\right)+y_{n+1}\]

Notice that second-order Adams-Bashforth uses two samples of the function, at \(t_n\) and \(t_{n+1}\), to compute the future value at \(t_{n+2}\). This is an explicit method since we presumably already know \(f_{n+1}\) and \(f_n\) when computing \(y_{n+2}\). But this raises the question: how do we start the iteration? That is, we have only one initial condition, \(y_0\), but need two \(y\) values to iterate. The answer is that one can start with \(y_0\), then use a single step of e.g. forward Euler or Runge-Kutta to get \(y_1\), then continue the solution using Adams-Bashforth. This is the method employed in the Matlab program whose output is shown in Figure 6.5. Note that the accuracy is \(O(h^2)\), as expected from a second-order method.
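The bootstrap-then-iterate pattern just described can be sketched as follows. This is a Python illustration (not the author's Matlab program), assuming a scalar ODE; it bootstraps \(y_1\) with one Heun (2nd-order Runge-Kutta) step so the startup does not degrade the overall \(O(h^2)\) accuracy.

```python
import math

def ab2_solve(f, t0, y0, h, nsteps):
    """Integrate y' = f(t, y) with 2nd-order Adams-Bashforth:
        y_{n+2} = y_{n+1} + h*(3/2*f_{n+1} - 1/2*f_n)."""
    # Bootstrap: one Heun (RK2) step supplies y_1 at 2nd-order accuracy.
    k1 = f(t0, y0)
    k2 = f(t0 + h, y0 + h * k1)
    y = y0 + 0.5 * h * (k1 + k2)
    f_prev = k1                       # f_0
    t = t0 + h
    for _ in range(nsteps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return y

# Exponential growth ODE y' = y, y(0) = 1, integrated to t = 1.
approx = ab2_solve(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Each step costs a single new evaluation of \(f\) and no solver call, which is the point of the explicit method.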


Figure 6.5: The exponential growth ODE integrated using 2nd order Adams-Bashforth.


Figure 6.6: Error scaling of 2nd order Adams-Bashforth. The error scales as \(O(h^2)\).

Adams-Bashforth method – 3rd order

The second-order Adams-Bashforth method used linear interpolation to approximate the function \(f(t,y)\) over the interval \(t_n\le t\le t_{n+1}\), then used that approximation in the integral [eq:6.2] to find \(y_{n+2}\). What if you want more accuracy? Can you use quadratic interpolation on three past points to find the future value? The answer is "yes", and the advantage is that the order of the method increases. In this section we will derive the third-order Adams-Bashforth method (quadratic interpolation).

Given two points \((t_n,y_n)\) and \((t_{n+1},y_{n+1})\) we can draw a straight line through them. Given three points, we can draw a quadratic (a parabola) through them. This is the process of interpolation, which you have likely studied in a previous math class. Given a set of points, there are several ways to write down an interpolating polynomial which passes through them. In this case, we are given three points, \((t_n,y_n)\), \((t_{n+1},y_{n+1})\), and \((t_{n+2},y_{n+2})\), and we will use the Lagrange interpolation polynomial to write down the quadratic. It is \[f(t)=\frac{(t-t_{n+1})(t-t_{n+2})}{(t_n-t_{n+1})(t_n-t_{n+2})}f_n+\frac{(t-t_n)(t-t_{n+2})}{(t_{n+1}-t_n)(t_{n+1}-t_{n+2})}f_{n+1}+\frac{(t-t_n)(t-t_{n+1})}{(t_{n+2}-t_n)(t_{n+2}-t_{n+1})}f_{n+2}\] \[=\frac{(t-t_{n+1})(t-t_{n+2})}{2h^2}f_n-\frac{(t-t_n)(t-t_{n+2})}{h^2}f_{n+1}+\frac{(t-t_n)(t-t_{n+1})}{2h^2}f_{n+2}\] This expression might look nasty, but note that it is just a quadratic: every term is second order in \(t\). Now we insert this expression into the integral [eq:6.2] to find \(y_{n+3}\): \[y_{n+3}=\int_{t_{n+2}}^{t_{n+3}}\left[\frac{(t-t_{n+1})(t-t_{n+2})}{2h^2}f_n-\frac{(t-t_n)(t-t_{n+2})}{h^2}f_{n+1}+\frac{(t-t_n)(t-t_{n+1})}{2h^2}f_{n+2}\right]dt+y_{n+2}\] The thing to note is that we use the previous points \((t_n,y_n)\), \((t_{n+1},y_{n+1})\), \((t_{n+2},y_{n+2})\) to create an approximate version of \(f(t,y)\) using interpolation. Then we integrate that approximation over the interval \(t\in[t_{n+2},t_{n+3}]\) to get \(y_{n+3}\). The method is shown schematically in Figure 6.7.


Figure 6.7: The 3rd-order Adams-Bashforth method. The upward pointing blue arrows indicate that the three points \(f_n\), \(f_{n+1}\) and \(f_{n+2}\) are known and are used to create the quadratic interpolation polynomial for \(f(t,y)\) (red dotted line). That polynomial is integrated from \(t_{n+2}\) to \(t_{n+3}\) to get the new value \(y_{n+3}\).

We could carry out the integration immediately, but it would be rather messy. Therefore, we employ the usual change of variables \(\alpha=(t-t_{n+1})/h\). Note that \(h\,d\alpha=dt\), and that we have mapped \(t=t_n\rightarrow\alpha=-1\), \(t=t_{n+1}\rightarrow\alpha=0\), \(t=t_{n+2}\rightarrow\alpha=1\), and \(t=t_{n+3}\rightarrow\alpha=2\). Therefore, the integral becomes \[y_{n+3}=h\int_1^2\left[\frac{1}{2}\alpha(\alpha-1)f_n-(\alpha+1)(\alpha-1)f_{n+1}+\frac{1}{2}(\alpha+1)\alpha f_{n+2}\right]d\alpha+y_{n+2}\] or \[y_{n+3}=h\left(\frac{5}{12}f_n-\frac{4}{3}f_{n+1}+\frac{23}{12}f_{n+2}\right)+y_{n+2}\] which is the third-order Adams-Bashforth method. As you might expect for a third-order method, the accuracy is \(O(h^3)\). Plots of its solution to the exponential growth ODE and error scaling are shown in Figures 6.8 and 6.9.
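The iteration above can be coded the same way as the second-order method; the only difference is that three \(f\) values must be kept, and two startup values are needed. The Python sketch below is an illustration (not the textbook's program) and bootstraps the first two values with classical 4th-order Runge-Kutta steps so the startup error does not pollute the \(O(h^3)\) scaling.

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step, used only for startup values."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def ab3_solve(f, t0, y0, h, nsteps):
    """Integrate y' = f(t, y) with 3rd-order Adams-Bashforth:
        y_{n+3} = y_{n+2} + h*(23/12*f_{n+2} - 4/3*f_{n+1} + 5/12*f_n)."""
    ts = [t0, t0 + h, t0 + 2 * h]
    ys = [y0]
    for i in range(2):                  # two RK4 startup steps give y_1 and y_2
        ys.append(rk4_step(f, ts[i], ys[i], h))
    fs = [f(ts[i], ys[i]) for i in range(3)]
    t, y = ts[2], ys[2]
    for _ in range(nsteps - 2):
        y = y + h * (23.0 / 12.0 * fs[2] - 4.0 / 3.0 * fs[1] + 5.0 / 12.0 * fs[0])
        t += h
        fs = [fs[1], fs[2], f(t, y)]    # slide the window of f values forward
    return y

# Exponential growth ODE y' = y, y(0) = 1, integrated to t = 1.
approx = ab3_solve(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Halving the stepsize should cut the error by roughly a factor of eight, the signature of a third-order method.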


Figure 6.8: The exponential growth ODE integrated using 3rd order Adams-Bashforth.


Figure 6.9: Error scaling of 3rd order Adams-Bashforth. The error scales as \(O(h^3)\).

Adams-Moulton method – 3rd order

The Adams-Bashforth methods are explicit – they use the value of \(y\) at the present and past times to compute the step into the future. Multistep methods may also be implicit – the trapezoidal method is an example of an implicit multistep method you have already seen above. Higher-order implicit multistep methods also exist. One class is the Adams-Moulton methods. Like Adams-Bashforth, they use interpolation to fit a polynomial to the previous \(f(t,y)\) values, but Adams-Moulton methods incorporate the future \(y\) into the interpolation too. The basic idea is shown in Figure 6.10. The method interpolates the function \(f(t,y)\) at the points \(t_n\) (past), \(t_{n+1}\) (present), and \(t_{n+2}\) (future) using the Lagrange polynomial \[f(t)=\frac{(t-t_{n+1})(t-t_{n+2})}{(t_n-t_{n+1})(t_n-t_{n+2})}f_n+\frac{(t-t_n)(t-t_{n+2})}{(t_{n+1}-t_n)(t_{n+1}-t_{n+2})}f_{n+1}+\frac{(t-t_n)(t-t_{n+1})}{(t_{n+2}-t_n)(t_{n+2}-t_{n+1})}f_{n+2}\] so the iteration is \[y_{n+2}=\int_{t_{n+1}}^{t_{n+2}}\left[\frac{(t-t_{n+1})(t-t_{n+2})}{2h^2}f_n-\frac{(t-t_n)(t-t_{n+2})}{h^2}f_{n+1}+\frac{(t-t_n)(t-t_{n+1})}{2h^2}f_{n+2}\right]dt+y_{n+1}\] Note that \(f_{n+2}=f(t_{n+2},y_{n+2})\), so \(y_{n+2}\) appears on both sides of [eq:6.11] – the method is implicit. Nonetheless, we can treat \(f_{n+2}\) as a constant factor and go ahead and perform the integral. Change variables as usual: \(\alpha=(t-t_n)/h\) so \(h\,d\alpha=dt\), and the integral becomes \[y_{n+2}=h\int_1^2\left[\frac{(\alpha-1)(\alpha-2)}{2}f_n-\alpha(\alpha-2)f_{n+1}+\frac{\alpha(\alpha-1)}{2}f_{n+2}\right]d\alpha+y_{n+1}\]


Figure 6.10: Third-order Adams-Moulton method.

Evaluating each term yields the third-order Adams-Moulton method: \[y_{n+2}=h\left(-\frac{1}{12}f_n+\frac{2}{3}f_{n+1}+\frac{5}{12}f_{n+2}\right)+y_{n+1}\] Again, since \(f_{n+2}=f(t_{n+2},y_{n+2})\), equation [eq:6.13] must be solved using a rootfinder to get \(y_{n+2}\), but you should be comfortable with such implicit methods by now.
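A Python sketch of the method follows (again, an illustration rather than the author's Matlab program). One assumption made here to keep the sketch short: the implicit equation is solved by simple fixed-point iteration, which converges when \(h\,|\partial f/\partial y|\) is small; a stiff problem would call for Newton's method instead.

```python
import math

def am3_solve(f, t0, y0, h, nsteps):
    """Integrate y' = f(t, y) with 3rd-order Adams-Moulton:
        y_{n+2} = y_{n+1} + h*(5/12*f_{n+2} + 2/3*f_{n+1} - 1/12*f_n).
    The implicit equation is solved by fixed-point iteration (see note above)."""
    # Bootstrap: one classical RK4 step supplies y_1.
    k1 = f(t0, y0)
    k2 = f(t0 + 0.5 * h, y0 + 0.5 * h * k1)
    k3 = f(t0 + 0.5 * h, y0 + 0.5 * h * k2)
    k4 = f(t0 + h, y0 + h * k3)
    y = y0 + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    f0, f1 = k1, f(t0 + h, y)
    t = t0 + h
    for _ in range(nsteps - 1):
        t_new = t + h
        y_new = y + h * (1.5 * f1 - 0.5 * f0)        # AB2 predictor as initial guess
        for _ in range(50):                          # fixed-point iteration
            y_next = y + h * (5.0 / 12.0 * f(t_new, y_new)
                              + 2.0 / 3.0 * f1 - 1.0 / 12.0 * f0)
            if abs(y_next - y_new) < 1e-13:
                y_new = y_next
                break
            y_new = y_next
        f0, f1 = f1, f(t_new, y_new)
        t, y = t_new, y_new
    return y

# Exponential growth ODE y' = y, y(0) = 1, integrated to t = 1.
approx = am3_solve(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Using an explicit Adams-Bashforth step as a predictor and the implicit Adams-Moulton formula as a corrector, as done above, is a common pairing.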

Example: Adams methods and the Van der Pol oscillator

Here is a quick demonstration showing why implicit methods like Adams-Moulton are useful, despite requiring a rootfinder.


Figure 6.11: Third-order Adams-Bashforth method solving the Van der Pol system. Stepsizes \(h=0.1\) and \(0.01\) were used. The method is clearly unstable for \(h=0.1\) – the y-axis of the graph bottoms out at \(-10^{177}\), indicating that the solution has exploded! The method is stable for \(h=0.01\) but is not as accurate as the Adams-Moulton method shown in Figure 6.12.


Figure 6.12: Third-order Adams-Moulton method solving the Van der Pol system. The stepsizes were \(h=0.1\) and \(0.01\). In contrast to the Adams-Bashforth result shown in Figure 6.11, the Adams-Moulton method remains stable for \(h=0.1\).

I have written programs implementing 3rd order Adams-Bashforth and 3rd order Adams-Moulton to integrate the Van der Pol oscillator [eq:4.5]. Recall that the Van der Pol system is stiff, and good practice recommends using implicit methods when integrating stiff ODEs. The reason is that implicit methods can use larger stepsizes without difficulty, so they can quickly solve a stiff system which would force an explicit method like Adams-Bashforth to use small steps.

Shown in Figures 6.11 and 6.12 are plots of the Van der Pol solution computed by 3rd order Adams-Bashforth (explicit) vs. 3rd order Adams-Moulton (implicit). Both solvers used stepsize \(h=0.1\). At first glance, the result of the Adams-Bashforth computation looks odd. The reason is that the Adams-Bashforth method became unstable, so its output exploded to roughly \(10^{177}\) in magnitude – the method is useless for Van der Pol at this stepsize. (Of course, you can decrease the stepsize at the cost of a longer simulation time. That’s not a problem for this toy problem, but if you are faced with a large computation which takes many hours, you want to squeeze as much performance out of your method as possible.) On the other hand, Adams-Moulton works just fine at this stepsize.

I should mention that there is a trade-off between these two methods: Adams-Bashforth requires a much smaller stepsize to handle the Van der Pol system, but Adams-Moulton runs a nonlinear solver (Matlab’s fsolve()) at each step. Solvers are typically iterative, so they can take significant time to converge on a solution. Therefore, Adams-Bashforth requires less computation per step, but is unstable unless the stepsize is small. On the other hand, the Adams-Moulton method takes longer per step, but remains stable for larger stepsizes. Which method to choose depends upon the particular details of your problem – you typically need to experiment with different methods and different stepsizes to find the best way to solve your particular problem.

Generalizations

There is a whole family of linear multistep methods of orders 2, 3, 4, 5, 6, and so on. The general pattern employed by these methods looks like this: \[\alpha_0 y_i+\alpha_1 y_{i+1}+\alpha_2 y_{i+2}+\cdots+y_{i+k}=h\left(\beta_0 f_i+\beta_1 f_{i+1}+\beta_2 f_{i+2}+\cdots+\beta_k f_{i+k}\right)\] These are called k-step methods. Some things to note:

  • The coefficient in front of the term \(y_{i+k}\) on the LHS is one. This is the future value which we wish to find.
  • If we set all \(\alpha_j=0\) except \(\alpha_{k-1}\) and additionally set \(\beta_k=0\), then we recover the explicit Adams-Bashforth methods.
  • If we set all \(\alpha_j=0\) except \(\alpha_{k-1}\) and keep \(\beta_k\) non-zero, then we get the Adams-Moulton methods.
  • I won’t prove it here, but it can be shown that a k-step Adams-Bashforth method has order \(O(h^k)\) while a k-step Adams-Moulton method has order \(O(h^{k+1})\). That’s another advantage of the implicit over the explicit methods.
  • Keeping additional \(\alpha_j\) terms gives so-called backward difference methods. Those methods are outside the scope of this class.
  • Regarding stability of linear multistep methods, there is a large body of well-developed theory specifying conditions under which they are stable. That theory is also outside the scope of this class.
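To make the general pattern above concrete, here is a short Python sketch of a generic explicit k-step stepper (the \(\beta_k=0\) case). The Adams-Bashforth coefficient arrays are read off from the 2nd- and 3rd-order formulas derived earlier in the chapter; the k startup values are assumed supplied by the caller (in the usage below, exact values of the test solution, purely for illustration).

```python
import math

def explicit_multistep(alphas, betas, f, t0, ys, h, nsteps):
    """Advance the explicit k-step formula (beta_k = 0):
        y_{i+k} = -(alpha_0*y_i + ... + alpha_{k-1}*y_{i+k-1})
                  + h*(beta_0*f_i + ... + beta_{k-1}*f_{i+k-1})
    `ys` holds the k startup values y_0 ... y_{k-1}."""
    k = len(alphas)
    ys = list(ys)                                  # sliding window of k values
    for i in range(nsteps - k + 1):
        fv = [f(t0 + (i + j) * h, ys[j]) for j in range(k)]
        y_new = (-sum(a * yj for a, yj in zip(alphas, ys))
                 + h * sum(b * fj for b, fj in zip(betas, fv)))
        ys = ys[1:] + [y_new]
    return ys[-1]

# 2nd-order Adams-Bashforth: y_{i+2} - y_{i+1} = h*(3/2*f_{i+1} - 1/2*f_i)
ab2 = ([0.0, -1.0], [-0.5, 1.5])
# 3rd-order Adams-Bashforth: y_{i+3} - y_{i+2} = h*(23/12*f_{i+2} - 4/3*f_{i+1} + 5/12*f_i)
ab3 = ([0.0, 0.0, -1.0], [5.0 / 12.0, -4.0 / 3.0, 23.0 / 12.0])
```

Writing both methods as coefficient arrays against one driver makes plain that the individual methods differ only in their \(\alpha_j\) and \(\beta_j\) values.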

High-order linear multistep methods like these are implemented in high-end ODE solvers like the Sundials suite from Lawrence Livermore National Laboratory. Most likely, in your work you will just call such a "canned" solver rather than writing your own, so you are spared the complexities of implementing such a solver.

Chapter summary

Here are the important points made in this chapter:

  • Linear multistep methods are derived by interpolating previously-computed (and sometimes future) points and using the interpolation in [eq:6.2] to compute solution points in the future.
  • The order of linear multistep methods is increased by incorporating more interpolation points into the interpolated approximation for \(f(t,y)\). We examined the trapezoidal (2nd order), Adams-Bashforth (2nd and 3rd order), and Adams-Moulton (3rd order) methods.
  • Adams-Bashforth methods are explicit, Adams-Moulton are implicit.
  • Implicit methods like Adams-Moulton are recommended for stiff systems since they remain stable while also allowing larger steps than explicit methods. However, implicit methods require running a rootfinder at every step, so a performance trade-off exists between explicit and implicit methods.
  • The methods examined in this chapter are members of a large class of linear multistep methods described by [eq:6.14]. Such methods are commonly provided in high-end ODE solvers.

This page titled 1.6: Linear multistep methods is shared under a CC BY-SA 3.0 license and was authored, remixed, and/or curated by Stuart Brorson.
