4: Differential Equations
We now discuss the numerical solution of ordinary differential equations, considering both the initial value problem and the boundary value problem.
Initial value problem
We begin with the simple Euler method, then discuss the more sophisticated Runge-Kutta methods, and conclude with adaptive Runge-Kutta methods such as the Dormand-Prince method implemented in the MATLAB function ode45.m. Our differential equations are for $x = x(t)$, where the time $t$ is the independent variable.
Euler method
The Euler method is the most straightforward method to integrate a differential equation. Consider the first-order differential equation
\[
\dot{x} = f(t, x), \tag{4.1}
\]
with the initial condition $x(0) = x_0$. Define $t_n = n\Delta t$ and $x_n = x(t_n)$. A Taylor series expansion of $x_{n+1}$ results in
\[
\begin{aligned}
x_{n+1} &= x(t_n + \Delta t) \\
&= x(t_n) + \Delta t\, \dot{x}(t_n) + O(\Delta t^2) \\
&= x(t_n) + \Delta t\, f(t_n, x_n) + O(\Delta t^2).
\end{aligned}
\]
The Euler method is therefore written as
\[
x_{n+1} = x_n + \Delta t\, f(t_n, x_n).
\]
We say that the Euler method steps forward in time using a time step $\Delta t$, starting from the initial value $x_0 = x(0)$. The local error of the Euler method is $O(\Delta t^2)$. The global error, however, incurred when integrating to a time $T$, is a factor of $1/\Delta t$ larger and is given by $O(\Delta t)$. It is therefore customary to call the Euler method a first-order method.
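To make the time-stepping concrete, here is a minimal MATLAB sketch of the Euler method applied to the illustrative test problem $\dot{x} = -x$ with $x(0) = 1$, whose exact solution is $e^{-t}$; the right-hand side, step size, and final time are illustrative choices, not part of the text above.

```matlab
% Euler method for dx/dt = f(t,x), illustrated on the test problem
% dx/dt = -x, x(0) = 1, whose exact solution is exp(-t).
f = @(t, x) -x;            % right-hand side (illustrative choice)
dt = 0.01;                 % time step
T = 1;                     % final time
N = round(T/dt);           % number of steps
t = (0:N)*dt;              % time grid
x = zeros(1, N+1);
x(1) = 1;                  % initial condition x(0) = x0
for n = 1:N
    x(n+1) = x(n) + dt*f(t(n), x(n));   % Euler step
end
fprintf('Euler error at t = %g: %g\n', T, abs(x(end) - exp(-T)));
```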
Modified Euler method
This method is a so-called predictor-corrector method. It is also the first of what we will see are Runge-Kutta methods. As before, we want to solve (4.1). The idea is to average the value of $\dot{x}$ at the beginning and end of the time step. That is, we would like to modify the Euler method and write
\[
x_{n+1} = x_n + \tfrac{1}{2}\Delta t \bigl( f(t_n, x_n) + f(t_n + \Delta t, x_{n+1}) \bigr).
\]
The obvious problem with this formula is that the unknown value $x_{n+1}$ appears on the right-hand side. We can, however, estimate this value in what is called the predictor step. For the predictor step, we use the Euler method to find
\[
x^{p}_{n+1} = x_n + \Delta t\, f(t_n, x_n).
\]
The corrector step then becomes
\[
x_{n+1} = x_n + \tfrac{1}{2}\Delta t \bigl( f(t_n, x_n) + f(t_n + \Delta t, x^{p}_{n+1}) \bigr).
\]
The modified Euler method can be rewritten in the following form, which we will later identify as a Runge-Kutta method:
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f(t_n + \Delta t,\, x_n + k_1), \\
x_{n+1} &= x_n + \tfrac{1}{2}(k_1 + k_2).
\end{aligned} \tag{4.6}
\]
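The predictor-corrector step translates directly into code. Below is a minimal MATLAB sketch of the modified Euler method on the same illustrative test problem $\dot{x} = -x$; comparing its final error with that of the Euler sketch above should show the benefit of the corrector step.

```matlab
% Modified Euler (predictor-corrector) method for dx/dt = f(t,x),
% written in the k1, k2 form of a second-order Runge-Kutta method.
f = @(t, x) -x;            % illustrative right-hand side
dt = 0.01; T = 1; N = round(T/dt);
t = (0:N)*dt;
x = zeros(1, N+1);
x(1) = 1;                  % initial condition
for n = 1:N
    k1 = dt*f(t(n), x(n));              % predictor (Euler) increment
    k2 = dt*f(t(n) + dt, x(n) + k1);    % slope at the predicted endpoint
    x(n+1) = x(n) + (k1 + k2)/2;        % corrector: average the two slopes
end
fprintf('Modified Euler error at t = %g: %g\n', T, abs(x(end) - exp(-T)));
```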
Second-order Runge-Kutta methods
We now derive the complete family of second-order Runge-Kutta methods. Higher-order methods can be similarly derived, but require substantially more algebra.
We again consider the differential equation given by (4.1). A general second-order Runge-Kutta method may be written in the form
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f(t_n + \alpha \Delta t,\, x_n + \beta k_1), \\
x_{n+1} &= x_n + a k_1 + b k_2,
\end{aligned} \tag{4.7}
\]
with $\alpha$, $\beta$, $a$, and $b$ constants that define the particular second-order Runge-Kutta method. These constants are to be constrained by setting the local error of the second-order Runge-Kutta method to be $O(\Delta t^3)$. Intuitively, we might guess that two of the constraints will be $a + b = 1$ and $\alpha = \beta$.
We compute the Taylor series of $x_{n+1}$ directly, and from the Runge-Kutta method, and require them to be the same to order $\Delta t^2$. First, we compute the Taylor series of $x_{n+1}$. We have
\[
x_{n+1} = x(t_n + \Delta t) = x(t_n) + \Delta t\, \dot{x}(t_n) + \tfrac{1}{2}(\Delta t)^2 \ddot{x}(t_n) + O(\Delta t^3).
\]
Now,
\[
\dot{x}(t_n) = f(t_n, x_n).
\]
The second derivative is more complicated and requires partial derivatives. We have
\[
\begin{aligned}
\ddot{x}(t_n) &= \frac{d}{dt} f(t, x(t)) \Big|_{t = t_n} \\
&= f_t(t_n, x_n) + \dot{x}(t_n)\, f_x(t_n, x_n) \\
&= f_t(t_n, x_n) + f(t_n, x_n)\, f_x(t_n, x_n).
\end{aligned}
\]
Therefore,
\[
x_{n+1} = x_n + \Delta t\, f(t_n, x_n) + \tfrac{1}{2}(\Delta t)^2 \bigl( f_t(t_n, x_n) + f(t_n, x_n)\, f_x(t_n, x_n) \bigr) + O(\Delta t^3). \tag{4.9}
\]
Second, we compute $x_{n+1}$ from the Runge-Kutta method given by (4.7). Combining (4.7) into a single expression, we have
\[
x_{n+1} = x_n + a \Delta t\, f(t_n, x_n) + b \Delta t\, f\bigl(t_n + \alpha \Delta t,\, x_n + \beta \Delta t\, f(t_n, x_n)\bigr) + O(\Delta t^3).
\]
We Taylor series expand using
\[
f\bigl(t_n + \alpha \Delta t,\, x_n + \beta \Delta t\, f(t_n, x_n)\bigr) = f(t_n, x_n) + \alpha \Delta t\, f_t(t_n, x_n) + \beta \Delta t\, f(t_n, x_n)\, f_x(t_n, x_n) + O(\Delta t^2).
\]
The Runge-Kutta formula is therefore
\[
x_{n+1} = x_n + (a + b)\Delta t\, f(t_n, x_n) + (\Delta t)^2 \bigl( \alpha b\, f_t(t_n, x_n) + \beta b\, f(t_n, x_n)\, f_x(t_n, x_n) \bigr) + O(\Delta t^3). \tag{4.10}
\]
Comparing (4.9) and (4.10), we find
\[
a + b = 1, \qquad \alpha b = \tfrac{1}{2}, \qquad \beta b = \tfrac{1}{2}.
\]
Since there are only three equations for four parameters, there exists a family of second-order Runge-Kutta methods.
The modified Euler method given by (4.6) corresponds to $\alpha = \beta = 1$ and $a = b = 1/2$. Another second-order Runge-Kutta method, called the midpoint method, corresponds to $\alpha = \beta = 1/2$, $a = 0$ and $b = 1$. This method is written as
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_1\bigr), \\
x_{n+1} &= x_n + k_2.
\end{aligned}
\]
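As a quick numerical check of the claimed second-order accuracy, the following minimal MATLAB sketch integrates the illustrative test problem $\dot{x} = -x$, $x(0) = 1$, to $T = 1$ with the midpoint method for a sequence of halved step sizes; the global error should fall by roughly a factor of four each time the step is halved. The test problem and step sizes are illustrative choices, not part of the derivation above.

```matlab
% Empirical order check for the midpoint (second-order Runge-Kutta) method
% on the illustrative test problem dx/dt = -x, x(0) = 1, integrated to T = 1.
f = @(t, x) -x;
T = 1;
for dt = [0.1 0.05 0.025 0.0125]
    N = round(T/dt);
    x = 1;                                % initial condition
    for n = 0:N-1
        t = n*dt;
        k1 = dt*f(t, x);
        k2 = dt*f(t + dt/2, x + k1/2);    % slope at the midpoint
        x = x + k2;
    end
    fprintf('dt = %8.5f   error = %12.4e\n', dt, abs(x - exp(-T)));
end
```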
Higher-order Runge-Kutta methods
The general second-order Runge-Kutta method was given by (4.7). The general form of the third-order method is given by
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f(t_n + \alpha \Delta t,\, x_n + \beta k_1), \\
k_3 &= \Delta t\, f(t_n + \gamma \Delta t,\, x_n + \delta k_1 + \epsilon k_2), \\
x_{n+1} &= x_n + a k_1 + b k_2 + c k_3.
\end{aligned}
\]
The following constraints on the constants can be guessed: $\alpha = \beta$, $\gamma = \delta + \epsilon$, and $a + b + c = 1$. The remaining constraints need to be derived.
The fourth-order method has $k_1$, $k_2$, $k_3$, and $k_4$. The fifth-order method requires at least $k_1$ through $k_6$. The table below gives the order of the method and the minimum number of stages required.
| order | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| minimum # of stages | 2 | 3 | 4 | 6 | 7 | 9 | 11 |
Because of the jump in the number of stages required between the fourth- and fifth-order methods, the fourth-order Runge-Kutta method has some appeal. The general fourth-order method with four stages has 13 constants and 11 constraints. A particularly simple fourth-order method that has been widely used in the past by physicists is given by
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_1\bigr), \\
k_3 &= \Delta t\, f\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_2\bigr), \\
k_4 &= \Delta t\, f(t_n + \Delta t,\, x_n + k_3), \\
x_{n+1} &= x_n + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4).
\end{aligned}
\]
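A minimal MATLAB implementation of this classical fourth-order method might look as follows; the function name rk4 and its argument list are illustrative choices, not a standard MATLAB routine.

```matlab
function x = rk4(f, t0, x0, dt, N)
% RK4  Integrate dx/dt = f(t,x) with the classical fourth-order
% Runge-Kutta method, taking N steps of size dt from (t0, x0).
x = zeros(1, N+1);
x(1) = x0;
t = t0;
for n = 1:N
    k1 = dt*f(t, x(n));
    k2 = dt*f(t + dt/2, x(n) + k1/2);
    k3 = dt*f(t + dt/2, x(n) + k2/2);
    k4 = dt*f(t + dt,   x(n) + k3);
    x(n+1) = x(n) + (k1 + 2*k2 + 2*k3 + k4)/6;
    t = t + dt;
end
end
```

For example, saving this as rk4.m and calling x = rk4(@(t,x) -x, 0, 1, 0.1, 10) integrates the illustrative test problem $\dot{x} = -x$ from $x(0) = 1$ to $t = 1$; the final entry x(end) should agree with $e^{-1}$ to within roughly $10^{-7}$.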
Adaptive Runge-Kutta Methods
As in adaptive integration, it is useful to devise an ode integrator that automatically finds the appropriate $\Delta t$. The Dormand-Prince method, which is implemented in MATLAB's ode45.m, finds the appropriate step size by comparing the results of a fifth-order and a fourth-order method. It requires six function evaluations per time step and constructs both the fifth-order and the fourth-order method from the same function evaluations.
Suppose the fifth-order method finds $x_{n+1}$ with local error $O(\Delta t^6)$, and the fourth-order method finds $x'_{n+1}$ with local error $O(\Delta t^5)$. Let $\varepsilon$ be the desired error tolerance of the method, and let $e$ be the actual error. We can estimate $e$ from the difference between the fifth- and fourth-order methods; that is,
\[
e = |x_{n+1} - x'_{n+1}|.
\]
Now $e$ is $O(\Delta t^5)$, where $\Delta t$ is the step size taken. Let $\Delta \tau$ be the estimated step size required to get the desired error $\varepsilon$. Then we have
\[
\frac{e}{\varepsilon} = \frac{(\Delta t)^5}{(\Delta \tau)^5},
\]
or, solving for $\Delta \tau$,
\[
\Delta \tau = \Delta t \left( \frac{\varepsilon}{e} \right)^{1/5}.
\]
On the one hand, if the actual error is less than the desired error, $e < \varepsilon$, then we accept $x_{n+1}$ and do the next time step using the larger value of $\Delta \tau$. On the other hand, if the actual error is greater than the desired error, $e > \varepsilon$, then we reject the integration step and redo the time step using the smaller value of $\Delta \tau$. In practice, one usually increases the time step slightly less and decreases it slightly more than this estimate, to avoid the waste of too many failed time steps.
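The step-size update rule can be wrapped in a small helper function. The sketch below is illustrative: the safety factor of 0.9 and the cap on step-size growth are assumptions meant to implement the "increase slightly less, decrease slightly more" advice above, and are not the particular values used inside ode45.m.

```matlab
function dtau = new_step_size(dt, e, tol)
% NEW_STEP_SIZE  Estimate the next step size from the current step dt,
% the estimated local error e, and the desired error tolerance tol,
% using dtau = dt*(tol/e)^(1/5) scaled by a safety factor.
safety = 0.9;               % illustrative: step slightly less aggressively
dtau   = safety*dt*(tol/e)^(1/5);
dtau   = min(dtau, 5*dt);   % illustrative cap on how fast the step can grow
end
```

If $e < \varepsilon$ the step is accepted and the next step uses the returned value; if $e > \varepsilon$ the step is rejected and redone with it.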
System of differential equations
Our numerical methods can be easily adapted to solve higher-order differential equations, or equivalently, a system of differential equations. First, we show how a second-order differential equation can be reduced to two first-order equations. Consider
\[
\ddot{x} = f(t, x, \dot{x}).
\]
This second-order equation can be rewritten as two first-order equations by defining $u = \dot{x}$. We then have the system
\[
\dot{x} = u, \qquad \dot{u} = f(t, x, u).
\]
This trick also works for higher-order equations. For example, the third-order equation
\[
\dddot{x} = f(t, x, \dot{x}, \ddot{x}),
\]
can be written as
\[
\dot{x} = u, \qquad \dot{u} = v, \qquad \dot{v} = f(t, x, u, v).
\]
We can generalize the Runge-Kutta method to solve a system of differential equations. As an example, consider the following system of two odes,
\[
\dot{x} = f(t, x, y), \qquad \dot{y} = g(t, x, y),
\]
with the initial conditions $x(0) = x_0$ and $y(0) = y_0$. The generalization of the commonly used fourth-order Runge-Kutta method would be
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n, y_n), & l_1 &= \Delta t\, g(t_n, x_n, y_n), \\
k_2 &= \Delta t\, f\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_1,\, y_n + \tfrac{1}{2}l_1\bigr), & l_2 &= \Delta t\, g\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_1,\, y_n + \tfrac{1}{2}l_1\bigr), \\
k_3 &= \Delta t\, f\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_2,\, y_n + \tfrac{1}{2}l_2\bigr), & l_3 &= \Delta t\, g\bigl(t_n + \tfrac{1}{2}\Delta t,\, x_n + \tfrac{1}{2}k_2,\, y_n + \tfrac{1}{2}l_2\bigr), \\
k_4 &= \Delta t\, f(t_n + \Delta t,\, x_n + k_3,\, y_n + l_3), & l_4 &= \Delta t\, g(t_n + \Delta t,\, x_n + k_3,\, y_n + l_3), \\
x_{n+1} &= x_n + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4), & y_{n+1} &= y_n + \tfrac{1}{6}(l_1 + 2l_2 + 2l_3 + l_4).
\end{aligned}
\]
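As a concrete sketch, the following MATLAB code applies these formulas to the system obtained by reducing the second-order equation $\ddot{x} = -x$ (simple harmonic motion), that is, $f(t,x,y) = y$ and $g(t,x,y) = -x$ with $x(0) = 1$, $y(0) = 0$; the test problem and step size are illustrative choices. After one full period $T = 2\pi$, the numerical solution should return close to the initial values.

```matlab
% Fourth-order Runge-Kutta for the system dx/dt = f(t,x,y), dy/dt = g(t,x,y),
% illustrated on simple harmonic motion: f = y, g = -x, x(0) = 1, y(0) = 0.
f = @(t, x, y)  y;
g = @(t, x, y) -x;
T = 2*pi; N = 1000; dt = T/N;
x = 1; y = 0; t = 0;
for n = 1:N
    k1 = dt*f(t, x, y);
    l1 = dt*g(t, x, y);
    k2 = dt*f(t + dt/2, x + k1/2, y + l1/2);
    l2 = dt*g(t + dt/2, x + k1/2, y + l1/2);
    k3 = dt*f(t + dt/2, x + k2/2, y + l2/2);
    l3 = dt*g(t + dt/2, x + k2/2, y + l2/2);
    k4 = dt*f(t + dt, x + k3, y + l3);
    l4 = dt*g(t + dt, x + k3, y + l3);
    x = x + (k1 + 2*k2 + 2*k3 + k4)/6;
    y = y + (l1 + 2*l2 + 2*l3 + l4)/6;
    t = t + dt;
end
fprintf('x(2*pi) = %.8f (exact 1), y(2*pi) = %.2e (exact 0)\n', x, y);
```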
Boundary value problems
Shooting method
We consider the general ode of the form
\[
\frac{d^2 y}{dx^2} = f(x, y, dy/dx),
\]
with two-point boundary conditions $y(0) = A$ and $y(1) = B$. We will first formulate the ode as an initial value problem. We have
\[
\frac{dy}{dx} = z, \qquad \frac{dz}{dx} = f(x, y, z).
\]
The initial condition $y(0) = A$ is known, but the second initial condition $z(0) = b$ is unknown. Our goal is to determine $b$ such that $y(1) = B$.
In fact, this is a root-finding problem for an appropriately defined function. We define the function $F = F(b)$ such that
\[
F(b) = y(1) - B,
\]
where $y(1)$ is the numerical value obtained from integrating the coupled first-order differential equations with $y(0) = A$ and $z(0) = b$. Our root-finding routine will want to solve $F(b) = 0$. (The method is called shooting because the slope of the solution curve for $y = y(x)$ at $x = 0$ is given by $b$, and the solution hits the value of $y(1)$ at $x = 1$. This looks like pointing a gun and trying to shoot the target, which is $B$.)
To determine the value of $b$ that solves $F(b) = 0$, we iterate using the secant method, given by
\[
b_{n+1} = b_n - F(b_n) \frac{b_n - b_{n-1}}{F(b_n) - F(b_{n-1})}.
\]
We need to start with two initial guesses for $b$, solving the ode for the two corresponding values of $y(1)$. Then the secant method will give us the next value of $b$ to try, and we iterate until $|y(1) - B| < \text{tol}$, where tol is some specified tolerance for the error.
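Putting the pieces together, here is a minimal MATLAB sketch of the shooting method for the illustrative linear test problem $y'' = -y$ on $0 \le x \le 1$ with $y(0) = 0$ and $y(1) = 1$, whose exact solution is $y = \sin x / \sin 1$ with initial slope $1/\sin 1$. The test problem, step size, tolerance, and initial guesses are illustrative choices; the classical fourth-order Runge-Kutta method serves as the initial value integrator. The local function requires saving this as a single script file in a recent MATLAB release, or placing shoot in its own file.

```matlab
% Shooting method for y'' = f(x, y, y') with y(0) = A, y(1) = B,
% illustrated on y'' = -y, A = 0, B = 1 (exact solution sin(x)/sin(1)).
f  = @(x, y, z) -y;            % illustrative right-hand side
A  = 0;  B = 1;  tol = 1e-8;
F  = @(b) shoot(f, A, b) - B;  % F(b) = y(1) - B
b0 = 0; b1 = 1;                % two initial guesses for the slope z(0) = b
F0 = F(b0); F1 = F(b1);
while abs(F1) > tol            % secant iteration on F(b) = 0
    b2 = b1 - F1*(b1 - b0)/(F1 - F0);
    b0 = b1; F0 = F1;
    b1 = b2; F1 = F(b1);
end
fprintf('computed slope b = %.8f, exact 1/sin(1) = %.8f\n', b1, 1/sin(1));

function yend = shoot(f, A, b)
% Integrate y' = z, z' = f(x,y,z) from x = 0 to x = 1 with y(0) = A,
% z(0) = b, using the classical fourth-order Runge-Kutta method.
dx = 0.01; N = round(1/dx);
y = A; z = b; x = 0;
for n = 1:N
    k1 = dx*z;            l1 = dx*f(x, y, z);
    k2 = dx*(z + l1/2);   l2 = dx*f(x + dx/2, y + k1/2, z + l1/2);
    k3 = dx*(z + l2/2);   l3 = dx*f(x + dx/2, y + k2/2, z + l2/2);
    k4 = dx*(z + l3);     l4 = dx*f(x + dx, y + k3, z + l3);
    y = y + (k1 + 2*k2 + 2*k3 + k4)/6;
    z = z + (l1 + 2*l2 + 2*l3 + l4)/6;
    x = x + dx;
end
yend = y;
end
```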