
1.2: Forward Euler method


    Forward Euler algorithm

    Now we examine our first ODE solver: the Forward Euler method. Here is the problem and the goal: Given a scalar, first-order ODE, \[\tag{eq:2.1} \frac{d y}{d t} = f(t, y)\] and an initial condition \(y(t=0) = y_0\), find how the function \(y(t)\) evolves for all times \(t > 0\). In particular, write down an algorithm which may be executed by a computer to find the evolution of \(y(t)\) for all times.

    To derive the algorithm, first replace the exact equation with an approximation based on the forward difference derivative to get \[\tag{eq:2.2} \frac{y(t + h) - y(t)}{h} \approx f(t, y)\] Now discretize the equation. That means we replace our function \(y(t)\) defined on continuous \(t\) with a sampled function \(y_n\) defined on discrete times \(t_n\). That is, \(y_n = y(t_n)\). We also imagine the time step between samples is small, \(h = t_{n+1} - t_n\). In this case, [eq:2.2] becomes \[\tag{eq:2.3} \frac{y_{n+1} - y_n}{h} \approx f(t_n, y_n)\] Moving forward, we will replace the \(\approx\) sign with \(=\) for convenience, and keep in mind that behind our expression lurks an approximation error which grows with increasing \(h\). Next, imagine we are sitting at the present time point \(t_n\), and rewrite [eq:2.3] to move the future to the LHS and the present to the RHS. We get \[\tag{eq:2.4} y_{n+1} = y_n + h f(t_n, y_n)\] Now observe: This expression says that if we know the values of \(t_n\) and \(y_n\) at the present, then to get the future value \(y_{n+1}\) we just perform the computation on the RHS. That is, equation [eq:2.4] describes a way to step forward in time. We start at time \(t=0\) with initial condition \(y = y_0\) and use equation [eq:2.4] to step to \(y_1\), then use that result to step to \(y_2\), and then \(y_3\), and so on. By iterating [eq:2.4] we generate a vector of \(y_n\) values which constitute the sampled version of the function \(y(t)\).

Algorithm 1 shows pseudocode implementing the forward Euler method. A graphical depiction of the algorithm is shown in fig. 2.1.

    __________________________________________________________________________________________________________________________________

    Algorithm 1 Forward Euler Method

    __________________________________________________________________________________________________________________________________

Inputs: initial condition \(y_0\), step size \(h\), end time \(t_{end}\).

    Initialize: \(t_0 = 0, y_0\).

    for \(n = 0: t_{end}/h\) do

    \(t_n = n h\)

    \(y_{n+1} \leftarrow y_n + h f(t_n, y_n)\)

    end for 

    return \(y\) vector.

    __________________________________________________________________________________________________________________________________
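The booklet's example programs are written in Matlab; the sketch below is a rough Python rendering of Algorithm 1. The function name and argument list here are illustrative, not taken from the booklet.

    def forward_euler(f, y0, h, t_end):
        """Integrate dy/dt = f(t, y) from t = 0 to t_end using forward Euler."""
        n_steps = int(t_end / h)
        t = [n * h for n in range(n_steps + 1)]
        y = [0.0] * (n_steps + 1)
        y[0] = y0
        for n in range(n_steps):
            # The forward Euler step [eq:2.4]: y_{n+1} = y_n + h f(t_n, y_n)
            y[n + 1] = y[n] + h * f(t[n], y[n])
        return t, y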



    Figure 2.1: The forward Euler algorithm. At each time step the algorithm computes the slope of \(f(t, y)\), then moves forward by step \(h\) to find the next value of \(y\). The slope to use is the slope of the curve at the beginning of the interval. This figure shows a problem with this algorithm: the actual solution may curve away from the computed solution since forward Euler uses the old slope at \(t_n\) to compute the step, and that slope may not point the method to the correct \(y_{n+1}\) at time \(t_{n+1}\).

    Here are some things to notice about forward Euler:

• Referring to fig. 2.1, you can see that computing the solution relies on finding the slope of \(y\) using \(f(t,y)\) (i.e. the derivative) at each point \(t_n\), then using the slope to extrapolate \(y\) forward one step. In the figure the thick blue line denotes the slope at each \(t_n\). Note that the slope used is the slope at the beginning of the step interval – hence the name "forward" Euler. You can see how each forward step is determined by the slope of the blue line.
    • Forward Euler is an instance of a so-called "explicit method". Explicit means all calculations required to take the step are present on the RHS of [eq:2.4].
• Fig. 2.1 clearly shows a problem with forward Euler: The actual solution can curve away from the forward Euler solution, causing an error to build up in the forward Euler solution. The error clearly depends upon the size of \(h\): larger \(h\) leads to larger error, while smaller \(h\) leads to smaller error. The difficulty with smaller \(h\) is that smaller step sizes require more steps – and longer computation time – to reach the same end time, \(t_{end}\).

    Besides understanding the math behind the method, there are important implementation features to notice. The code implementing forward Euler is broken into three parts:

    1. A top level main program called "test forward euler". This is the program run by the user. It sets the model parameters used and invokes the solver itself. It then makes plots of the result.
2. The solver implementation called "forward euler". This function is written in a generic way: it hard-codes neither the parameters of the particular ODE under consideration nor algorithm features like the step size. The goal is to have a generic solver which may be used on any ODE without modification.
    3. An implementation of the ODE equations contained in a file called "f". This corresponds to the function \(f(t_n, y_n)\) in [eq:2.4]. Model parameters are passed directly from the top level to "f" as global variables.

A block diagram showing this architecture is presented in fig. 2.2.


    Figure 2.2: The three-layer architecture used in the example Matlab programs implementing forward Euler. The user runs the main program called "test forward euler". This in turn calls the solver "forward euler", which in turn calls the function "f" which contains the actual ODE. A similar design pattern is used for almost all solvers presented in this booklet.

The architecture in fig. 2.2 is a simple "software pattern", meaning that it is a good way to partition the code into different pieces, each of which performs a specific function. Almost all ODE solver examples I present in this booklet are architected using this three-layer pattern. Real-world ODE solvers like Sundials or DifferentialEquations.jl are also frequently partitioned into different pieces, although their architectures are generally more complicated and have more pieces. The additional pieces might handle e.g. initialization of the solver, management of memory, error handling, sensitivity analysis, and so on. The point is that it is helpful to partition your program into different pieces which handle logically distinct functions. Indeed, this is one of the most important things you learn to do when becoming a software engineer.
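To make the three-layer pattern concrete, here is a minimal Python sketch reusing the forward_euler function sketched above. All names and parameter values are illustrative stand-ins for the booklet's Matlab files.

    # Layer 3: the ODE right-hand side f(t, y). The model parameter alpha
    # is set at the top level; the booklet's Matlab code shares it as a global.
    alpha = -0.5

    def f(t, y):
        return alpha * y

    # Layer 1: the top-level driver. It sets parameters, invokes the generic
    # solver (layer 2, the forward_euler function above), and would
    # normally plot the result.
    def test_forward_euler():
        t, y = forward_euler(f, y0=1.0, h=0.1, t_end=5.0)
        print(t[-1], y[-1])

    test_forward_euler()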

    Example: the exponential growth ODE

As a first example of our simple solver, we’ll apply forward Euler to the exponential growth ODE, \[\tag{eq:2.5} \frac{d y}{d t} = \alpha y\] We already know the solution to this equation – it is \(y(t) = y(0) e^{\alpha t}\). For \(\alpha < 0\) the solution decays to zero as \(t \rightarrow \infty\), while for \(\alpha > 0\) the solution grows without bound as \(t \rightarrow \infty\).

Now use the prescription to discretize the equation for forward Euler and get \[\tag{eq:2.6} y_{n+1} = y_n + h\alpha y_n\] The result computed using forward Euler is shown in fig. 2.3 for varying values of \(\alpha\).
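As a quick check of [eq:2.6], here is a minimal sketch that iterates the formula and compares the result against the analytic solution. The values of \(\alpha\), \(h\), and the end time are chosen for illustration.

    import math

    alpha, h, y0 = 0.3, 0.1, 1.0
    y = y0
    for n in range(int(2.0 / h)):      # iterate [eq:2.6] out to t = 2
        y = y + h * alpha * y
    print(y)                           # forward Euler result: ~1.806
    print(y0 * math.exp(alpha * 2.0))  # analytic y(2): ~1.822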


Figure 2.3: The solution to [eq:2.5] computed using the forward Euler algorithm for different values of \(\alpha\). The computed function \(y_n\) is plotted using open circles and the analytic solution is plotted using red lines. The stepsize used was \(h=0.1\). Note that the computed and analytic solutions don’t exactly match, particularly in the case \(\alpha = 0.3\).

The figure shows the computed solution doesn’t exactly track the analytic solution. What is wrong? The answer has to do with the errors incurred by using the forward difference formula to approximate the derivative. Recall that the forward difference expression [eq:1.8] is only true in the limit where the stepsize goes to zero, \(h \rightarrow 0\). For non-zero \(h\), the forward difference approximation is inaccurate, and per the derivation presented in 1.3.1 the magnitude of the error scales as \(O(h)\). We can quantify this by examining the RMS error of the computed solution. The RMS error is the root-mean-square difference between the computed and the analytic solutions, computed as follows: \[\tag{eq:2.7} e = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_t(i) - y_c(i) \right)^2 }\] Here, \(y_t\) is the "mathematically true" (analytic) solution, \(y_c\) is the solution computed using forward Euler, \(N\) is the number of sample points in the solution, and \(e\) is the RMS error. This is also called the "global error" in some textbooks. We expect that the RMS error will increase with increasing \(h\) – larger stepsizes accumulate larger errors. The question is, how does the error behave with \(h\)? A plot of RMS error vs. \(h\) is shown in fig. 2.4.

On the log-log plot the RMS error increases with \(h\) by following a straight line. A useful feature of log-log plots is that they clearly reveal power-law relationships. Specifically, if a function obeys a power-law \(y \sim x^p\) then the slope of the \(y\) vs. \(x\) line on a log-log plot is the power \(p\). This can be understood by taking the log of both sides of the power-law (which is what plotting on a log-log scale does) and noting that \(p\) becomes the slope of the line. The slope of the line in fig. 2.4 is exactly 1.


    Figure 2.4: The error incurred by the forward Euler method vs. stepsize \(h\) plotted on a log-log scale. Note that the slope of the error points is one. This implies the error \(e \sim O(h)\).

This means the RMS error incurred by the forward Euler method is proportional to the stepsize \(h\) – that is, error \(e \sim O(h)\). This scaling relationship was derived in 1.3.1 when the forward difference expression was derived from the Taylor series – terms of \(O(h)\) and above were discarded to get the forward difference expression, which means that using the approximation carries an \(O(h)\) error penalty.
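A plot like fig. 2.4 can be generated by sweeping \(h\) and evaluating [eq:2.7] at each value. A minimal sketch, again assuming the exponential growth ODE with illustrative parameter values:

    import math

    alpha, y0, t_end = 0.3, 1.0, 2.0
    for h in [0.1, 0.05, 0.025, 0.0125]:
        N = int(round(t_end / h))
        y, sum_sq = y0, 0.0
        for n in range(N):
            y = y + h * alpha * y                         # forward Euler step
            y_true = y0 * math.exp(alpha * (n + 1) * h)   # analytic y(t_{n+1})
            sum_sq += (y_true - y) ** 2
        e = math.sqrt(sum_sq / N)   # RMS error [eq:2.7]
        print(h, e)                 # halving h roughly halves e, i.e. e ~ O(h)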

    Detour: Local truncation error vs. global solution error

Back in 1.3.1 I showed how the approximations to the derivative were derived from the Taylor series. I also mentioned that by dropping higher-order terms, we were implicitly accepting that our computed solutions could only be approximations to the "mathematically true" solution. That is, the computed solution carried an error, and the major cause of error was due to truncating the Taylor series. This error is sometimes called the "local truncation error" or "LTE" in the ODE literature. We generally characterize the LTE in terms of the stepsize scaling \(O(h^p)\). Generally, the error order \(p\) is the power of \(h\) in the first truncated term of the Taylor series. That is, if the first truncated term is \(O(h)\) then the LTE scales as \(O(h)\).

    Now in the previous section we introduced the RMS error [eq:2.7]. Since the RMS error is a sum over the entire solution, it is a global error – it adds up contributions from all errors present in the entire solution. (It is normalized by \(N\), but nonetheless is sensitive to all errors.) Therefore, this error is sometimes called the "global error" or GE in the ODE literature. Again, the GE is characterized by how it scales with \(h\). All error analyses shown in this booklet are global errors computed using the formula [eq:2.7] and are expressed as scaling plots similar to that shown in fig. 2.4.

The LTE and the GE are conceptually distinct quantities. Therefore, you might ask "what is the relationship between the two?" That is, do they measure the same thing? The answer is "effectively, yes". Although I won’t do it here, it can be proven that the LTE and the GE obey the same scaling relationship for any given method. That is, if the LTE scales as \(O(h^p)\), then the GE also scales as \(O(h^p)\). Accordingly, although I speak loosely about "the error" without quantifying exactly what I mean, the equivalence of the LTE and GE scaling is sufficient to guarantee the term "the error" refers to something which may be quantified, and is a meaningful metric with which to measure the different ODE solvers presented in this booklet.

    Example: a driven system

Now back to solving ODEs. A slightly more complicated example is this linear ODE with a driving term: \[\tag{eq:2.8} \frac{d y}{d t} = A \sin(\omega t) - \alpha y\] Since the equation is linear, the general solution is the sum of two pieces: a homogeneous term and an inhomogeneous term (sometimes called the "particular solution"). We’ll consider the case of positive \(\alpha\), in which case the homogeneous solution decays away as \(e^{-\alpha t}\) for \(t \rightarrow \infty\), so the long-term behavior of the system is determined by the forcing term \(A \sin(\omega t)\). The analytic solution to the equation may be found after some algebra, \[\tag{eq:2.9} y(t) = C e^{-\alpha t} + B \cos(\omega t + \phi)\] We adjust the constants \(B\), \(C\), and \(\phi\) to match the initial conditions as well as ensure the particular solution obeys [eq:2.8] for \(t \rightarrow \infty\). To make the plot shown below in fig. 2.5 we choose \(y(0) = 1\). With this initial condition, an easy exercise in algebra will show the fitting constants are \[\begin{aligned} \nonumber \phi &= \arctan \left( \frac{\alpha}{\omega} \right) \\ \nonumber B &= \frac{-A}{\omega \cos(\phi) + \alpha \sin(\phi)} \\ \nonumber C &= 1 - B \cos(\phi)\end{aligned}\]

Discretized for forward Euler, the iteration is \[\tag{eq:2.10} y_{n+1} = y_n + h(A \sin(\omega t_n) - \alpha y_n)\] Fig. 2.5 shows the computed solution to [eq:2.8] over 15 seconds assuming \(A = 1.5\), \(\omega = 5\) and \(\alpha = 0.5\). The solution clearly shows an initial transient which decays away, leaving a solution which behaves like a sine wave as \(t \rightarrow \infty\).
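A minimal sketch of the iteration [eq:2.10], using the parameter values quoted above:

    import math

    A, omega, alpha, h = 1.5, 5.0, 0.5, 0.01
    t, y = 0.0, 1.0                  # initial condition y(0) = 1
    for n in range(int(15.0 / h)):   # integrate for 15 seconds
        y = y + h * (A * math.sin(omega * t) - alpha * y)   # [eq:2.10]
        t = t + h
    print(t, y)   # after the transient decays, y oscillates like a sine wave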


Figure 2.5: The solution to [eq:2.8] computed using the forward Euler algorithm. The computed function \(y_n\) is plotted by the blue circles and the analytic solution is plotted using red lines. Although the two solutions are close, they don’t exactly match, and the forward Euler solution seems to diverge from the analytic solution with increasing \(t\).

    Example: the logistic equation

Here is an example of a nonlinear ODE: the logistic equation. This ODE can be regarded as a very simple model of population dynamics. Here’s the ODE: \[\tag{eq:2.11} \frac{d y}{d t} = \left( 1 - \frac{y}{Y_m} \right) y\] If you think of \(y\) as the population of some animal species, then when \(y \ll Y_m\) the population can grow exponentially. But as the population \(y \rightarrow Y_m\), the growth rate of the population saturates. Think of rabbits living in a carrot field – when there are very few rabbits (and no predators) then the rabbits have lots of carrots to eat and they may reproduce freely, leading to exponential growth. However, if too many rabbits live in the field, then they eat most of the carrots and can’t reproduce freely anymore since food is scarce. Consequently, the rabbit population tends towards a fixed limit given by \(y = Y_m\).

This equation has the following analytic solution: \[\tag{eq:2.12} y(t) = \frac{y_0 e^t}{ 1 + \frac{y_0}{Y_m} \left( e^t - 1 \right) }\] where \(y_0\) is used to match the initial condition: setting \(t=0\) shows \(y_0 = y(t=0)\). Also note that when \(y = Y_m\) the growth rate (derivative) goes to zero, consistent with the intuition that the rabbit population should saturate when the rabbits eat most of the carrots. Just enough carrots remain to carry the rabbit population, but the population can’t grow any more.

Discretized for forward Euler, the iteration to implement for the logistic equation is \[\tag{eq:2.13} y_{n+1} = y_n + h \left( 1 - \frac{y_n}{Y_m} \right) y_n\] This iteration was implemented in Matlab and then run for three different values of \(Y_m\). The results are shown in fig. 2.6. Forward Euler reproduces the saturation behavior of the logistic equation quite well – after around \(t = 10\) the forward Euler solution matches the analytic solution. However, forward Euler does a worse job reproducing the period of exponential growth around \(t = 5\) – forward Euler lags the analytic solution. This behavior is similar to that observed in the exponential growth ODE shown in fig. 2.3.
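A minimal sketch of the iteration [eq:2.13]. The values of \(Y_m\), \(h\), and \(y(0)\) are chosen for illustration:

    Ym, h = 10.0, 0.1
    y = 0.1                              # initial population y(0)
    for n in range(int(15.0 / h)):
        y = y + h * (1.0 - y / Ym) * y   # [eq:2.13]
    print(y)   # approaches the saturation value Y_m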


Figure 2.6: The solution to [eq:2.11] computed using the forward Euler algorithm for three different \(Y_m\) values. The computed function \(y_n\) is plotted using open circles and the analytic solution is plotted using red lines. Note that the computed solution lags (is behind) the analytic solution – forward Euler’s solution carries an error. One can imagine the rabbit population starting from a low base and growing until it hits a saturation point. The saturation point is determined by the value of \(Y_m\).

    Extending Forward Euler to higher order

So far, we have dealt with scalar, first-order ODEs like that shown in Equation [eq:2.1]. Many ODEs encountered in practice are higher order. For example, consider the most famous second order ODE of all: Newton’s second law, \(F = ma\). It says that imposing a force on a body will cause it to accelerate in the direction of the applied force. Since acceleration is the second derivative of the body’s position, \(a = d^2 y/ d t^2\), this implies we can write Newton’s second law as an ODE, \[\tag{eq:2.14} \frac{d^2 y}{d t^2} = F(t, y', y)\] where the mass \(m\) has been absorbed into the function \(F\). Given the importance of Newton’s law, we want to be able to solve ODEs like [eq:2.14] numerically. How can we extend the forward Euler method to second (and higher) order ODEs?

    The way to solve [eq:2.14] numerically is to rearrange it to become two first order ODEs as follows:

    1. Define two variables, \(y_1 = dy/dt\) and \(y_2 = y\).
    2. Now note that [eq:2.14] may be written as \[\nonumber \frac{d y_1}{d t} = \frac{d^2 y}{d t^2} = F(t, y', y)\]
    3. Next, by definition we have \[\nonumber \frac{d y_2}{d t} = \frac{d y}{d t} = y_1\]
    4. Therefore, we have broken [eq:2.14] up into two coupled, first-order ODEs, \[\begin{aligned} \nonumber \frac{d y_1}{d t} &= F(t, y_1, y_2) \\ \nonumber \frac{d y_2}{d t} &= y_1 \end{aligned}\]
5. This system of two first order ODEs may be solved with forward Euler using the same methods as in 2.1.

    This technique of breaking a second order ODE into two first order ODEs may be easily generalized. As a rule, if you start with an \(n\)th order ODE you may break it down into \(n\) first order ODEs. Then the system of first order ODEs may be solved using forward Euler or any of the more advanced methods we will encounter later.

It’s not enough to simply specify the equations to solve. We also need to specify initial conditions (ICs) for this system – one IC for each equation. In general, an \(n\)th order ODE will be accompanied by \(n\) initial conditions, usually one for the function \(y(t=0)\), one for the first derivative \(y'(t=0)\), one for the second derivative \(y''(t=0)\), and so on. Therefore, a second order ODE such as [eq:2.14] will require two ICs. Using the definitions of \(y_1\) and \(y_2\) above, it’s easy to map the ICs for the second order equation onto the ICs for the system as \[\begin{aligned} \nonumber y_1(0) &= y'(0) \\ \nonumber y_2(0) &= y(0)\end{aligned}\]
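The scalar forward Euler solver carries over to systems essentially unchanged: \(y_n\) becomes a vector and \(f\) returns a vector. A minimal numpy sketch (the function name is illustrative):

    import numpy as np

    def forward_euler_sys(f, y0, h, t_end):
        """Forward Euler for a system dy/dt = f(t, y); y0 is a 1-D array."""
        n_steps = int(t_end / h)
        y = np.zeros((n_steps + 1, len(y0)))
        y[0] = y0
        for n in range(n_steps):
            y[n + 1] = y[n] + h * f(n * h, y[n])   # vector Euler step
        return y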

    Example: the simple harmonic oscillator

To make these ideas concrete we will examine the so-called simple harmonic oscillator (SHO). This is a simple model of oscillatory motion beloved by physicists, and encountered in many real-world applications. One situation modeled by the SHO is a mass sliding on a (frictionless) floor and tethered to a wall by a spring. See fig. 2.7. The force impressed by the spring on the mass is modeled using Hooke’s law, which says the spring tries to restore the mass to an equilibrium position with a force varying linearly with position. Taking \(y\) as the position of the mass, Hooke’s law says \(F = -k (y-y_r)\), where \(k\) is a constant measuring the stiffness of the spring, and \(y_r\) is the rest position of the mass. From now on we will set \(y_r = 0\) since it is a constant and doesn’t affect the math we want to explore.

    Combining Hooke’s law and Newton’s second law, we can write down a second order ODE describing the motion of the mass under the influence of the spring, \[\tag{eq:2.15} \frac{d^2 y}{d t^2} = -\frac{k}{m} y\] You probably already know the solution(s) to this equation. They are sines and cosines, \[\tag{eq:2.16} y(t) = A \cos(\omega t) + B \sin(\omega t)\]


Figure 2.7: The simple harmonic oscillator (SHO). The mass \(m\) slides frictionlessly on the floor. It is attached to a wall through a spring. The spring constant (measure of spring stiffness) is \(k\). When the mass is displaced from its rest position \(y_r\) the spring tends to push or pull the mass back to the rest position. However, the mass overshoots the rest position due to its inertia. Therefore, the mass will execute a sinusoidal oscillation around \(y_r\), and if the system is lossless the oscillation will never stop. The oscillatory motion is governed by [eq:2.15], which is known as the simple harmonic oscillator equation.

    where we have defined \(\omega = \sqrt{k/m}\) for convenience. \(\omega\) is generally referred to as the oscillation frequency. The coefficients \(A\) and \(B\) are determined by the initial conditions.

How to solve [eq:2.15] numerically? We’ll use our prescription from section 2.6 and break [eq:2.15] into two first order ODEs. We define \(u = dy/dt\) and \(v = y\). (I use \(u\) and \(v\) instead of \(y_1\) and \(y_2\) for notational convenience in what comes next.) Now we can decompose [eq:2.15] into \[\tag{eq:2.17} \begin{aligned} \frac{d u}{d t} &= -\frac{k}{m} v \\ \frac{d v}{d t} &= u \end{aligned}\] Now discretize using the forward difference approximation to the first derivative: \[\begin{aligned} \nonumber \frac{u_{n+1} - u_{n}}{h} &= -\frac{k}{m} v_n \\ \nonumber \frac{v_{n+1} - v_{n}}{h} &= u_n\end{aligned}\] Next rearrange these equations to place the future on the LHS and the present on the RHS: \[\begin{aligned} u_{n+1} &= u_n -h \frac{k}{m} v_n \\ v_{n+1} &= v_n + h u_n \end{aligned} \tag{eq:2.18}\] Now we have a pair of equations which we can step forward in time! We start with known initial conditions \(u_0\) and \(v_0\) and use the system to compute the next values in time \(u_1\) and \(v_1\), then use those to get the next values, and so on.

This algorithm is easy to implement using e.g. Matlab. The results of running a Matlab program implementing the iteration [eq:2.18] are shown in fig. 2.8. Results for two different step sizes (\(h = 0.01\) and \(h = 0.001\)) are shown for comparison. The initial conditions \(u_0 = 1\) and \(v_0 = 0\) were used in both cases. Since \(u = dy/dt\) and \(v = y\), these ICs correspond to a mass whose starting velocity is 1 and whose starting position is 0 – similar to giving the resting mass a sharp kick at \(t = 0\).
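A minimal sketch of the iteration [eq:2.18], reusing the vector solver sketched in the previous section and taking \(k/m = 1\) (so \(\omega_0 = 1\)) for illustration:

    import numpy as np

    k_over_m = 1.0                 # spring constant over mass, so omega_0 = 1

    def f_sho(t, y):
        u, v = y                   # u = dy/dt (velocity), v = y (position)
        return np.array([-k_over_m * v, u])   # the system [eq:2.17]

    y = forward_euler_sys(f_sho, np.array([1.0, 0.0]), h=0.01, t_end=50.0)
    # The true position amplitude is 1 here, but the computed maximum
    # exceeds it (~1.28): the forward Euler solution grows in time.
    print(np.max(np.abs(y[:, 1])))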


Figure 2.8: The simple harmonic oscillator (SHO) solved using forward Euler. Note that the sinusoidal amplitude increases exponentially with time. This is contrary to the known solution involving pure sines and cosines. Why does this happen? This effect has to do with the concept of stability discussed later in 2.9.

    Things to note in the figure:

    • Both plots show the mass oscillates back and forth in time. This is to be expected, since the solutions to [eq:2.15] are sines and cosines.
• However, we see that both solutions grow in time. The \(h = 0.01\) solution grows faster than the \(h = 0.001\) one, but close inspection of the plot shows they both grow. This is unexpected, and is incorrect – the ODE [eq:2.15] has sine and cosine solutions which don’t grow. What is wrong? The answer to this question involves studying the stability of the forward Euler method.

    Propagator matrix for SHO

Before we talk about stability, it is convenient to recast the iteration [eq:2.18] into a different format. Because [eq:2.15] is linear, we can take the forward Euler system one step further. Note that the following doesn’t work with all ODEs – only with linear ODEs! With a little rearranging we can write [eq:2.18] as a matrix-vector system, \[\begin{aligned} \begin{pmatrix} u_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & - h k / m \\ h & 1 \end{pmatrix} \begin{pmatrix} u_{n} \\ v_{n} \end{pmatrix}\end{aligned} \tag{eq:2.19}\] or, written compactly, \[\nonumber y_{n+1} = A y_n\] where \[\nonumber y_n = \begin{pmatrix} u_{n} \\ v_{n} \end{pmatrix}\] and \(A\) is the matrix shown in [eq:2.19]. \(A\) is commonly called the "propagator matrix" since repeated multiplication by \(A\) propagates (steps) the solution \(y_n\) forward in time.

Pseudocode implementing the forward Euler algorithm to solve the simple harmonic oscillator is shown in Algorithm 2.

    __________________________________________________________________________________________________________________________________

    Algorithm 2 Forward Euler method for SHO

    __________________________________________________________________________________________________________________________________

Inputs: initial condition \(y_0 = [u_0,v_0]^T\), step size \(h\), end time \(t_{end}\).

    Initialize: \(t_0 = 0, y_0\).

    for \(n=0: t_{end}/h\) do

    \({y_{n+1}} \leftarrow {A} {y_n}\)

    end for

    return \({y_n}\).

    __________________________________________________________________________________________________________________________________
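A minimal numpy sketch of Algorithm 2, with parameter values chosen for illustration:

    import numpy as np

    h, k_over_m = 0.01, 1.0
    A = np.array([[1.0, -h * k_over_m],
                  [h,    1.0]])        # the propagator matrix of [eq:2.19]
    y = np.array([1.0, 0.0])           # initial condition [u0, v0]
    for n in range(int(50.0 / h)):
        y = A @ y                      # one step: y_{n+1} = A y_n
    print(y)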

The results of a run using this propagator method are shown in fig. 2.9. They are the same as those obtained in section 2.7.


    Figure 2.9: The simple harmonic oscillator (SHO) integrated using the propagator iteration. Use of the propagator matrix only works when the ODE under consideration is linear and each step may be turned into a matrix-vector multiplication.

    Stability

In deriving the expressions for finite difference approximations we found that the process of discretization produced a difference (error) between the exact (analytical) derivative and the finite difference approximation. Now we find a second pitfall in numerically solving ODEs: growing numerical solutions when we know the exact (analytic) solution should not grow in time. What is happening?

    Stability of forward Euler for the exponential growth ODE

To understand the observed growth consider the exponential growth ODE: \[\tag{eq:2.20} \frac{dy}{dt} = a y\] Note that this ODE is linear. We know this equation has an exact, mathematically-true solution \(y_t(t) = y_0 e^{a t}\), where we denote the exact solution as \(y_t\). Now ask the question: What happens if the exact solution is perturbed by some error \(e\)? That is, take \(y = y_t(t) + e(t)\). In this case linearity says we can write \[\nonumber \frac{dy_t}{dt} + \frac{de}{dt} = a (y_t + e)\] This equation separates into two pieces, \[\nonumber \frac{dy_t}{dt} = a y_t\] which is satisfied identically because \(y_t\) is the exact solution by definition, and \[\nonumber \frac{de}{dt} = a e\] which governs the behavior of the error term \(e\). Now discretize this equation using the forward Euler method to get \[\tag{eq:2.21} e_{n+1} = e_n + h a e_n = (1 + a h)e_n\] where \(e_n\) is the error at step \(n\). Assuming we start with a small perturbation \(e_0\) at time \(t=0\), and just watch the evolution of this one perturbation in time, we find [eq:2.21] behaves as \[\tag{eq:2.22} e_{n} = (1 + a h)^n e_0\] Clearly, the perturbation grows exponentially as \(n \rightarrow \infty\) if \(\lvert 1 + a h \rvert > 1\) but decays to zero as long as \[\tag{eq:2.23} \lvert 1 + a h \rvert < 1\] Equation [eq:2.23] is the forward Euler stability condition for the exponential growth ODE. A solver applied to an ODE is stable if an initial error or perturbation either maintains the same magnitude or dies out in time. But if an initial error grows exponentially, then the solver is unstable. Note that the property of stability depends upon both the solver and the ODE itself – a solver which is stable for one ODE might be unstable for a different one, and different solvers may be stable or unstable for the same ODE.

So now the question becomes, when is forward Euler stable for the simple ODE [eq:2.20]? For forward Euler to make sense, the step \(h\) must be a small, positive number. However, we have placed no constraints on \(a\) – it can be positive, negative, or even complex. Therefore, to understand when forward Euler applied to [eq:2.20] is stable it makes sense to plot the region of stability in the complex plane of \(a h\). This is shown in fig. 2.10. The region of stability is the grey circle to the left of the origin. Values inside the circle represent \(a h\) values which do not blow up – they are stable because they satisfy the stability condition \(\lvert 1 + a h \rvert < 1\).
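For real, negative \(a\), [eq:2.23] reduces to \(h < 2/\lvert a \rvert\). A quick numerical check, with values chosen for illustration:

    a = -10.0                       # decay ODE dy/dt = a y; stable for h < 0.2
    for h in [0.05, 0.15, 0.25]:
        g = abs(1 + a * h)          # amplification factor per step [eq:2.21]
        print(h, g, "stable" if g < 1 else "unstable")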


    Figure 2.10: Forward Euler’s stability region for the simple linear ODE [eq:2.20]. The stability region for this system is a circle of radius 1 in the complex plane centered at \(a h = -1\).

    Stability of forward Euler for the simple harmonic oscillator

Now we consider the stability of forward Euler applied to the simple harmonic oscillator. Recall the forward Euler iteration for this equation, \[\begin{aligned} \begin{pmatrix} u_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & -h k / m \\ h & 1 \end{pmatrix} \begin{pmatrix} u_{n} \\ v_{n} \end{pmatrix}\end{aligned} \tag{eq:2.24}\] We know the "mathematically-true" solution to the SHO is composed of sines and cosines, or equivalently of complex exponentials, \(e^{i \omega t}\). We’ll choose one solution, \[\begin{aligned} \nonumber \begin{pmatrix} u_{n} \\ v_{n} \end{pmatrix} = \begin{pmatrix} i \omega e^{i \omega t} \\ e^{i \omega t} \end{pmatrix}\end{aligned}\] (recall \(u = dy/dt\) and \(v = y\), so taking \(y = e^{i \omega t}\) puts the factor \(i \omega\) in the \(u\) component) and examine its time evolution. Instead of considering an additive perturbation, we instead track errors using a multiplicative "error factor" \(e_n\). That means we assume the solution \[\begin{aligned} \begin{pmatrix} u_{n} \\ v_{n} \end{pmatrix} = e_n \begin{pmatrix} i \omega e^{i \omega t} \\ e^{i \omega t} \end{pmatrix}\end{aligned} \tag{eq:2.25}\] and insert it into [eq:2.24] to see how \(e_n\) behaves in time. Clearly, if \(e_n = 1\) for all time, then [eq:2.25] solves [eq:2.24] exactly. But if \(e_n \ne 1\) then the forward Euler iteration introduces errors to the solution.

Inserting [eq:2.25] into [eq:2.24] we get \[\nonumber e_{n+1} \begin{pmatrix} i \omega e^{i \omega (t+h)} \\ e^{i \omega (t+h)} \end{pmatrix} = e_n \begin{pmatrix} 1 & -\frac{h k}{m} \\ h & 1 \end{pmatrix} \begin{pmatrix} i \omega e^{i \omega t} \\ e^{i \omega t} \end{pmatrix}\] Now divide out the common factor \(e^{i \omega t}\) and define the so-called "growth factor", \(g_n = e_{n+1}/e_n\), to get \[\nonumber g_n e^{i \omega h} \begin{pmatrix} i \omega \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & -\frac{h k}{m} \\ h & 1 \end{pmatrix} \begin{pmatrix} i \omega \\ 1 \end{pmatrix}\] Now we see that this expression has the form of an eigenvalue problem, \(\lambda {x} = {A} {x}\), where the scalar \(g_n e^{i \omega h}\) plays the role of the eigenvalue. We can solve for \(g_n e^{i \omega h}\) the usual way by subtracting the LHS from the RHS, \[\nonumber \begin{pmatrix} 1 - g_n e^{i \omega h} & -\frac{h k}{m} \\ h & 1 - g_n e^{i \omega h} \end{pmatrix} \begin{pmatrix} i \omega \\ 1 \end{pmatrix} = 0\] and observing that for this to hold, we need values of \(g_n e^{i \omega h}\) which make the matrix singular. Therefore, we require \[\nonumber \begin{vmatrix} 1 - g_n e^{i \omega h} & -\frac{h k}{m} \\ h & 1 - g_n e^{i \omega h} \end{vmatrix} = 0\] or \[\nonumber (1 - g_n e^{i \omega h})^2 + h^2 \frac{k}{m} = 0\] Next, again recall the definition of the "natural frequency" of the SHO, \(\omega_0^2 = k/m\). Then rearrange to solve for \(g_n\) and get \[\nonumber (1 - g_n e^{i \omega h})^2 = -h^2 \omega_0^2\] or \[\nonumber g_n e^{i \omega h} = 1 \pm i h \omega_0\] Now we ask, under which conditions does the error factor \(e_n\) remain constant, or at least not grow or shrink? \(e_n\) remains constant as long as \(|g_n| = 1\). Since \(|e^{i \omega h}| = 1\), taking magnitudes of both sides gives \[\tag{eq:2.26} |g_n| = \left| 1 \pm i h \omega_0 \right|\] However, \(|g_n| = 1\) clearly never happens – the RHS of [eq:2.26] is always greater than one. This is evident by inspecting the RHS in the complex plane as shown in fig. 2.11. Therefore, the solution computed by forward Euler always grows. For increasingly small step size \(h\) the growth factor tends to 1, but never reaches 1 until \(h\) is exactly zero – a situation which makes the forward Euler method useless since you can’t step forward in time if \(h = 0\). This means the forward Euler method is never useful for solving the simple harmonic oscillator problem – we need to find better methods.
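The same conclusion can be checked numerically from the eigenvalues of the propagator matrix in [eq:2.24]; a minimal sketch, with \(k/m = 1\) chosen for illustration:

    import numpy as np

    k_over_m = 1.0                  # omega_0 = 1
    for h in [0.1, 0.01, 0.001]:
        A = np.array([[1.0, -h * k_over_m], [h, 1.0]])
        # eigenvalues are 1 +/- i*h*omega_0, magnitude sqrt(1 + h^2) > 1
        print(h, np.abs(np.linalg.eigvals(A)))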


Figure 2.11: The RHS of [eq:2.26]. The real part is always 1, while the imaginary part can take any value. However, as long as the imaginary part is not identically zero, the magnitude of \(1 \pm i h \omega_0\) is always greater than 1. Therefore, the error factor \(e_n\) will always grow, and forward Euler applied to the SHO is unstable for any non-zero stepsize \(h\).

    Chapter summary

    Here are the important points made in this chapter:

    • The forward Euler method is derived from the simple forward difference expression for the derivative, \(y' = (y_{n+1} - y_n)/h\).
    • The forward Euler method is an iterative method which starts at an initial point and walks the solution forward using the iteration \(y_{n+1} = y_n + h f(t_n, y_n)\).
    • Since the future is computed directly using values of \(t_n\) and \(y_n\) at the present, forward Euler is an explicit method.
• The forward Euler method is defined for first order ODEs. It works for both one- and many-variable ODEs. To solve an \(N\)th order ODE one breaks the ODE into \(N\) first order ODEs.
    • The RMS error expected when using the forward Euler method scales as \(e \sim O(h)\).
    • Solver stability is not guaranteed for all step sizes \(h\). The stability criterion depends upon the exact ODE under consideration, but in general smaller step sizes are required for stability when using explicit methods.

    This page titled 1.2: Forward Euler method is shared under a CC BY-SA 3.0 license and was authored, remixed, and/or curated by Stuart Brorson.
