3.1: Euler's Method


    If an initial value problem

    \[\label{eq:3.1.1} y'=f(x,y),\quad y(x_0)=y_0 \]

    cannot be solved analytically, it is necessary to resort to numerical methods to obtain useful approximations to a solution of Equation \ref{eq:3.1.1}. We will consider such methods in this chapter.

    We’re interested in computing approximate values of the solution of Equation \ref{eq:3.1.1} at equally spaced points \(x_0\), \(x_1\), …, \(x_n=b\) in an interval \([x_0,b]\). Thus,

    \[x_i=x_0+ih,\quad i=0,1, \dots,n, \nonumber \]

    where

    \[h={b-x_0\over n}.\nonumber \]

    We will denote the approximate values of the solution at these points by \(y_0\), \(y_1\), …, \(y_n\); thus, \(y_i\) is an approximation to \(y(x_i)\). We will call

    \[e_i=y(x_i)-y_i \nonumber \]

    the error at the \(i\)th step. Because of the initial condition \(y(x_0)=y_0\), we will always have \(e_0=0\). However, in general \(e_i\ne0\) if \(i>0\).

    We encounter two sources of error in applying a numerical method to solve an initial value problem:

    • The formulas defining the method are based on some sort of approximation. Errors due to the inaccuracy of the approximation are called truncation errors.
    • Computers do arithmetic with a fixed number of digits, and therefore make errors in evaluating the formulas defining the numerical methods. Errors due to the computer’s inability to do exact arithmetic are called roundoff errors.

    Since a careful analysis of roundoff error is beyond the scope of this book, we will consider only truncation errors.

    Euler’s Method

    The simplest numerical method for solving Equation \ref{eq:3.1.1} is Euler’s method. This method is so crude that it is seldom used in practice; however, its simplicity makes it useful for illustrative purposes. Euler’s method is based on the assumption that the tangent line to the integral curve of Equation \ref{eq:3.1.1} at \((x_i,y(x_i))\) approximates the integral curve over the interval \([x_i,x_{i+1}]\). Since the slope of the integral curve of Equation \ref{eq:3.1.1} at \((x_i,y(x_i))\) is \(y'(x_i)=f(x_i,y(x_i))\), the equation of the tangent line to the integral curve at \((x_i,y(x_i))\) is

    \[\label{eq:3.1.2} y=y(x_i)+f(x_i,y(x_i))(x-x_i). \]

    Setting \(x=x_{i+1}=x_i+h\) in Equation \ref{eq:3.1.2} yields

    \[\label{eq:3.1.3} y_{i+1}=y(x_i)+hf(x_i,y(x_i)) \]

    as an approximation to \(y(x_{i+1})\). Since \(y(x_0)=y_0\) is known, we can use Equation \ref{eq:3.1.3} with \(i=0\) to compute

    \[y_1=y_0+hf(x_0,y_0).\nonumber \]

    However, setting \(i=1\) in Equation \ref{eq:3.1.3} yields

    \[y_2=y(x_1)+hf(x_1,y(x_1)),\nonumber \]

    which isn’t useful, since we don’t know \(y(x_1)\). Therefore we replace \(y(x_1)\) by its approximate value \(y_1\) and redefine

    \[y_2=y_1+hf(x_1,y_1).\nonumber \]

    Having computed \(y_2\), we can compute

    \[y_3=y_2+hf(x_2,y_2).\nonumber \]

    In general, Euler’s method starts with the known value \(y(x_0)=y_0\) and computes \(y_1\), \(y_2\), …, \(y_n\) successively with the formula

    \[\label{eq:3.1.4} y_{i+1}=y_i+hf(x_i,y_i),\quad 0\le i\le n-1. \]
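
    In code, Equation \ref{eq:3.1.4} amounts to a single loop. Here is a minimal sketch in Python (the function name euler, its arguments, and the use of NumPy are illustrative choices, not prescribed by the text):

```python
import numpy as np

def euler(f, x0, y0, b, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0, at the equally
    spaced points x_i = x0 + i*h, h = (b - x0)/n, using Equation (3.1.4)."""
    h = (b - x0) / n
    x = x0 + h * np.arange(n + 1)               # x_0, x_1, ..., x_n
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y[i + 1] = y[i] + h * f(x[i], y[i])     # y_{i+1} = y_i + h f(x_i, y_i)
    return x, y
```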

    The next example illustrates the computational procedure indicated in Euler’s method.

    Example 3.1.1

    Use Euler’s method with \(h=0.1\) to find approximate values for the solution of the initial value problem

    \[\label{eq:3.1.5} y'+2y=x^3e^{-2x},\quad y(0)=1 \]

    at \(x=0.1,0.2,0.3\).

    Solution

    We rewrite Equation \ref{eq:3.1.5} as

    \[y'=-2y+x^3e^{-2x},\quad y(0)=1, \nonumber \]

    which is of the form Equation \ref{eq:3.1.1}, with

    \[f(x,y)=-2y+x^3e^{-2x}, \, x_0=0, \, \text{and} \, y_0=1. \nonumber \]

    Euler’s method yields

    \[\begin{align*} y_1 &= y_0+hf(x_0,y_0) \\[4pt] &= 1+(0.1)f(0,1)=1+(0.1)(-2)=0.8,\\[4pt] y_2 & = y_1+hf(x_1,y_1)\\[4pt] & = 0.8+(0.1)f(0.1,0.8)=0.8+(0.1)\left(-2(0.8)+(0.1)^3e^{-0.2}\right)= 0.640081873,\\[4pt] y_3 & = y_2+hf(x_2,y_2)\\[4pt] & = 0.640081873+(0.1)\left(-2(0.640081873)+(0.2)^3e^{-0.4}\right)= 0.512601754. \end{align*} \]

    We’ve written the details of these computations to ensure that you understand the procedure. However, in the rest of the examples as well as the exercises in this chapter, we will assume that you can use a programmable calculator or a computer to carry out the necessary computations.
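
    For readers carrying out such computations on a computer, one possible sketch (in Python, an illustrative choice of language) that reproduces the three steps of Example 3.1.1 is:

```python
import numpy as np

def f(x, y):
    return -2 * y + x**3 * np.exp(-2 * x)   # f(x, y) from Example 3.1.1

h, x, y = 0.1, 0.0, 1.0                     # step size and initial data
for i in range(3):
    y = y + h * f(x, y)                     # Euler step, Equation (3.1.4)
    x = x + h
    print(f"y({x:.1f}) is approximately {y:.9f}")
# Prints 0.800000000, 0.640081873, 0.512601754, matching the values above.
```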

    Examples Illustrating The Error in Euler’s Method

    Example 3.1.2

    Use Euler’s method with step sizes \(h=0.1\), \(h=0.05\), and \(h=0.025\) to find approximate values of the solution of the initial value problem

    \[y'+2y=x^3e^{-2x},\quad y(0)=1\nonumber \]

    at \(x=0\), \(0.1\), \(0.2\), \(0.3\), …, \(1.0\). Compare these approximate values with the values of the exact solution

    \[\label{eq:3.1.6} y={e^{-2x}\over4}(x^4+4), \]

    which can be obtained by the method of Section 2.1. (Verify.)

    Table 3.1.1 shows the values of the exact solution Equation \ref{eq:3.1.6} at the specified points, and the approximate values of the solution at these points obtained by Euler’s method with step sizes \(h=0.1\), \(h=0.05\), and \(h=0.025\). In examining this table, keep in mind that the approximate values in the column corresponding to \(h=0.05\) are actually the results of 20 steps with Euler’s method. We haven’t listed the estimates of the solution obtained for \(x=0.05\), \(0.15\), …, since there’s nothing to compare them with in the column corresponding to \(h=0.1\). Similarly, the approximate values in the column corresponding to \(h=0.025\) are actually the results of 40 steps with Euler’s method.

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\) Exact
    0.0 1.000000000 1.000000000 1.000000000 1.000000000
    0.1 0.800000000 0.810005655 0.814518349 0.818751221
    0.2 0.640081873 0.656266437 0.663635953 0.670588174
    0.3 0.512601754 0.532290981 0.541339495 0.549922980
    0.4 0.411563195 0.432887056 0.442774766 0.452204669
    0.5 0.332126261 0.353785015 0.363915597 0.373627557
    0.6 0.270299502 0.291404256 0.301359885 0.310952904
    0.7 0.222745397 0.242707257 0.252202935 0.261398947
    0.8 0.186654593 0.205105754 0.213956311 0.222570721
    0.9 0.159660776 0.176396883 0.184492463 0.192412038
    1.0 0.139778910 0.154715925 0.162003293 0.169169104
    Table 3.1.1 : Numerical solution of \(y'+2y=x^3e^{-2x},\ y(0)=1\), by Euler’s method.

    You can see from Table 3.1.1 that decreasing the step size improves the accuracy of Euler’s method. For example,

    \[y_{exact}(1)-y_{approx}(1)\approx \left\{\begin{array}{l} 0.0293\mbox{ with }h=0.1,\\[4pt] 0.0144\mbox{ with }h=0.05,\\[4pt] 0.0071\mbox{ with }h=0.025. \end{array}\right.\nonumber \]

    Based on this scanty evidence, you might guess that the error in approximating the exact solution at a fixed value of \(x\) by Euler’s method is roughly halved when the step size is halved. You can find more evidence to support this conjecture by examining Table 3.1.2 , which lists the approximate values of \(y_{exact}-y_{approx}\) at \(x=0.1\), \(0.2\), …, \(1.0\).

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\)
    0.1 0.0187 0.0087 0.0042
    0.2 0.0305 0.0143 0.0069
    0.3 0.0373 0.0176 0.0085
    0.4 0.0406 0.0193 0.0094
    0.5 0.0415 0.0198 0.0097
    0.6 0.0406 0.0195 0.0095
    0.7 0.0386 0.0186 0.0091
    0.8 0.0359 0.0174 0.0086
    0.9 0.0327 0.0160 0.0079
    1.0 0.0293 0.0144 0.0071
    Table 3.1.2 : Errors in approximate solutions of \(y'+2y=x^3e^{-2x},\ y(0)=1\), obtained by Euler’s method.
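
    A short script along the following lines (again an illustrative Python sketch) reproduces the errors in Table 3.1.2 , up to rounding in the last displayed digit, by subtracting the Euler approximations from the exact solution Equation \ref{eq:3.1.6}:

```python
import numpy as np

def f(x, y):
    return -2 * y + x**3 * np.exp(-2 * x)

def exact(x):
    return np.exp(-2 * x) * (x**4 + 4) / 4      # Equation (3.1.6)

for h, n in ((0.1, 10), (0.05, 20), (0.025, 40)):
    y, errors = 1.0, []
    for i in range(1, n + 1):
        y += h * f((i - 1) * h, y)              # Euler step starting at x_{i-1}
        if i % (n // 10) == 0:                  # record at x = 0.1, 0.2, ..., 1.0
            errors.append(exact(i * h) - y)
    print(f"h = {h}:", " ".join(f"{e:.4f}" for e in errors))
# Each error is roughly halved when the step size is halved.
```
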
    Example 3.1.3

    Tables 3.1.3 and 3.1.4 show analogous results for the nonlinear initial value problem

    \[\label{eq:3.1.7} y'=-2y^2+xy+x^2,\ y(0)=1, \]

    except in this case we cannot solve Equation \ref{eq:3.1.7} exactly. The results in the “Exact” column were obtained by using a more accurate numerical method known as the Runge-Kutta method with a small step size. They are exact to eight decimal places.

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\) Exact
    0.0 1.000000000 1.000000000 1.000000000 1.000000000
    0.1 0.800000000 0.821375000 0.829977007 0.837584494
    0.2 0.681000000 0.707795377 0.719226253 0.729641890
    0.3 0.605867800 0.633776590 0.646115227 0.657580377
    0.4 0.559628676 0.587454526 0.600045701 0.611901791
    0.5 0.535376972 0.562906169 0.575556391 0.587575491
    0.6 0.529820120 0.557143535 0.569824171 0.581942225
    0.7 0.541467455 0.568716935 0.581435423 0.593629526
    0.8 0.569732776 0.596951988 0.609684903 0.621907458
    0.9 0.614392311 0.641457729 0.654110862 0.666250842
    1.0 0.675192037 0.701764495 0.714151626 0.726015790
    Table 3.1.3 : Numerical solution of \(y'=-2y^2+xy+x^2,\ y(0)=1\), by Euler’s method.

    Since we think comparison with exact values is important in evaluating the accuracy of the numerical methods that we will be studying in this chapter, we often include a column listing values of the exact solution of the initial value problem, even if the directions in the example or exercise don’t specifically call for it. If quotation marks are included in the heading, the values were obtained by applying the Runge-Kutta method in a way that’s explained in Section 3.3. If quotation marks are not included, the values were obtained from a known formula for the solution. In either case, the values are exact to eight places to the right of the decimal point.

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\)
    0.1 0.0376 0.0162 0.0076
    0.2 0.0486 0.0218 0.0104
    0.3 0.0517 0.0238 0.0115
    0.4 0.0523 0.0244 0.0119
    0.5 0.0522 0.0247 0.0121
    0.6 0.0521 0.0248 0.0121
    0.7 0.0522 0.0249 0.0122
    0.8 0.0522 0.0250 0.0122
    0.9 0.0519 0.0248 0.0121
    1.0 0.0508 0.0243 0.0119
    Table 3.1.4 : Errors in approximate solutions of \(y'=-2y^2+xy+x^2,\ y(0)=1\), obtained by Euler’s method.

    Truncation Error in Euler’s Method

    Consistent with the results indicated in Tables 3.1.1 - 3.1.4 , we will now show that under reasonable assumptions on \(f\) there’s a constant \(K\) such that the error in approximating the solution of the initial value problem

    \[y'=f(x,y),\quad y(x_0)=y_0,\nonumber \]

    at a given point \(b>x_0\) by Euler’s method with step size \(h=(b-x_0)/n\) satisfies the inequality

    \[|y(b)-y_n|\le Kh,\nonumber \]

    where \(K\) is a constant independent of \(n\).

    There are two sources of error (not counting roundoff) in Euler’s method:

    1. The error committed in approximating the integral curve by the tangent line Equation \ref{eq:3.1.2} over the interval \([x_i,x_{i+1}]\).
    2. The error committed in replacing \(y(x_i)\) by \(y_i\) in Equation \ref{eq:3.1.3} and using Equation \ref{eq:3.1.4} rather than Equation \ref{eq:3.1.3} to compute \(y_{i+1}\).

    Euler’s method assumes that \(y_{i+1}\) defined in Equation \ref{eq:3.1.3} is an approximation to \(y(x_{i+1})\). We call the error in this approximation the local truncation error at the \(i\)th step, and denote it by \(T_i\); thus,

    \[\label{eq:3.1.8} T_i=y(x_{i+1})-y(x_i)-hf(x_i,y(x_i)). \]

    We will now use Taylor’s theorem to estimate \(T_i\), assuming for simplicity that \(f\), \(f_x\), and \(f_y\) are continuous and bounded for all \((x,y)\). Then \(y''\) exists and is bounded on \([x_0,b]\). To see this, we differentiate

    \[y'(x)=f(x,y(x))\nonumber \]

    to obtain

    \[\begin{aligned} y''(x) & = f_x(x,y(x))+f_y(x,y(x))y'(x)\\[4pt] & = f_x(x,y(x))+f_y(x,y(x))f(x,y(x)).\end{aligned}\nonumber \]

    Since we assumed that \(f\), \(f_x\) and \(f_y\) are bounded, there’s a constant \(M\) such that

    \[|f_{x}(x,y(x))+f_{y}(x,y(x))y'(x)|\leq M,\quad x_{0}<x<b,\nonumber \]

    which implies that

    \[\label{eq:3.1.9} |y''(x)|\leq M,\quad x_{0}<x<b \]

    Since \(x_{i+1}=x_i+h\), Taylor’s theorem implies that

    \[y(x_{i+1})=y(x_i)+hy'(x_i)+{h^2\over2}y''(\tilde x_i),\nonumber \]

    where \(\tilde x_i\) is some number between \(x_i\) and \(x_{i+1}\). Since \(y'(x_i)=f(x_i,y(x_i))\) this can be written as

    \[y(x_{i+1})=y(x_i)+hf(x_i,y(x_i))+{h^2\over2}y''(\tilde x_i), \nonumber \]

    or, equivalently,

    \[y(x_{i+1})-y(x_i)-hf(x_i,y(x_i))={h^2\over2}y''(\tilde x_i). \nonumber \]

    Comparing this with Equation \ref{eq:3.1.8} shows that

    \[T_i={h^2\over2}y''(\tilde x_i). \nonumber \]

    Recalling Equation \ref{eq:3.1.9}, we can establish the bound

    \[\label{eq:3.1.10} |T_i|\le{Mh^2\over2},\quad 1\le i\le n. \]

    Although it may be difficult to determine the constant \(M\), what is important is that there’s an \(M\) such that Equation \ref{eq:3.1.10} holds. We say that the local truncation error of Euler’s method is of order \(h^2\), which we write as \(O(h^2)\).
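
    As a quick numerical illustration (a sketch only, using the initial value problem of Example 3.1.2 , whose exact solution is Equation \ref{eq:3.1.6}), the one-step error \(T_0\) behaves like a constant multiple of \(h^2\):

```python
import numpy as np

# Local truncation error T_0 = y(x_0 + h) - y_0 - h f(x_0, y_0) for
# y' = -2y + x^3 e^{-2x}, y(0) = 1, whose exact solution is Equation (3.1.6).
def exact(x):
    return np.exp(-2 * x) * (x**4 + 4) / 4

y0, f0 = 1.0, -2.0                          # y(0) and f(0, y(0)) = -2
for h in (0.1, 0.05, 0.025, 0.0125):
    T0 = exact(h) - y0 - h * f0
    print(f"h = {h:<7} T_0 = {T0:.8f}   T_0/h^2 = {T0 / h**2:.4f}")
# T_0/h^2 approaches y''(0)/2 = 2, consistent with T_i = (h^2/2) y''(~x_i).
```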

    Note that the magnitude of the local truncation error in Euler’s method is determined by the second derivative \(y''\) of the solution of the initial value problem. Therefore the local truncation error will be larger where \(|y''|\) is large, or smaller where \(|y''|\) is small.

    Since the local truncation error for Euler’s method is \(O(h^2)\), it is reasonable to expect that halving \(h\) reduces the local truncation error by a factor of 4. This is true, but halving the step size also requires twice as many steps to approximate the solution at a given point. To analyze the overall effect of truncation error in Euler’s method, it is useful to derive an equation relating the errors

    \[e_{i+1}=y(x_{i+1})-y_{i+1}\quad \text{and} \quad e_i=y(x_i)-y_i. \nonumber \]

    To this end, recall that

    \[\label{eq:3.1.11} y(x_{i+1})=y(x_i)+hf(x_i,y(x_i))+T_i \]

    and

    \[\label{eq:3.1.12} y_{i+1}=y_i+hf(x_i,y_i). \]

    Subtracting Equation \ref{eq:3.1.12} from Equation \ref{eq:3.1.11} yields

    \[\label{eq:3.1.13} e_{i+1}=e_i+h\left[f(x_i,y(x_i))-f(x_i,y_i)\right]+T_i. \]

    The last term on the right is the local truncation error at the \(i\)th step. The other terms reflect the way errors made at previous steps affect \(e_{i+1}\). Since \(|T_i|\le Mh^2/2\), we see from Equation \ref{eq:3.1.13} that

    \[\label{eq:3.1.14} |e_{i+1}|\le |e_i|+h|f(x_i,y(x_i))-f(x_i,y_i)|+{Mh^2\over2}. \]

    Since we assumed that \(f_y\) is continuous and bounded, the mean value theorem implies that

    \[f(x_i,y(x_i))-f(x_i,y_i)=f_y(x_i,y_i^*)(y(x_i)-y_i)=f_y(x_i,y_i^*)e_i, \nonumber \]

    where \(y_i^*\) is between \(y_i\) and \(y(x_i)\). Therefore

    \[|f(x_i,y(x_i))-f(x_i,y_i)|\le R|e_i| \nonumber \]

    for some constant \(R\). From this and Equation \ref{eq:3.1.14},

    \[\label{eq:3.1.15} |e_{i+1}|\le (1+Rh)|e_i|+{Mh^2\over2},\quad 0\le i\le n-1. \]

    For convenience, let \(C=1+Rh\). Since \(e_0=y(x_0)-y_0=0\), applying Equation \ref{eq:3.1.15} repeatedly yields

    \[\begin{align} |e_1| & \le {Mh^2\over2}\nonumber\\[4pt] |e_2| & \le C|e_1|+{Mh^2\over2}\le(1+C){Mh^2\over2}\nonumber\\[4pt] |e_3| & \le C|e_2|+{Mh^2\over2}\le(1+C+C^2){Mh^2\over2}\nonumber\\[4pt] & \vdots \nonumber \\[4pt]|e_n| & \le C|e_{n-1}|+{Mh^2\over2}\le(1+C+\cdots+C^{n-1}){Mh^2\over2}.\label{eq:3.1.16} \end{align} \]

    Recalling the formula for the sum of a geometric series, we see that

    \[1+C+\cdots+C^{n-1}={1-C^n\over 1-C}={(1+Rh)^n-1\over Rh} \nonumber \]

    (since \(C=1+Rh\)). From this and Equation \ref{eq:3.1.16},

    \[\label{eq:3.1.17} |y(b)-y_n|=|e_n|\le{(1+Rh)^n-1\over R}{Mh\over2}. \]

    Since Taylor’s theorem implies that

    \[1+Rh<e^{Rh}\nonumber \]

    (verify),

    \[(1+Rh)^{n}<e^{nRh}=e^{R(b-x_{0})}\quad (\text{since }nh=b-x_{0}).\nonumber \]

    This and Equation \ref{eq:3.1.17} imply that

    \[\label{eq:3.1.18} |y(b)-y_n|\le Kh, \]

    with

    \[K=M{e^{R(b-x_0)}-1\over2R}.\nonumber \]

    Because of Equation \ref{eq:3.1.18} we say that the global truncation error of Euler’s method is of order \(h\), which we write as \(O(h)\).
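
    The following sketch checks this behavior numerically for the initial value problem of Example 3.1.2 : as \(h\) is halved, the error at \(x=1\) is roughly halved, as a global error of order \(h\) predicts.

```python
import numpy as np

def f(x, y):
    return -2 * y + x**3 * np.exp(-2 * x)   # initial value problem of Example 3.1.2

y_exact_1 = np.exp(-2.0) * (1 + 4) / 4      # Equation (3.1.6) evaluated at x = 1

prev_err = None
for n in (10, 20, 40, 80, 160):
    h = 1.0 / n
    y = 1.0
    for i in range(n):                      # Euler steps from x = 0 to x = 1
        y += h * f(i * h, y)
    err = abs(y_exact_1 - y)
    ratio = "" if prev_err is None else f"   ratio = {prev_err / err:.3f}"
    print(f"h = {h:.5f}   |y(1) - y_n| = {err:.6f}{ratio}")
    prev_err = err
# The ratio tends to 2 as h decreases: halving h roughly halves the error.
```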

    Semilinear Equations and Variation of Parameters

    An equation that can be written in the form

    \[\label{eq:3.1.19} y'+p(x)y=h(x,y) \]

    with \(p\not\equiv0\) is said to be semilinear. (Of course, Equation \ref{eq:3.1.19} is linear if \(h\) is independent of \(y\).) One way to apply Euler’s method to an initial value problem

    \[\label{eq:3.1.20} y'+p(x)y=h(x,y),\quad y(x_0)=y_0 \]

    for Equation \ref{eq:3.1.19} is to think of it as

    \[y'=f(x,y),\quad y(x_0)=y_0, \nonumber \]

    where

    \[f(x,y)=-p(x)y+h(x,y). \nonumber \]

    However, we can also start by applying variation of parameters to Equation \ref{eq:3.1.20}, as in Sections 2.1 and 2.4; thus, we write the solution of Equation \ref{eq:3.1.20} as \(y=uy_1\), where \(y_1\) is a nontrivial solution of the complementary equation \(y'+p(x)y=0\). Then \(y=uy_1\) is a solution of Equation \ref{eq:3.1.20} if and only if \(u\) is a solution of the initial value problem

    \[\label{eq:3.1.21} u'=h(x,uy_1(x))/y_1(x),\quad u(x_0)=y(x_0)/y_1(x_0). \]

    We can apply Euler’s method to Equation \ref{eq:3.1.21} to obtain approximate values \(u_0\), \(u_1\), …, \(u_n\) of its solution, and then take

    \[y_i=u_iy_1(x_i) \nonumber \]

    as approximate values of the solution of Equation \ref{eq:3.1.20}. We will call this procedure the Euler semilinear method.
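
    A minimal sketch of this procedure in Python (the function name euler_semilinear and its arguments are illustrative; h_rhs stands for the function \(h(x,y)\) in Equation \ref{eq:3.1.19} and y1 for the chosen complementary solution):

```python
import numpy as np

def euler_semilinear(h_rhs, y1, x0, y0, b, n):
    """Euler semilinear method for y' + p(x)y = h(x, y), y(x0) = y0, where
    y1 is a nontrivial solution of the complementary equation y' + p(x)y = 0.
    Euler's method is applied to u' = h(x, u*y1(x))/y1(x), u(x0) = y0/y1(x0)."""
    step = (b - x0) / n
    x = x0 + step * np.arange(n + 1)
    u = np.empty(n + 1)
    u[0] = y0 / y1(x0)
    for i in range(n):
        u[i + 1] = u[i] + step * h_rhs(x[i], u[i] * y1(x[i])) / y1(x[i])
    return x, np.array([ui * y1(xi) for ui, xi in zip(u, x)])   # y_i = u_i y1(x_i)
```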

    The next two examples show that the Euler and Euler semilinear methods may yield drastically different results.

    Example 3.1.4

    In Section 2.1 we had to leave the solution of the initial value problem

    \[\label{eq:3.1.22} y'-2xy=1,\quad y(0)=3 \]

    in the form

    \[\label{eq:3.1.23} y=e^{x^2}\left(3 +\int^x_0 e^{-t^2}dt\right) \]

    because it was impossible to evaluate this integral exactly in terms of elementary functions. Use step sizes \(h=0.2\), \(h=0.1\), and \(h=0.05\) to find approximate values of the solution of Equation \ref{eq:3.1.22} at \(x=0\), \(0.2\), \(0.4\), \(0.6\), …, \(2.0\) by (a) Euler’s method; (b) the Euler semilinear method.

    Solution a

    Rewriting Equation \ref{eq:3.1.22} as

    \[\label{eq:3.1.24} y'=1+2xy,\quad y(0)=3 \]

    and applying Euler’s method with \(f(x,y)=1+2xy\) yields the results shown in Table 3.1.5 . Because of the large differences between the estimates obtained for the three values of \(h\), it would be clear that these results are useless even if the “exact” values were not included in the table.

    \(x\) \(h=0.2\) \(h=0.1\) \(h=0.05\) Exact
    0.0 3.000000000 3.000000000 3.000000000 3.000000000
    0.2 3.200000000 3.262000000 3.294348537 3.327851973
    0.4 3.656000000 3.802028800 3.881421103 3.966059348
    0.6 4.440960000 4.726810214 4.888870783 5.067039535
    0.8 5.706790400 6.249191282 6.570796235 6.936700945
    1.0 7.732963328 8.771893026 9.419105620 10.184923955
    1.2 11.026148659 13.064051391 14.405772067 16.067111677
    1.4 16.518700016 20.637273893 23.522935872 27.289392347
    1.6 25.969172024 34.570423758 41.033441257 50.000377775
    1.8 42.789442120 61.382165543 76.491018246 98.982969504
    2.0 73.797840446 115.440048291 152.363866569 211.954462214
    Table 3.1.5 : Numerical solution of \(y'-2xy=1,\ y(0)=3\), with Euler’s method.

    It’s easy to see why Euler’s method yields such poor results. Recall that the constant \(M\) in Equation \ref{eq:3.1.10} – which plays an important role in determining the local truncation error in Euler’s method – must be an upper bound for the values of the second derivative \(y''\) of the solution of the initial value problem Equation \ref{eq:3.1.22} on \((0,2)\). The problem is that \(y''\) assumes very large values on this interval. To see this, we differentiate Equation \ref{eq:3.1.24} to obtain

    \[y''(x)=2y(x)+2xy'(x)=2y(x)+2x(1+2xy(x))=2(1+2x^2)y(x)+2x, \nonumber \]

    where the second equality follows again from Equation \ref{eq:3.1.24}. Since Equation \ref{eq:3.1.23} implies that \(y(x)>3e^{x^2}\) if \(x>0\),

    \[y''(x)>6(1+2x^2)e^{x^2}+2x,\quad x>0. \nonumber \]

    For example, letting \(x=2\) shows that \(y''(2)>2952\).

    Solution b

    Since \(y_1=e^{x^2}\) is a solution of the complementary equation \(y'-2xy=0\), we can apply the Euler semilinear method to Equation \ref{eq:3.1.22}, with

    \[y=ue^{x^2}\quad \text{and} \quad u'=e^{-x^2},\quad u(0)=3. \nonumber \]

    The results listed in Table 3.1.6 are clearly better than those obtained by Euler’s method.

    \(x\) \(h=0.2\) \(h=0.1\) \(h=0.05\) Exact
    0.0 3.000000000 3.000000000 3.000000000 3.000000000
    0.2 3.330594477 3.329558853 3.328788889 3.327851973
    0.4 3.980734157 3.974067628 3.970230415 3.966059348
    0.6 5.106360231 5.087705244 5.077622723 5.067039535
    0.8 7.021003417 6.980190891 6.958779586 6.936700945
    1.0 10.350076600 10.269170824 10.227464299 10.184923955
    1.2 16.381180092 16.226146390 16.147129067 16.067111677
    1.4 27.890003380 27.592026085 27.441292235 27.289392347
    1.6 51.183323262 50.594503863 50.298106659 50.000377775
    1.8 101.424397595 100.206659076 99.595562766 98.982969504
    2.0 217.301032800 214.631041938 213.293582978 211.954462214
    Table 3.1.6 : Numerical solution of \(y'-2xy=1,\ y(0)=3\), by the Euler semilinear method.

    We cannot give a general procedure for determining in advance whether Euler’s method or the Euler semilinear method will produce better results for a given semilinear initial value problem Equation \ref{eq:3.1.19}. As a rule of thumb, the Euler semilinear method will yield better results than Euler’s method if \(|u''|\) is small on \([x_0,b]\), while Euler’s method yields better results if \(|u''|\) is large on \([x_0,b]\). In many cases the results obtained by the two methods don’t differ appreciably. However, we propose an intuitive way to decide which is the better method: try both methods with multiple step sizes, as we did in Example 3.1.4, and accept the results obtained by the method for which the approximations change less as the step size decreases.
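
    As a concrete illustration of this rule of thumb, the following sketch (illustrative Python, with the complementary solution \(y_1=e^{x^2}\) hard-coded) applies both methods to Equation \ref{eq:3.1.22} and prints the approximations of \(y(2)\) for three step sizes; the Euler values change considerably as \(h\) decreases, while the Euler semilinear values barely change, in agreement with the last rows of Tables 3.1.5 and 3.1.6 .

```python
import numpy as np

def euler_y_at_2(h):
    """Euler's method for y' = 1 + 2xy, y(0) = 3 (Equation 3.1.24)."""
    n = round(2.0 / h)
    y = 3.0
    for i in range(n):
        y += h * (1.0 + 2.0 * (i * h) * y)
    return y

def semilinear_y_at_2(h):
    """Euler semilinear method: u' = exp(-x^2), u(0) = 3, y = u * exp(x^2)."""
    n = round(2.0 / h)
    u = 3.0
    for i in range(n):
        u += h * np.exp(-((i * h) ** 2))
    return u * np.exp(4.0)                  # y(2) = u(2) * exp(2^2)

for h in (0.2, 0.1, 0.05):
    print(f"h = {h:<4}  Euler: {euler_y_at_2(h):13.9f}"
          f"   Euler semilinear: {semilinear_y_at_2(h):13.9f}")
# Compare with the x = 2.0 rows of Tables 3.1.5 and 3.1.6; the "exact" value
# there is 211.954462214.
```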

    Example 3.1.5

    Applying Euler’s method with step sizes \(h=0.1\), \(h=0.05\), and \(h=0.025\) to the initial value problem

    \[\label{eq:3.1.25} y'-2y={x\over1+y^2},\quad y(1)=7 \]

    on \([1,2]\) yields the results in Table 3.1.7 .

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\) Exact
    1.0 7.000000000 7.000000000 7.000000000 7.000000000
    1.1 8.402000000 8.471970569 8.510493955 8.551744786
    1.2 10.083936450 10.252570169 10.346014101 10.446546230
    1.3 12.101892354 12.406719381 12.576720827 12.760480158
    1.4 14.523152445 15.012952416 15.287872104 15.586440425
    1.5 17.428443554 18.166277405 18.583079406 19.037865752
    1.6 20.914624471 21.981638487 22.588266217 23.253292359
    1.7 25.097914310 26.598105180 27.456479695 28.401914416
    1.8 30.117766627 32.183941340 33.373738944 34.690375086
    1.9 36.141518172 38.942738252 40.566143158 42.371060528
    2.0 43.369967155 47.120835251 49.308511126 51.752229656
    Table 3.1.7 : Numerical solution of \(y'-2y=x/(1+y^2),\ y(1)=7\), by Euler’s method.

    Applying the Euler semilinear method with

    \[y=ue^{2x}\quad \text{and} \quad u'={xe^{-2x}\over1+u^2e^{4x}},\quad u(1)=7e^{-2}\nonumber \]

    yields the results in Table 3.1.8 . Since the latter are clearly less dependent on step size than the former, we conclude that the Euler semilinear method is better than Euler’s method for Equation \ref{eq:3.1.25}. This conclusion is supported by comparing the approximate results obtained by the two methods with the “exact” values of the solution.

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\) Exact
    1.0 7.000000000 7.000000000 7.000000000 7.000000000
    1.1 8.552262113 8.551993978 8.551867007 8.551744786
    1.2 10.447568674 10.447038547 10.446787646 10.446546230
    1.3 12.762019799 12.761221313 12.760843543 12.760480158
    1.4 15.588535141 15.587448600 15.586934680 15.586440425
    1.5 19.040580614 19.039172241 19.038506211 19.037865752
    1.6 23.256721636 23.254942517 23.254101253 23.253292359
    1.7 28.406184597 28.403969107 28.402921581 28.401914416
    1.8 34.695649222 34.692912768 34.691618979 34.690375086
    1.9 42.377544138 42.374180090 42.372589624 42.371060528
    2.0 51.760178446 51.756054133 51.754104262 51.752229656
    Table 3.1.8 : Numerical solution of \(y'-2y=x/(1+y^2),\ y(1)=7\), by the Euler semilinear method.
    Example 3.1.6

    Applying Euler’s method with step sizes \(h=0.1\), \(h=0.05\), and \(h=0.025\) to the initial value problem

    \[\label{eq:3.1.26} y'+3x^2y=1+y^2,\quad y(2)=2 \]

    on \([2,3]\) yields the results in Table 3.1.9 . Applying the Euler semilinear method with

    \[y=ue^{-x^3}\quad \text{and} \quad u'=e^{x^3}(1+u^2e^{-2x^3}),\quad u(2)=2e^8 \nonumber \]

    yields the results in Table 3.1.10 . Noting the close agreement among the three columns of Table 3.1.9 (at least for larger values of \(x\)) and the lack of any such agreement among the columns of Table 3.1.10 , we conclude that Euler’s method is better than the Euler semilinear method for Equation \ref{eq:3.1.26}. Comparing the results with the exact values supports this conclusion.

    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\) Exact
    2.0 2.000000000 2.000000000 2.000000000 2.000000000
    2.1 0.100000000 0.493231250 0.609611171 0.701162906
    2.2 0.068700000 0.122879586 0.180113445 0.236986800
    2.3 0.069419569 0.070670890 0.083934459 0.103815729
    2.4 0.059732621 0.061338956 0.063337561 0.068390786
    2.5 0.056871451 0.056002363 0.056249670 0.057281091
    2.6 0.050560917 0.051465256 0.051517501 0.051711676
    2.7 0.048279018 0.047484716 0.047514202 0.047564141
    2.8 0.042925892 0.043967002 0.043989239 0.044014438
    2.9 0.042148458 0.040839683 0.040857109 0.040875333
    3.0 0.035985548 0.038044692 0.038058536 0.038072838
    Table 3.1.9 : Numerical solution of \(y'+3x^2y=1+y^2,\quad y(2)=2\), by Euler’s method.
    \(x\) \(h=0.1\) \(h=0.05\) \(h=0.025\) Exact
    2.0 2.000000000 2.000000000 2.000000000 2.000000000
    2.1 0.708426286 0.702568171 0.701214274 0.701162906
    2.2 0.214501852 0.222599468 0.228942240 0.236986800
    2.3 0.069861436 0.083620494 0.092852806 0.103815729
    2.4 0.032487396 0.047079261 0.056825805 0.068390786
    2.5 0.021895559 0.036030018 0.045683801 0.057281091
    2.6 0.017332058 0.030750181 0.040189920 0.051711676
    2.7 0.014271492 0.026931911 0.036134674 0.047564141
    2.8 0.011819555 0.023720670 0.032679767 0.044014438
    2.9 0.009776792 0.020925522 0.029636506 0.040875333
    3.0 0.008065020 0.018472302 0.026931099 0.038072838
    Table 3.1.10 : Numerical solution of \(y'+3x^2y=1+y^2,\quad y(2)=2\), by the Euler semilinear method.

    In the next two sections we will study other numerical methods for solving initial value problems, called the improved Euler method, the midpoint method, Heun’s method, and the Runge-Kutta method. If the initial value problem is semilinear as in Equation \ref{eq:3.1.19}, we also have the option of using variation of parameters and then applying the given numerical method to the initial value problem Equation \ref{eq:3.1.21} for \(u\). By analogy with the terminology used here, we will call the resulting procedure the improved Euler semilinear method, the midpoint semilinear method, Heun’s semilinear method, or the Runge-Kutta semilinear method, as the case may be.


    This page titled 3.1: Euler's Method is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench.