
4.2: Power Series Method


    In the last example we were able to use the initial condition to produce a series solution to the given differential equation. Even if we specified more general initial conditions, are there other ways to obtain series solutions? Can we find a general solution in the form of a power series? We will address these questions in the remaining sections. However, we first begin with an example to demonstrate how we can find the general solution to a first order differential equation.

    Example \(\PageIndex{1}\)

    Find a general Maclaurin series solution to the ODE: \(y^{\prime}-2 x y=0\).

    Let’s assume that the solution takes the form

    \[y(x)=\sum_{n=0}^{\infty} c_{n} x^{n}\nonumber \]

    The goal is to find the expansion coefficients, \(c_{n}, n=0,1, \ldots\)

    Differentiating, we have

    \[y^{\prime}(x)=\sum_{n=1}^{\infty} n c_{n} x^{n-1} \nonumber \]

    Note that the index starts at \(n=1\), since there is no \(n=0\) term remaining.

    Inserting the series for \(y(x)\) and \(y^{\prime}(x)\) into the differential equation, we have

    \[\begin{equation} \begin{aligned} 0=& \sum_{n=1}^{\infty} n c_{n} x^{n-1}-2 x \sum_{n=0}^{\infty} c_{n} x^{n} \\ =&\left(c_{1}+2 c_{2} x+3 c_{3} x^{2}+4 c_{4} x^{3}+\ldots\right) \\ &-2 x\left(c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\ldots\right) \\ =& c_{1}+\left(2 c_{2}-2 c_{0}\right) x+\left(3 c_{3}-2 c_{1}\right) x^{2}+\left(4 c_{4}-2 c_{2}\right) x^{3}+\ldots \end{aligned} \end{equation}\label{4.7} \]

    Equating like powers of \(x\) on both sides of this result, we have

    \[\begin{equation} \begin{aligned} 0 &=c_{1} \\ 0 &=2 c_{2}-2 c_{0} \\ 0 &=3 c_{3}-2 c_{1} \\ 0 &=4 c_{4}-2 c_{2}, \ldots \end{aligned} \end{equation}\label{4.8} \]

    We can solve these sequentially for the coefficient of largest index:

    \[c_{1}=0, \quad c_{2}=c_{0}, \quad c_{3}=\dfrac{2}{3} c_{1}=0, \quad c_{4}=\dfrac{1}{2} c_{2}=\dfrac{1}{2} c_{0}, \ldots \nonumber \]

    We note that the odd terms vanish and the even terms survive:

    \[\begin{equation} \begin{aligned} y(x) &=c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\ldots \\ &=c_{0}+c_{0} x^{2}+\dfrac{1}{2} c_{0} x^{4}+\ldots \end{aligned} \end{equation} \label{4.9} \]

    Thus, we have found a series solution, or at least the first several terms, up to a multiplicative constant.
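
    As a quick check, the truncated series can be substituted back into the differential equation; everything should cancel except a term at the truncation order. Below is a minimal sketch in Python using sympy (the variable names are our own, not from the text):

```python
import sympy as sp

x, c0 = sp.symbols('x c0')

# First few terms found above: y = c0 (1 + x^2 + x^4/2)
y = c0 * (1 + x**2 + sp.Rational(1, 2) * x**4)

# Residual of y' - 2 x y for the truncated series
residual = sp.expand(sp.diff(y, x) - 2 * x * y)
print(residual)   # -c0*x**5: only a term at the truncation order survives
```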

    Of course, it would be nice to obtain a few more terms and guess at the general form of the series solution. This can be done by carrying out the steps in a more general way: we keep the summation notation and combine all terms with like powers of \(x\). We begin by inserting the series expansion into the differential equation and identifying the powers of \(x\):

    \[\begin{equation} \begin{aligned} 0 &=\sum_{n=1}^{\infty} n c_{n} x^{n-1}-2 x \sum_{n=0}^{\infty} c_{n} x^{n} \\ &=\sum_{n=1}^{\infty} n c_{n} x^{n-1}-\sum_{n=0}^{\infty} 2 c_{n} x^{n+1} \end{aligned} \end{equation}\label{4.10} \]

    We note that the powers of \(x\) in these two sums differ by 2. We can re-index the sums separately so that the powers are the same, say \(k\). After all, when we expanded these series earlier, the index \(n\) disappeared. Such an index is known as a dummy index, since we could call it anything, say \(m\), \(\ell\), or \(k\), without changing the sum. So, in the first series we can let \(k=n-1\), or \(n=k+1\), to write

    \[\begin{equation} \begin{aligned} \sum_{n=1}^{\infty} n c_{n} x^{n-1} &=\sum_{k=0}^{\infty}(k+1) c_{k+1} x^{k} \\ &=c_{1}+2 c_{2} x+3 c_{3} x^{2}+4 c_{4} x^{3}+\ldots \end{aligned} \end{equation}\label{4.11} \]

    Note that re-indexing has not changed the terms in the series.

    Similarly, we can let \(k=n+1\), or \(n=k-1\), in the second series to find

    \[\begin{equation} \begin{aligned} \sum_{n=0}^{\infty} 2 c_{n} x^{n+1} &=\sum_{k=1}^{\infty} 2 c_{k-1} x^{k} \\ &=2 c_{0} x+2 c_{1} x^{2}+2 c_{2} x^{3}+2 c_{3} x^{4}+\ldots \end{aligned} \end{equation} \label{4.12} \]

    Combining both series, we have

    \[\begin{equation} \begin{aligned} 0 &=\sum_{n=1}^{\infty} n c_{n} x^{n-1}-\sum_{n=0}^{\infty} 2 c_{n} x^{n+1} \\ &=\sum_{k=0}^{\infty}(k+1) c_{k+1} x^{k}-\sum_{k=1}^{\infty} 2 c_{k-1} x^{k} \\ &=c_{1}+\sum_{k=1}^{\infty}\left[(k+1) c_{k+1}-2 c_{k-1}\right] x^{k} \end{aligned} \end{equation}\label{4.13} \]

    Here, we have combined the two series for \(k=1,2, \ldots\). The \(k=0\) term in the first series gives the constant term as shown.

    We can now set the coefficients of the powers of \(x\) equal to zero, since the left hand side of the equation is zero. This gives \(c_{1}=0\) and

    \[(k+1) c_{k+1}-2 c_{k-1}=0, \quad k=1,2, \ldots \nonumber \]

    This last equation is called a recurrence relation. It can be used to find successive coefficients in terms of previous values. In particular, we have

    \[c_{k+1}=\dfrac{2}{k+1} c_{k-1}, \quad k=1,2, \ldots. \nonumber \]

    Inserting different values of \(k\), we have

    \[\begin{array}{ll} k=1: & c_{2}=\dfrac{2}{2} c_{0}=c_{0} . \\ k=2: & c_{3}=\dfrac{2}{3} c_{1}=0 . \\ k= 3: & c_{4}=\dfrac{2}{4} c_{2}=\dfrac{1}{2} c_{0} . \\ k= 4: & c_{5}=\dfrac{2}{5} c_{3}=0 . \\ k= 5: &c_{6}=\dfrac{2}{6} c_{4}=\dfrac{1}{3(2)} c_{0} . \end{array} \label{4.14} \]

    Continuing, we can see a pattern. Namely,

    \[c_{k}=\left\{\begin{array}{cc} 0, & k=2 \ell+1 \\ \dfrac{c_{0}}{\ell !}, & k=2 \ell \end{array}\right. \nonumber \]

    Thus,

    \[\begin{equation} \begin{aligned} y(x) &=\sum_{k=0}^{\infty} c_{k} x^{k} \\ &=c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\ldots \\ &=c_{0}+c_{0} x^{2}+\dfrac{1}{2 !} c_{0} x^{4}+\dfrac{1}{3 !} c_{0} x^{6}+\ldots \\ &=c_{0}\left(1+x^{2}+\dfrac{1}{2 !} x^{4}+\dfrac{1}{3 !} x^{6}+\ldots\right) \\ &=c_{0} \sum_{\ell=0}^{\infty} \dfrac{1}{\ell !} x^{2 \ell} \\ &=c_{0} e^{x^{2}} \end{aligned} \end{equation} \label{4.15} \]
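
    The recurrence relation also lends itself to a direct check: iterating \(c_{k+1}=\dfrac{2}{k+1} c_{k-1}\) with \(c_{0}=1\) and \(c_{1}=0\) should reproduce the Maclaurin coefficients of \(e^{x^{2}}\). Here is a short sketch using exact rational arithmetic (the cutoff \(N\) is our own choice):

```python
from fractions import Fraction
from math import factorial

# Recurrence from the example: (k+1) c_{k+1} - 2 c_{k-1} = 0, with c_0 = 1, c_1 = 0.
N = 10
c = [Fraction(0)] * (N + 1)
c[0] = Fraction(1)   # the arbitrary constant c_0, set to 1
c[1] = Fraction(0)   # forced by the constant term in the expansion
for k in range(1, N):
    c[k + 1] = Fraction(2, k + 1) * c[k - 1]

# Maclaurin coefficients of e^{x^2}: x^{2l} carries 1/l!, odd powers vanish.
for k in range(N + 1):
    expected = Fraction(1, factorial(k // 2)) if k % 2 == 0 else Fraction(0)
    assert c[k] == expected

print([str(ck) for ck in c])   # ['1', '0', '1', '0', '1/2', '0', '1/6', ...]
```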

    This example demonstrated how we can solve a simple differential equation by first guessing that the solution takes the form of a power series. We would like to explore the use of power series for more general higher order equations. We will begin with second order differential equations of the form

    \[P(x) y^{\prime \prime}(x)+Q(x) y^{\prime}(x)+R(x) y(x)=0 \nonumber \]

    where \(P(x), Q(x)\), and \(R(x)\) are polynomials in \(x\). The point \(x_{0}\) is called an ordinary point if \(P\left(x_{0}\right) \neq 0\). Otherwise, \(x_{0}\) is called a singular point. When \(x_{0}\) is an ordinary point, we can seek solutions of the form

    \[y(x)=\sum_{n=0}^{\infty} c_{n}\left(x-x_{0}\right)^{n} \nonumber \]

    For most of the examples, we will let \(x_{0}=0\), in which case we seek solutions of the form

    \[y(x)=\sum_{n=0}^{\infty} c_{n} x^{n} \nonumber \]
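
    The bookkeeping above (substitute the series, collect powers of \(x\), and solve for the coefficients) can also be automated symbolically. The rough sketch below uses sympy and takes the equation of the next example, \(y^{\prime\prime}-x y^{\prime}-y=0\), as a test case; the truncation order \(N\) and the variable names are our own choices:

```python
import sympy as sp

x = sp.symbols('x')
N = 8
c = sp.symbols(f'c0:{N}')                  # unknown coefficients c_0, ..., c_{N-1}
y = sum(c[n] * x**n for n in range(N))     # truncated power series ansatz

# P y'' + Q y' + R y = 0 about the ordinary point x0 = 0;
# here P = 1, Q = -x, R = -1.
P, Q, R = sp.Integer(1), -x, sp.Integer(-1)
expr = sp.expand(P * sp.diff(y, x, 2) + Q * sp.diff(y, x) + R * y)

# Coefficients of x^k for k <= N-3 are unaffected by the truncation; setting
# them to zero reproduces the recurrence relation for the c_n.
eqs = [sp.Eq(expr.coeff(x, k), 0) for k in range(N - 2)]
print(sp.solve(eqs, c[2:]))   # {c2: c0/2, c3: c1/3, c4: c0/8, c5: c1/15, ...}
```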

    Example \(\PageIndex{2}\)

    Find the general Maclaurin series solution to the ODE:

    \[y^{\prime \prime}-x y^{\prime}-y=0 \nonumber \]

    We will look for a solution of the form

    \[y(x)=\sum_{n=0}^{\infty} c_{n} x^{n}\nonumber \]

    The first and second derivatives of the series are given by

    \[\begin{gathered} y^{\prime}(x)=\sum_{n=1}^{\infty} c_{n} n x^{n-1} \\ y^{\prime \prime}(x)=\sum_{n=2}^{\infty} c_{n} n(n-1) x^{n-2} \end{gathered} \nonumber \]

    Inserting these derivatives into the differential equation gives

    \[0=\sum_{n=2}^{\infty} c_{n} n(n-1) x^{n-2}-\sum_{n=1}^{\infty} c_{n} n x^{n}-\sum_{n=0}^{\infty} c_{n} x^{n}\nonumber \]

    We want to combine the three sums into one sum and identify the coefficient of each power of \(x\). The last two sums have the same powers of \(x\), so we need only re-index the first sum. We let \(k=n-2\), or \(n=k+2\). This gives

    \[\sum_{n=2}^{\infty} c_{n} n(n-1) x^{n-2}=\sum_{k=0}^{\infty} c_{k+2}(k+2)(k+1) x^{k}\nonumber \]

    Inserting this sum, and setting \(n=k\) in the other two sums, we have

    \[ \begin{aligned} 0 &=\sum_{n=2}^{\infty} c_{n} n(n-1) x^{n-2}-\sum_{n=1}^{\infty} c_{n} n x^{n}-\sum_{n=0}^{\infty} c_{n} x^{n} \\ &=\sum_{k=0}^{\infty} c_{k+2}(k+2)(k+1) x^{k}-\sum_{k=1}^{\infty} c_{k} k x^{k}-\sum_{k=0}^{\infty} c_{k} x^{k} \\ &=\sum_{k=1}^{\infty}\left[c_{k+2}(k+2)(k+1)-c_{k} k-c_{k}\right] x^{k}+c_{2}(2)(1)-c_{0} \\ &=\sum_{k=1}^{\infty}(k+1)\left[(k+2) c_{k+2}-c_{k}\right] x^{k}+2 c_{2}-c_{0} \end{aligned} \label{4.16} \]

    Noting that the coefficients of powers \(x^{k}\) have to vanish, we have \(2 c_{2}-c_{0}=0\) and

    \[(k+1)\left[(k+2) c_{k+2}-c_{k}\right]=0, \quad k=1,2,3, \ldots \nonumber \]

    or

    \[ \begin{aligned} c_{2} &=\dfrac{1}{2} c_{0}, \\ c_{k+2} &=\dfrac{1}{k+2} c_{k}, \quad k=1,2,3, \ldots \end{aligned} \label{4.17} \]

    Using this result, we can successively determine the coefficients to as many terms as we need.

    \[ \begin{array}{ll} k=1: & c_{3}=\dfrac{1}{3} c_{1} . \\ k=2: & c_{4}=\dfrac{1}{4} c_{2}=\dfrac{1}{8} c_{0} . \\ k= 3: & c_{5}=\dfrac{1}{5} c_{3}=\dfrac{1}{15} c_{1} . \\ k= 4: & c_{6}=\dfrac{1}{6} c_{4}=\dfrac{1}{48} c_{0} . \\ k= 5: & c_{7}=\dfrac{1}{7} c_{5}=\dfrac{1}{105} c_{1} . \end{array}\label{4.18} \]

    This gives the series solution as

    \[ \begin{aligned} y(x) &=\sum_{n=0}^{\infty} c_{n} x^{n} \\ &=c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\ldots \\ &=c_{0}+c_{1} x+\dfrac{1}{2} c_{0} x^{2}+\dfrac{1}{3} c_{1} x^{3}+\dfrac{1}{8} c_{0} x^{4}+\dfrac{1}{15} c_{1} x^{5}+\dfrac{1}{48} c_{0} x^{6}+\ldots \\ &=c_{0}\left(1+\dfrac{1}{2} x^{2}+\dfrac{1}{8} x^{4}+\ldots\right)+c_{1}\left(x+\dfrac{1}{3} x^{3}+\dfrac{1}{15} x^{5}+\ldots\right) . \end{aligned} \label{4.19} \]

    We note that the general solution to this second order differential equation has two arbitrary constants. The general solution is a linear combination of two linearly independent solutions obtained by setting one of the constants equal to one and the other equal to zero.
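
    The two families of coefficients are easy to generate programmatically: iterating \(c_{k+2}=\dfrac{1}{k+2} c_{k}\) with symbolic \(c_{0}\) and \(c_{1}\) reproduces the two series above. A brief sketch (the truncation order is our own choice):

```python
import sympy as sp

x, c0, c1 = sp.symbols('x c0 c1')

# Recurrence c_{k+2} = c_k / (k+2); at k = 0 it also gives c_2 = c_0/2.
n_max = 8
c = [c0, c1]
for k in range(n_max - 1):
    c.append(c[k] / (k + 2))

y = sum(ck * x**k for k, ck in enumerate(c))
print(sp.collect(sp.expand(y), [c0, c1]))
# c0*(1 + x**2/2 + x**4/8 + x**6/48 + ...) + c1*(x + x**3/3 + x**5/15 + x**7/105 + ...)
```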

    Sometimes one can sum the series solution obtained. In this case we note that the series multiplying \(c_{0}\) can be rewritten as

    \(y_{1}(x)=1+\dfrac{1}{2} x^{2}+\dfrac{1}{8} x^{4}+\ldots=1+\dfrac{x^{2}}{2}+\dfrac{1}{2 !}\left(\dfrac{x^{2}}{2}\right)^{2}+\dfrac{1}{3 !}\left(\dfrac{x^{2}}{2}\right)^{3}+\ldots \nonumber\)

    This gives the exact solution \(y_{1}(x)=e^{x^{2} / 2}\).
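
    A quick way to check this identification is to expand \(e^{x^{2}/2}\) and compare with the series multiplying \(c_{0}\); for instance, with sympy:

```python
import sympy as sp

x = sp.symbols('x')
# Maclaurin expansion of the proposed closed form for y_1
print(sp.series(sp.exp(x**2 / 2), x, 0, 8))
# 1 + x**2/2 + x**4/8 + x**6/48 + O(x**8), matching the series term by term
```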

    The second linearly independent solution is not so easy. Since we know one solution, we can use the Method of Reduction of Order to obtain the second solution. One can verify that the second solution is given by

    \[y_{2}(x)=e^{x^{2} / 2} \int_{0}^{x / \sqrt{2}} e^{-t^{2}} d t=\dfrac{\sqrt{\pi}}{2} e^{x^{2} / 2} \operatorname{erf}\left(\dfrac{x}{\sqrt{2}}\right) \nonumber \]

    where \(\operatorname{erf}(x)\) is the error function. See Problem 3.
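
    Both the closed form for \(y_{1}\) and the erf-based expression for \(y_{2}\) can be verified by direct substitution into the differential equation; a short sympy check (constant multiples do not affect linear independence):

```python
import sympy as sp

x = sp.symbols('x')

def ode(y):
    """Left-hand side of y'' - x y' - y = 0."""
    return sp.diff(y, x, 2) - x * sp.diff(y, x) - y

y1 = sp.exp(x**2 / 2)
y2 = sp.exp(x**2 / 2) * sp.erf(x / sp.sqrt(2))   # second solution, up to a constant factor

print(sp.simplify(ode(y1)), sp.simplify(ode(y2)))   # both should reduce to 0
```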

    Example \(\PageIndex{3}\)

    Consider the Legendre equation

    \[\left(1-x^{2}\right) y^{\prime \prime}-2 x y^{\prime}+\ell(\ell+1) y=0\nonumber \]

    for \(\ell\) an integer.

    We first note that there are singular points where \(1-x^{2}=0\), that is, at \(x=\pm 1\).

    Therefore, \(x=0\) is an ordinary point, and we can proceed to obtain solutions in the form of Maclaurin series expansions. Insert the series expansions

    \[ \begin{aligned} y(x) &=\sum_{n=0}^{\infty} c_{n} x^{n} \\ y^{\prime}(x) &=\sum_{n=1}^{\infty} n c_{n} x^{n-1} \\ y^{\prime \prime}(x) &=\sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2} \end{aligned} \label{4.20} \]

    into the differential equation to obtain

    \[ \begin{aligned} 0 &=\left(1-x^{2}\right) y^{\prime \prime}-2 x y^{\prime}+\ell(\ell+1) y \\ &=\left(1-x^{2}\right) \sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2}-2 x \sum_{n=1}^{\infty} n c_{n} x^{n-1}+\ell(\ell+1) \sum_{n=0}^{\infty} c_{n} x^{n} \\ &=\sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2}-\sum_{n=2}^{\infty} n(n-1) c_{n} x^{n}-\sum_{n=1}^{\infty} 2 n c_{n} x^{n}+\sum_{n=0}^{\infty} \ell(\ell+1) c_{n} x^{n} \\ &=\sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2}+\sum_{n=0}^{\infty}[\ell(\ell+1)-n(n+1)] c_{n} x^{n} \end{aligned} \label{4.21} \]

    Re-indexing the first sum with \(k=n-2\), we have

    \[ \begin{aligned} 0=& \sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2}+\sum_{n=0}^{\infty}[\ell(\ell+1)-n(n+1)] c_{n} x^{n} \\ =& \sum_{k=0}^{\infty}(k+2)(k+1) c_{k+2} x^{k}+\sum_{k=0}^{\infty}[\ell(\ell+1)-k(k+1)] c_{k} x^{k} \\ =& 2 c_{2}+6 c_{3} x+\ell(\ell+1) c_{0}+\ell(\ell+1) c_{1} x-2 c_{1} x \\ &+\sum_{k=2}^{\infty}\left((k+2)(k+1) c_{k+2}+[\ell(\ell+1)-k(k+1)] c_{k}\right) x^{k} \end{aligned} \label{4.22} \]

    Matching terms, we have

    \[ \begin{array}{ll} k=0: & 2 c_{2}=-\ell(\ell+1) c_{0} \\ k=1: & 6 c_{3}=[2-\ell(\ell+1)] c_{1} \\ k \geq 2: & (k+2)(k+1) c_{k+2}=[k(k+1)-\ell(\ell+1)] c_{k} . \end{array} \label{4.23} \]

    For \(\ell=0\), the first equation gives \(c_{2}=0\), and the third equation then gives \(c_{2 m}=0\) for \(m=2,3, \ldots\). Thus, \(y_{1}(x)=c_{0}\) is a solution for \(\ell=0\).

    Similarly, for \(\ell=1\), the second equation gives \(c_{3}=0\), and the third equation then gives \(c_{2 m+1}=0\) for \(m=2,3, \ldots\). Thus, \(y_{1}(x)=c_{1} x\) is a solution for \(\ell=1\). In fact, for any nonnegative integer \(\ell\) the series truncates. For example, if \(\ell=2\), then these equations reduce to

    \[ \begin{array}{ll} k=0: & 2 c_{2}=-6 c_{0} \\ k=1: & 6 c_{3}=-4 c_{1} \\ k \geq 2: & (k+2)(k+1) c_{k+2}=[k(k+1)-2(3)] c_{k} \end{array} \label{4.24} \]

    For \(k=2\), we have \(12 c_{4}=0\). So, \(c_{4}=c_{6}=c_{8}=\ldots=0\). Also, we have \(c_{2}=-3 c_{0}\). This gives

    \[y(x)=c_{0}\left(1-3 x^{2}\right)+\left(c_{1} x+c_{3} x^{3}+c_{5} x^{5}+c_{7} x^{7}+\ldots\right) \nonumber \]

    Therefore, there is a polynomial solution of degree 2. The remaining coefficients are proportional to \(c_{1}\), yielding the second linearly independent solution, which is not a polynomial.

    For other nonnegative integer values of \(\ell>2\), we have

    \[c_{k+2}=\dfrac{k(k+1)-\ell(\ell+1)}{(k+2)(k+1)} c_{k}, \quad k \geq 2 \nonumber \]

    When \(k=\ell\), the right side of the equation vanishes, and the recurrence then makes all of the remaining coefficients in that branch vanish. Thus, we are left with a polynomial of degree \(\ell\). Suitably normalized, these polynomial solutions are the Legendre polynomials, \(P_{\ell}(x)\).
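
    To see the truncation concretely, one can run the recurrence for several values of \(\ell\) and compare with the standard Legendre polynomials. In the sketch below (the function name and truncation length are our own), the ratio to sympy's \(P_{\ell}(x)\) comes out constant, confirming agreement up to normalization:

```python
import sympy as sp

x = sp.symbols('x')

def legendre_series(l, n_terms=12):
    """Polynomial solution of Legendre's equation for a nonnegative integer l.

    Start the even branch (c0 = 1, c1 = 0) for even l and the odd branch for
    odd l; the recurrence c_{k+2} = [k(k+1) - l(l+1)] / [(k+2)(k+1)] c_k then
    truncates at k = l, leaving a polynomial of degree l.
    """
    c = [sp.Integer(0)] * n_terms
    c[l % 2] = sp.Integer(1)
    for k in range(n_terms - 2):
        c[k + 2] = sp.Rational(k * (k + 1) - l * (l + 1), (k + 2) * (k + 1)) * c[k]
    return sum(ck * x**k for k, ck in enumerate(c))

for l in range(5):
    p = legendre_series(l)
    # The ratio to sympy's Legendre polynomial is a constant (the normalization).
    print(l, sp.expand(p), sp.simplify(p / sp.legendre(l, x)))
```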


    This page titled 4.2: Power Series Method is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.