
# 3.2: Series Solutions Near an Ordinary Point I

• Contributed by William F. Trench
• Andrew G. Cowles Distinguished Professor Emeritus (Mathematics) at Trinity University

## Series Solutions Near an Ordinary Point

Many physical applications give rise to second order homogeneous linear differential equations of the form

\begin{equation}\label{eq:3.2.1}
P_0(x)y''+P_1(x)y'+P_2(x)y=0,
\end{equation}

where $$P_0$$, $$P_1$$, and $$P_2$$ are polynomials. Usually the solutions of these equations can't be expressed in terms of familiar elementary functions. Therefore we'll consider the problem of representing solutions of \eqref{eq:3.2.1} with series.

We assume throughout that $$P_0$$, $$P_1$$ and $$P_2$$ have no common factors. Then we say that $$x_0$$ is an $$\textcolor{blue}{\mbox{ordinary point}}$$ of \eqref{eq:3.2.1} if $$P_0(x_0)\ne0$$, or a $$\textcolor{blue}{\mbox{singular point}}$$ if $$P_0(x_0)=0$$. For Legendre's equation,

\begin{equation}\label{eq:3.2.2}
(1-x^2)y''-2xy'+\alpha(\alpha+1)y=0,
\end{equation}

$$x_0=1$$ and $$x_0=-1$$ are singular points and all other points are ordinary points. For Bessel's equation,

\begin{eqnarray*}
x^2y''+xy'+(x^2-\nu^2)y=0,
\end{eqnarray*}

$$x_0=0$$ is a singular point and all other points are ordinary points. If $$P_0$$ is a nonzero constant as in Airy's equation,

\begin{equation}\label{eq:3.2.3}
y''-xy=0,
\end{equation}

then every point is an ordinary point.

Since polynomials are continuous everywhere, $$P_1/P_0$$ and $$P_2/P_0$$ are continuous at any point $$x_0$$ that isn't a zero of $$P_0$$. Therefore, if $$x_0$$ is an ordinary point of \eqref{eq:3.2.1} and $$a_0$$ and $$a_1$$ are arbitrary real numbers, then the initial value problem

\begin{equation}\label{eq:3.2.4}
P_0(x)y''+P_1(x)y'+P_2(x)y=0,\quad y(x_0)=a_0,\quad y'(x_0)=a_1
\end{equation}

has a unique solution on the largest open interval that contains $$x_0$$ and does not contain any zeros of $$P_0$$. To see this, we rewrite the differential equation in \eqref{eq:3.2.4} as

\begin{eqnarray*}
y''+{P_1(x)\over P_0(x)}y'+{P_2(x)\over P_0(x)}y=0
\end{eqnarray*}

and apply Theorem $$(2.1.1)$$ with $$p=P_1/P_0$$ and $$q=P_2/P_0$$. In this section and the next we consider the problem of representing solutions of \eqref{eq:3.2.1} by power series that converge for values of $$x$$ near an ordinary point $$x_0$$.

We state the next theorem without proof.

### Theorem $$\PageIndex{1}$$

Suppose $$P_0$$, $$P_1$$, and $$P_2$$ are polynomials with no common factor and $$P_0$$ isn't identically zero. Let $$x_0$$ be a point such that $$P_0(x_0)\ne0,$$ and let $$\rho$$ be the distance from $$x_0$$ to the nearest zero of $$P_0$$ in the complex plane. (If $$P_0$$ is constant, then $$\rho=\infty$$.) Then every solution of

\begin{equation}\label{eq:3.2.5}
P_0(x)y''+P_1(x)y'+P_2(x)y=0
\end{equation}

can be represented by a power series

\begin{equation}\label{eq:3.2.6}
y=\sum_{n=0}^\infty a_n(x-x_0)^n
\end{equation}

that converges at least on the open interval $$(x_0-\rho,x_0+\rho)$$. (If $$P_0$$ is nonconstant, so that $$\rho$$ is necessarily finite, then the open interval of convergence of \eqref{eq:3.2.6} may be larger than $$(x_0-\rho,x_0+\rho).$$ If $$P_0$$ is constant then $$\rho=\infty$$ and $$(x_0-\rho,x_0+\rho)=(-\infty,\infty)$$.)


We call \eqref{eq:3.2.6} a $$\textcolor{blue}{\mbox{power series solution in }x-x_0}$$ of \eqref{eq:3.2.5}. We'll now develop a method for finding power series solutions of \eqref{eq:3.2.5}. For this purpose we write \eqref{eq:3.2.5} as $$Ly=0$$, where

\begin{equation}\label{eq:3.2.7}
Ly=P_0y''+P_1y'+P_2y.
\end{equation}

Theorem $$(3.2.1)$$ implies that every solution of $$Ly=0$$ on $$(x_0-\rho,x_0+\rho)$$ can be written as

\begin{eqnarray*}
y=\sum_{n=0}^\infty a_n(x-x_0)^n.
\end{eqnarray*}

Setting $$x=x_0$$ in this series and in the series

\begin{eqnarray*}
y'=\sum_{n=1}^\infty na_n(x-x_0)^{n-1}
\end{eqnarray*}

shows that $$y(x_0)=a_0$$ and $$y'(x_0)=a_1$$. Since every initial value problem \eqref{eq:3.2.4} has a unique solution, this means that $$a_0$$ and $$a_1$$ can be chosen arbitrarily, and $$a_2$$, $$a_3$$, $$\dots$$ are uniquely determined by them.

To find $$a_2$$, $$a_3$$, $$\dots$$, we write $$P_0$$, $$P_1$$, and $$P_2$$ in powers of $$x-x_0$$, substitute

\begin{eqnarray*}
y=\sum^\infty_{n=0}a_n(x-x_0)^n,
\end{eqnarray*}

\begin{eqnarray*}
y'=\sum^\infty_{n=1}na_n(x-x_0)^{n-1},
\end{eqnarray*}

\begin{eqnarray*}
y''=\sum^\infty_{n=2}n(n-1)a_n(x-x_0)^{n-2}
\end{eqnarray*}

into \eqref{eq:3.2.7}, and collect the coefficients of like powers of $$x-x_0$$. This yields

\begin{equation}\label{eq:3.2.8}
Ly=\sum^\infty_{n=0}b_n(x-x_0)^n,
\end{equation}

where $$\{b_0, b_1, \dots, b_n, \dots\}$$ are expressed in terms of $$\{a_0, a_1, \dots,a_n, \dots\}$$ and the coefficients of $$P_0$$, $$P_1$$, and $$P_2$$, written in powers of $$x-x_0$$. Since \eqref{eq:3.2.8} and part (a) of Theorem $$(3.1.6)$$ imply that $$Ly=0$$ if and only if $$b_n=0$$ for $$n\ge0$$, all power series solutions in $$x-x_0$$ of $$Ly=0$$ can be obtained by choosing $$a_0$$ and $$a_1$$ arbitrarily and computing $$a_2$$, $$a_3$$, $$\dots$$, successively so that $$b_n=0$$ for $$n\ge0$$. For simplicity, we call the power series obtained this way $$\textcolor{blue}{\mbox{the power series in }x-x_0\mbox{ for the general solution}}$$ of $$Ly=0$$, without explicitly identifying the open interval of convergence of the series.

### Example $$\PageIndex{1}$$

Let $$x_0$$ be an arbitrary real number. Find the power series in $$x-x_0$$ for the general solution of

\begin{equation}\label{eq:3.2.9}
y''+ y=0.
\end{equation}

Here

\begin{eqnarray*}
Ly=y''+y.
\end{eqnarray*}

If

\begin{eqnarray*}
y=\sum_{n=0}^\infty a_n(x-x_0)^n,
\end{eqnarray*}

then

\begin{eqnarray*}
y''=\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2},
\end{eqnarray*}

so

\begin{eqnarray*}
Ly=\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}+\sum_{n=0}^\infty a_n(x-x_0)^n.
\end{eqnarray*}

To collect coefficients of like powers of $$x-x_0$$, we shift the summation index in the first sum. This yields

\begin{eqnarray*}
Ly=\sum^\infty_{n=0}(n+2)(n+1)a_{n+2}(x-x_0)^n + \sum^\infty_{n=0}a_n(x-x_0)^n =\sum^\infty_{n=0}b_n(x-x_0)^n,
\end{eqnarray*}

with

\begin{eqnarray*}
b_n=(n+2)(n+1)a_{n+2}+a_n.
\end{eqnarray*}

Therefore $$Ly=0$$ if and only if

\begin{equation}\label{eq:3.2.10}
a_{n+2}=-{1\over(n+2)(n+1)}a_n,\quad n\ge0,
\end{equation}

where $$a_0$$ and $$a_1$$ are arbitrary. Since the indices on the left and right sides of \eqref{eq:3.2.10} differ by two, we write \eqref{eq:3.2.10} separately for $$n$$ even $$(n=2m)$$ and $$n$$ odd $$(n=2m+1)$$. This yields

\begin{eqnarray}
a_{2m+2}&=&-{1\over(2m+2)(2m+1)}a_{2m},\quad m\ge0,\label{eq:3.2.11}\\
a_{2m+3}&=&-{1\over(2m+3)(2m+2)}a_{2m+1},\quad m\ge0.\label{eq:3.2.12}
\end{eqnarray}

Computing the coefficients of the even powers of $$x-x_0$$ from \eqref{eq:3.2.11} yields

\begin{eqnarray*}
a_2&=&-{a_0\over2\cdot1}\\
a_4&=&-{a_2\over4\cdot3}=-{1\over4\cdot3} \left(-{a_0\over2\cdot1}\right)= {a_0\over4\cdot3\cdot2\cdot1}, \\
a_6&=&-{a_4\over6\cdot5}=-{1\over6\cdot5} \left({a_0\over4\cdot3\cdot2\cdot1}\right)=-{a_0\over6\cdot5\cdot4\cdot3\cdot 2\cdot1},
\end{eqnarray*}

and, in general,

\begin{equation}\label{eq:3.2.13}
a_{2m}=(-1)^m{a_0\over(2m)!},\quad m\ge0.
\end{equation}

Computing the coefficients of the odd powers of $$x-x_0$$ from \eqref{eq:3.2.12} yields

\begin{eqnarray*}
a_3&=&-{a_1\over3\cdot2}\\
a_5&=&-{a_3\over5\cdot4}=-{1\over5\cdot4} \left(-{a_1\over3\cdot2}\right)= {a_1\over5\cdot4\cdot3\cdot2}, \\
a_7&=&-{a_5\over7\cdot6}=-{1\over7\cdot6} \left({a_1\over5\cdot4\cdot3\cdot2}\right)=-{a_1\over7\cdot6\cdot5\cdot4\cdot 3\cdot2},
\end{eqnarray*}

and, in general,

\begin{equation}\label{eq:3.2.14}
a_{2m+1}=(-1)^m{a_1\over(2m+1)!},\quad m\ge0.
\end{equation}

Thus, the general solution of \eqref{eq:3.2.9} can be written as

\begin{eqnarray*}
y=\sum_{m=0}^\infty a_{2m}(x-x_0)^{2m}+\sum_{m=0}^\infty a_{2m+1}(x-x_0)^{2m+1},
\end{eqnarray*}

or, from \eqref{eq:3.2.13} and \eqref{eq:3.2.14}, as

\begin{equation}\label{eq:3.2.15}
y=a_0\sum_{m=0}^\infty(-1)^m{(x-x_0)^{2m}\over(2m)!} +a_1\sum_{m=0}^\infty(-1)^m{(x-x_0)^{2m+1}\over(2m+1)!}.
\end{equation}

If we recall from calculus that

\begin{eqnarray*}
\cos u=\sum_{m=0}^\infty(-1)^m{u^{2m}\over(2m)!}\quad\mbox{and}\quad \sin u=\sum_{m=0}^\infty(-1)^m{u^{2m+1}\over(2m+1)!},
\end{eqnarray*}

then \eqref{eq:3.2.15} becomes

\begin{eqnarray*}
y=a_0\cos(x-x_0)+a_1\sin(x-x_0),
\end{eqnarray*}

which should look familiar.
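As a sanity check, the recurrence \eqref{eq:3.2.10} is easy to iterate on a computer. A minimal Python sketch (our own illustration, not part of the text), using exact rational arithmetic, confirms that with $$a_0=1$$, $$a_1=0$$ the even coefficients reproduce the Taylor coefficients of $$\cos x$$:

```python
from fractions import Fraction
from math import factorial

# Iterate a_{n+2} = -a_n / ((n+2)(n+1)) with a_0 = 1, a_1 = 0,
# the choice that should reproduce the Taylor series of cos x.
a = [Fraction(1), Fraction(0)]
for n in range(10):
    a.append(-a[n] / ((n + 2) * (n + 1)))

# Even coefficients should equal (-1)^m / (2m)!, as in (3.2.13).
for m in range(6):
    assert a[2 * m] == Fraction((-1) ** m, factorial(2 * m))
```

Taking $$a_0=0$$, $$a_1=1$$ instead would reproduce the coefficients of $$\sin x$$ in the same way.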

Equations like \eqref{eq:3.2.10}, \eqref{eq:3.2.11}, and \eqref{eq:3.2.12}, which define a given coefficient in the sequence $$\{a_n\}$$ in terms of one or more coefficients with lesser indices are called $$\textcolor{blue}{\mbox{recurrence relations}}$$. When we use a recurrence relation to compute terms of a sequence we're computing $$\textcolor{blue}{\mbox{recursively}}$$.

In the remainder of this section we consider the problem of finding power series solutions in $$x-x_0$$ for equations of the form

\begin{equation}\label{eq:3.2.16}
\left(1+\alpha(x-x_0)^2\right)y''+\beta(x-x_0) y'+\gamma y=0.
\end{equation}

Many important equations that arise in applications are of this form with $$x_0=0$$, including Legendre's equation \eqref{eq:3.2.2}, Airy's equation \eqref{eq:3.2.3}, Chebyshev's equation,

\begin{eqnarray*}
(1-x^2)y''-xy'+\alpha^2 y=0,
\end{eqnarray*}

and Hermite's equation,

\begin{eqnarray*}
y''-2xy'+2\alpha y=0.
\end{eqnarray*}

Since

\begin{eqnarray*}
P_0(x)=1+\alpha(x-x_0)^2
\end{eqnarray*}

in \eqref{eq:3.2.16}, the point $$x_0$$ is an ordinary point of \eqref{eq:3.2.16}, and Theorem $$(3.2.1)$$ implies that the solutions of \eqref{eq:3.2.16} can be written as power series in $$x-x_0$$ that converge on the interval $$(x_0-1/\sqrt{|\alpha|},x_0+1/\sqrt{|\alpha|})$$ if $$\alpha\ne0$$, or on $$(-\infty,\infty)$$ if $$\alpha=0$$. We'll see that the coefficients in these power series can be obtained by methods similar to the one used in Example $$(3.2.1)$$.

To simplify finding the coefficients, we introduce some notation for products:

\begin{eqnarray*}
\prod^s_{j=r}b_j=b_rb_{r+1}\cdots b_s\quad\mbox{if}\quad s\ge r.
\end{eqnarray*}

Thus,

\begin{eqnarray*}
\prod^7_{j=2}b_j=b_2b_3b_4b_5b_6b_7,
\end{eqnarray*}

\begin{eqnarray*}
\prod^4_{j=0}(2j+1)=(1)(3)(5)(7)(9)=945,
\end{eqnarray*}

and

\begin{eqnarray*}
\prod^2_{j=2}j^2=2^2=4.
\end{eqnarray*}

We define

\begin{eqnarray*}
\prod^{r-1}_{j=r}b_j=1
\end{eqnarray*}

no matter what the form of $$b_j$$.
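If you experiment with this notation numerically, note that Python's `math.prod` follows the same convention: the product of an empty iterable is $$1$$. A small illustration (our own, mirroring the examples above):

```python
from math import prod

# The three products computed above, plus the empty-product convention.
assert prod(2 * j + 1 for j in range(5)) == 945   # (1)(3)(5)(7)(9)
assert prod(j ** 2 for j in range(2, 3)) == 4     # single factor 2^2
assert prod([]) == 1                              # empty product is 1
```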

### Example $$\PageIndex{2}$$

Find the power series in $$x$$ for the general solution of

\begin{equation}\label{eq:3.2.17}
(1+2x^2)y''+6xy'+2y=0.
\end{equation}

Here

\begin{eqnarray*}
Ly=(1+2x^2)y''+6xy'+2y.
\end{eqnarray*}

If

\begin{eqnarray*}
y=\sum_{n=0}^\infty a_nx^n
\end{eqnarray*}

then

\begin{eqnarray*}
y'=\sum_{n=1}^\infty na_nx^{n-1}\quad\mbox{and}\quad y''=\sum_{n=2}^\infty n(n-1)a_nx^{n-2},
\end{eqnarray*}

so

\begin{eqnarray*}
Ly&=&(1+2x^2) \sum^\infty_{n=2}n(n-1)a_nx^{n-2}+ 6x \sum^\infty_{n=1}na_nx^{n-1} +2 \sum^\infty_{n=0}a_nx^n\\
&=&\sum_{n=2}^\infty n(n-1)a_nx^{n-2}+\sum_{n=0}^\infty \left[2n(n-1)+6n+2\right]a_nx^n\\
&=&\sum_{n=2}^\infty n(n-1)a_nx^{n-2}+2\sum_{n=0}^\infty(n+1)^2a_nx^n.
\end{eqnarray*}

To collect coefficients of $$x^n$$, we shift the summation index in the first sum. This yields

\begin{eqnarray*}
Ly=\sum_{n=0}^\infty(n+2)(n+1)a_{n+2}x^n+2\sum_{n=0}^\infty(n+1)^2a_nx^n =\sum_{n=0}^\infty b_nx^n,
\end{eqnarray*}

with

\begin{eqnarray*}
b_n=(n+2)(n+1)a_{n+2}+2(n+1)^2a_n.
\end{eqnarray*}

To obtain solutions of \eqref{eq:3.2.17}, we set $$b_n=0$$ for $$n\ge0$$. This is equivalent to the recurrence relation

\begin{equation}\label{eq:3.2.18}
a_{n+2}=-2{n+1\over n+2}a_n,\quad n\ge0.
\end{equation}

Since the indices on the left and right differ by two, we write \eqref{eq:3.2.18} separately for $$n=2m$$ and $$n=2m+1$$, as in Example $$(3.2.1)$$. This yields

\begin{eqnarray}
a_{2m+2}&=&-2{2m+1\over2m+2}a_{2m}=-{2m+1\over m+1}a_{2m},\quad m\ge0,\label{eq:3.2.19}\\
a_{2m+3}&=&-2{2m+2\over2m+3}a_{2m+1}=-4{m+1\over2m+3}a_{2m+1},\quad m\ge0.\label{eq:3.2.20}
\end{eqnarray}

Computing the coefficients of even powers of $$x$$ from \eqref{eq:3.2.19} yields

\begin{eqnarray*}
a_2&=&-{1\over1}a_0,\\
a_4&=&-{3\over2}a_2=\left(-{3\over2}\right)\left(-{1\over1}\right)a_0 ={1\cdot3\over1\cdot2}a_0,\\
a_6&=&-{5\over3}a_4= -{5\over3}\left(1\cdot3\over1\cdot2\right)a_0 =-{1\cdot3\cdot5\over1\cdot2\cdot3}a_0, \\
a_8&=&-{7\over4}a_6=-{7\over4} \left(-{1\cdot3\cdot5\over1\cdot2\cdot3}\right)a_0= {1\cdot3\cdot5\cdot7\over1\cdot2\cdot3\cdot4}a_0.\\
\end{eqnarray*}

In general,

\begin{equation}\label{eq:3.2.21}
a_{2m}=(-1)^m{\prod_{j=1}^m(2j-1)\over m!}a_0,\quad m\ge0.
\end{equation}

(Note that \eqref{eq:3.2.21} is correct for $$m=0$$ because we defined $$\prod_{j=1}^0b_j=1$$ for any $$b_j$$.)

Computing the coefficients of odd powers of $$x$$ from \eqref{eq:3.2.20} yields

\begin{eqnarray*}
a_3&=&-4\,{1\over3}a_1, \\
a_5&=&-4\,{2\over5}a_3=-4\,{2\over5}\left(-4{1\over3}\right)a_1 =4^2{1\cdot2\over3\cdot5}a_1, \\
a_7&=&-4\,{3\over7}a_5=-4\,{3\over7}\left(4^2{1\cdot2\over3\cdot5}\right)a_1=-4^3{1\cdot2\cdot3\over3\cdot5\cdot7}a_1,\\
a_9&=&-4\, {4\over9}a_7=-4\, {4\over9}\left(4^3{1\cdot2\cdot3\over3\cdot5\cdot7}\right)a_1=4^4{1\cdot2\cdot3\cdot4\over3\cdot5\cdot7\cdot9}a_1.
\end{eqnarray*}

In general,

\begin{equation}\label{eq:3.2.22}
a_{2m+1}=(-1)^m{4^m m!\over\prod_{j=1}^m(2j+1)}a_1,\quad m\ge0.
\end{equation}

From \eqref{eq:3.2.21} and \eqref{eq:3.2.22},

\begin{eqnarray*}
y=a_0 \sum^\infty_{m=0}(-1)^m {\prod_{j=1}^m(2j-1)\over m!}x^{2m}
+a_1 \sum^\infty_{m=0}(-1)^m {4^mm!\over\prod_{j=1}^m(2j+1)} x^{2m+1}
\end{eqnarray*}

is the power series in $$x$$ for the general solution of \eqref{eq:3.2.17}. Since $$P_0(x)=1+2x^2$$ has no real zeros, Theorem $$(2.1.1)$$ implies that every solution of \eqref{eq:3.2.17} is defined on $$(-\infty,\infty)$$. However, since $$P_0(\pm i/\sqrt2)=0$$, Theorem $$(3.2.1)$$ implies only that the power series converges in $$(-1/\sqrt2,1/\sqrt2)$$ for any choice of $$a_0$$ and $$a_1$$.

The results in Examples $$(3.2.1)$$ and $$(3.2.2)$$ are consequences of the following general theorem.

### Theorem $$\PageIndex{2}$$

The coefficients $$\{a_n\}$$ in any solution $$y=\sum_{n=0}^\infty a_n(x-x_0)^n$$ of

\begin{equation}\label{eq:3.2.23}
\left(1+\alpha(x-x_0)^2\right)y''+\beta(x-x_0) y'+\gamma y=0
\end{equation}

satisfy the recurrence relation

\begin{equation}\label{eq:3.2.24}
a_{n+2}=-{p(n)\over(n+2)(n+1)}a_n,\quad n\ge0,
\end{equation}

where

\begin{equation}\label{eq:3.2.25}
p(n)=\alpha n(n-1) +\beta n+\gamma.
\end{equation}

Moreover, the coefficients of the even and odd powers of $$x-x_0$$ can be computed separately as

\begin{eqnarray}
a_{2m+2}&=&-{p(2m)\over(2m+2)(2m+1)}a_{2m},\quad m\ge0,\label{eq:3.2.26}\\
a_{2m+3}&=&-{p(2m+1)\over(2m+3)(2m+2)}a_{2m+1},\quad m\ge0,\label{eq:3.2.27}
\end{eqnarray}

where $$a_0$$ and $$a_1$$ are arbitrary.

Proof

Here

\begin{eqnarray*}
Ly=\left(1+\alpha(x-x_0)^2\right)y''+\beta(x-x_0) y'+\gamma y.
\end{eqnarray*}

If

\begin{eqnarray*}
y=\sum_{n=0}^\infty a_n(x-x_0)^n,
\end{eqnarray*}

then

\begin{eqnarray*}
y'=\sum_{n=1}^\infty na_n(x-x_0)^{n-1}\quad\mbox{and}\quad y''=\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}.
\end{eqnarray*}

Hence,

\begin{eqnarray*}
Ly&=&\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}+\sum_{n=0}^\infty \left[\alpha n(n-1)+\beta n+\gamma\right]a_n(x-x_0)^n\\
&=&\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}+\sum_{n=0}^\infty p(n)a_n(x-x_0)^n,
\end{eqnarray*}

from \eqref{eq:3.2.25}. To collect coefficients of powers of $$x-x_0$$, we shift the summation index in the first sum. This yields

\begin{eqnarray*}
Ly=\sum_{n=0}^\infty \left[(n+2)(n+1)a_{n+2}+p(n)a_n\right](x-x_0)^n.
\end{eqnarray*}

Thus, $$Ly=0$$ if and only if

\begin{eqnarray*}
(n+2)(n+1)a_{n+2}+p(n)a_n=0,\quad n\ge0,
\end{eqnarray*}

which is equivalent to \eqref{eq:3.2.24}. Writing \eqref{eq:3.2.24} separately for the cases where $$n=2m$$ and $$n=2m+1$$ yields \eqref{eq:3.2.26} and \eqref{eq:3.2.27}.
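The recurrence \eqref{eq:3.2.24} translates directly into a short program. Here is a minimal Python sketch (the function name `coefficients` is our own), using exact rational arithmetic and checked against the first coefficients of Example $$(3.2.2)$$:

```python
from fractions import Fraction

def coefficients(alpha, beta, gamma, a0, a1, N):
    """Compute a_0, ..., a_N from a_{n+2} = -p(n) a_n / ((n+2)(n+1)),
    where p(n) = alpha*n*(n-1) + beta*n + gamma, as in (3.2.24)-(3.2.25)."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(N - 1):
        p = alpha * n * (n - 1) + beta * n + gamma
        a.append(Fraction(-p, (n + 2) * (n + 1)) * a[n])
    return a

# Example 3.2.2 has alpha=2, beta=6, gamma=2, so a_2 = -a_0, a_3 = -(4/3)a_1.
coeffs = coefficients(2, 6, 2, 1, 1, 3)
assert coeffs[2] == -1
assert coeffs[3] == Fraction(-4, 3)
```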

### Example $$\PageIndex{3}$$

Find the power series in $$x-1$$ for the general solution of

\begin{equation}\label{eq:3.2.28}
(2+4x-2x^2)y''-12(x-1)y'-12y=0.
\end{equation}

We must first write the coefficient $$P_0(x)=2+4x-2x^2$$ in powers of $$x-1$$. To do this, we write $$x=(x-1)+1$$ in $$P_0(x)$$ and then expand the terms, collecting powers of $$x-1$$; thus,

\begin{eqnarray*}
2+4x-2x^2&=&2+4[(x-1)+1]-2[(x-1)+1]^2\\
&=&4-2(x-1)^2.
\end{eqnarray*}

Therefore we can rewrite \eqref{eq:3.2.28} as

\begin{eqnarray*}
\left(4-2(x-1)^2\right)y''-12(x-1)y'-12y=0,
\end{eqnarray*}

or, equivalently,

\begin{eqnarray*}
\left(1-{1\over2}(x-1)^2\right)y''-3(x-1)y'-3y=0.
\end{eqnarray*}

This is of the form \eqref{eq:3.2.23} with $$\alpha=-1/2$$, $$\beta=-3$$, and $$\gamma=-3$$. Therefore, from \eqref{eq:3.2.25}

\begin{eqnarray*}
p(n)=-{n(n-1)\over2}-3n-3=-{(n+2)(n+3)\over2}.
\end{eqnarray*}

Hence, Theorem $$(3.2.2)$$ implies that

\begin{eqnarray*}
a_{2m+2}&=&-{p(2m)\over(2m+2)(2m+1)}a_{2m}={2m+3\over2(2m+1)}a_{2m},\quad m\ge0,\\
a_{2m+3}&=&-{p(2m+1)\over(2m+3)(2m+2)}a_{2m+1}={m+2\over2(m+1)}a_{2m+1},\quad m\ge0.

We leave it to you to show that

\begin{eqnarray*}
a_{2m}={2m+1\over2^m}a_0\quad\mbox{and}\quad a_{2m+1}={m+1\over2^m}a_1,\quad m\ge0,
\end{eqnarray*}

which implies that the power series in $$x-1$$ for the general solution of \eqref{eq:3.2.28} is

\begin{eqnarray*}
y=a_0\sum_{m=0}^\infty{2m+1\over2^m}(x-1)^{2m}+a_1\sum_{m=0}^\infty {m+1\over2^m}(x-1)^{2m+1}.
\end{eqnarray*}
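The verification left to you can also be checked numerically. A Python sketch (our own check, not part of the text) iterates the recurrence with $$\alpha=-1/2$$, $$\beta=-3$$, $$\gamma=-3$$ and compares against the claimed closed forms:

```python
from fractions import Fraction

# Recurrence a_{n+2} = -p(n) a_n / ((n+2)(n+1)) with
# p(n) = -n(n-1)/2 - 3n - 3 = -(n+2)(n+3)/2, from Example 3.2.3.
a0, a1 = Fraction(1), Fraction(1)
a = [a0, a1]
for n in range(20):
    p = Fraction(-(n + 2) * (n + 3), 2)
    a.append(-p / ((n + 2) * (n + 1)) * a[n])

# Claimed closed forms: a_{2m} = (2m+1)/2^m a_0, a_{2m+1} = (m+1)/2^m a_1.
for m in range(10):
    assert a[2 * m] == Fraction(2 * m + 1, 2 ** m) * a0
    assert a[2 * m + 1] == Fraction(m + 1, 2 ** m) * a1
```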

In the examples considered so far we were able to obtain closed formulas for coefficients in the power series solutions. In some cases this is impossible, and we must settle for computing a finite number of terms in the series. The next example illustrates this with an initial value problem.

### Example $$\PageIndex{4}$$

Compute $$a_0$$, $$a_1$$, $$\dots$$, $$a_7$$ in the series solution $$y=\sum_{n=0}^\infty a_nx^n$$ of the initial value problem

\begin{equation}\label{eq:3.2.29}
(1+2x^2)y''+10xy'+8y=0,\quad y(0)=2,\quad y'(0)=-3.
\end{equation}

Since $$\alpha=2$$, $$\beta=10$$, and $$\gamma=8$$ in \eqref{eq:3.2.29},

\begin{eqnarray*}
p(n)=2n(n-1)+10n+8=2(n+2)^2.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
a_{n+2}=-{p(n)\over(n+2)(n+1)}a_n=-2{n+2\over n+1}a_n,\quad n\ge0.
\end{eqnarray*}

Writing this equation separately for $$n=2m$$ and $$n=2m+1$$ yields

\begin{eqnarray}
a_{2m+2}&=&-2{2m+2\over2m+1}a_{2m}=-4{m+1\over2m+1}a_{2m},\quad m\ge0,\label{eq:3.2.30}\\
a_{2m+3}&=&-2{2m+3\over2m+2}a_{2m+1}=-{2m+3\over m+1}a_{2m+1},\quad m\ge0.\label{eq:3.2.31}
\end{eqnarray}

Starting with $$a_0=y(0)=2$$, we compute $$a_2, a_4$$, and $$a_6$$ from \eqref{eq:3.2.30}:

\begin{eqnarray*}
a_2&=&-4\,{1\over1}(2)=-8,\\
a_4&=&-4\,{2\over3}(-8)={64\over3},\\
a_6&=&-4\,{3\over5}\left({64\over3}\right)=-{256\over5}.
\end{eqnarray*}

Starting with $$a_1=y'(0)=-3$$, we compute $$a_3,a_5$$ and $$a_7$$ from \eqref{eq:3.2.31}:

\begin{eqnarray*}
a_3&=&-{3\over1}(-3)=9,\\
a_5&=&-{5\over2}(9)=-{45\over2},\\
a_7&=&-{7\over3}\left(-{45\over2}\right)={105\over2}.
\end{eqnarray*}

Therefore the solution of \eqref{eq:3.2.29} is

\begin{eqnarray*}
y=2-3x-8x^2+9x^3+{64\over3}x^4-{45\over2}x^5-{256\over5}x^6+{105\over2}x^7+\cdots\; .
\end{eqnarray*}

Computing coefficients recursively as in Example $$(3.2.4)$$ is tedious. We recommend that you do this kind of computation by writing a short program to implement the appropriate recurrence relation on a calculator or computer. You may wish to do this in verifying examples and doing exercises in this chapter that call for numerical computation of the coefficients in series solutions. We obtained the answers to these exercises by using software that can produce answers in the form of rational numbers. However, it's perfectly acceptable, and more practical, to get your answers in decimal form. You can always check them by converting our fractions to decimals.
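For instance, a minimal Python sketch of such a program, applied to Example $$(3.2.4)$$ (exact rational arithmetic via `Fraction`; our own illustration):

```python
from fractions import Fraction

# a_{n+2} = -p(n) a_n / ((n+2)(n+1)) with p(n) = 2(n+2)^2,
# and initial values a_0 = y(0) = 2, a_1 = y'(0) = -3.
a = [Fraction(2), Fraction(-3)]
for n in range(6):
    p = 2 * (n + 2) ** 2
    a.append(Fraction(-p, (n + 2) * (n + 1)) * a[n])

print(a)  # expect [2, -3, -8, 9, 64/3, -45/2, -256/5, 105/2]
```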

If you're interested in actually using series to compute numerical approximations to solutions of a differential equation, then whether or not there's a simple closed form for the coefficients is essentially irrelevant. For computational purposes it's usually more efficient to start with the given coefficients $$a_0=y(x_0)$$ and $$a_1=y'(x_0)$$, compute $$a_2$$, $$\dots$$, $$a_N$$ recursively, and then compute approximate values of the solution from the Taylor polynomial

\begin{eqnarray*}
T_N(x)=\sum_{n=0}^Na_n(x-x_0)^n.
\end{eqnarray*}

The trick is to decide how to choose $$N$$ so the approximation $$y(x)\approx T_N(x)$$ is sufficiently accurate on the subinterval of the interval of convergence that you're interested in. In the computational exercises in this and the next two sections, you will often be asked to obtain the solution of a given problem by numerical integration with software of your choice (see Section $$3.1$$ for a brief discussion of one such method), and to compare the solution obtained in this way with the approximations obtained with $$T_N$$ for various values of $$N$$. This is a typical textbook kind of exercise, designed to give you insight into how the accuracy of the approximation $$y(x)\approx T_N(x)$$ behaves as a function of $$N$$ and the interval that you're working on. In real life, you would choose one or the other of the two methods (numerical integration or series solution). If you choose the method of series solution, then a practical procedure for determining a suitable value of $$N$$ is to continue increasing $$N$$ until the maximum of $$|T_N-T_{N-1}|$$ on the interval of interest is within the margin of error that you're willing to accept.
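A minimal Python sketch of this stopping rule, using the recurrence from Example $$(3.2.4)$$; the interval $$[-0.3,0.3]$$ and the tolerance are our own illustrative choices:

```python
# Increase N until max |T_N - T_{N-1}| on the interval falls below a
# tolerance.  Note that T_N(x) - T_{N-1}(x) = a_N x^N.
def taylor_coeffs(N):
    # Recurrence from Example 3.2.4: a_{n+2} = -2(n+2) a_n / (n+1),
    # with a_0 = 2, a_1 = -3 (float arithmetic here).
    a = [2.0, -3.0]
    for n in range(N - 1):
        a.append(-2 * (n + 2) * a[n] / (n + 1))
    return a

def T(a, x):
    # Horner evaluation of the Taylor polynomial sum a_n x^n.
    y = 0.0
    for c in reversed(a):
        y = y * x + c
    return y

xs = [i / 100 for i in range(-30, 31)]   # sample points in [-0.3, 0.3]
tol, N = 1e-10, 2
while True:
    a = taylor_coeffs(N)
    diff = max(abs(a[N]) * abs(x) ** N for x in xs)
    if diff < tol:
        break
    N += 1
```

On this interval the loop terminates quickly, since $$|a_N|\,(0.3)^N\to0$$; on a wider subinterval of the interval of convergence a larger $$N$$ would be needed.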

In doing computational problems that call for numerical solution of differential equations you should choose the most accurate numerical integration procedure your software supports, and experiment with the step size until you're confident that the numerical results are sufficiently accurate for the problem at hand.