
# 3.1: Review of Power Series

• Contributed by William F. Trench
• Andrew G. Cowles Distinguished Professor Emeritus (Mathematics) at Trinity University

## Power Series

Many applications give rise to differential equations with solutions that can't be expressed in terms of elementary functions such as polynomials, rational functions, exponential and logarithmic functions, and trigonometric functions. The solutions of some of the most important of these equations can be expressed in terms of power series. We'll study such equations in this chapter. In this section we review relevant properties of power series. We'll omit proofs, which can be found in any standard calculus text.

### Theorem $$\PageIndex{1}$$

An infinite series of the form

\begin{equation}\label{eq:3.1.1}
\sum_{n=0}^\infty a_n(x-x_0)^n,
\end{equation}

where $$x_0$$ and $$a_0$$, $$a_1,$$ $$\dots$$, $$a_n,$$ $$\dots$$ are constants, is called a $$\textcolor{blue}{\mbox{power series in}}$$ $$x-x_0.$$ We say that the power series \eqref{eq:3.1.1} $$\textcolor{blue}{\mbox{converges}}$$ for a given $$x$$ if the limit

\begin{eqnarray*}
\lim_{N\to\infty} \sum_{n=0}^Na_n(x-x_0)^n
\end{eqnarray*}

exists; otherwise, we say that the power series $$\textcolor{blue}{\mbox{diverges}}$$ for the given $$x$$.


A power series in $$x-x_0$$ must converge if $$x=x_0$$, since the positive powers of $$x-x_0$$ are all zero in this case. This may be the only value of $$x$$ for which the power series converges. However, the next theorem shows that if the power series converges for some $$x\ne x_0$$ then the set of all values of $$x$$ for which it converges forms an interval.

### Theorem $$\PageIndex{2}$$

For any power series

\begin{eqnarray*}
\sum_{n=0}^\infty a_n(x-x_0)^n,
\end{eqnarray*}

exactly one of these statements is true:

(i) The power series converges only for $$x=x_0.$$

(ii) The power series converges for all values of $$x.$$

(iii) There's a positive number $$R$$ such that the power series converges if $$|x-x_0|<R$$ and diverges if $$|x-x_0|>R.$$


In case (iii), we say that $$R$$ is the $$\textcolor{blue}{\mbox{radius of convergence}}$$ of the power series. For convenience, we include the other two cases in this definition by defining $$R=0$$ in case (i) and $$R=\infty$$ in case (ii). We define the $$\textcolor{blue}{\mbox{open interval of convergence}}$$ of $$\sum_{n=0}^\infty a_n(x-x_0)^n$$ to be

\begin{eqnarray*}
(x_0-R,x_0+R).
\end{eqnarray*}

If $$R$$ is finite, no general statement can be made concerning convergence at the endpoints $$x=x_0\pm R$$ of the open interval of convergence; the series may converge at one or both points, or diverge at both.

Recall from calculus that a series of constants $$\sum_{n=0}^\infty\alpha_n$$ is said to $$\textcolor{blue}{\mbox{converge absolutely}}$$ if the series of absolute values $$\sum_{n=0}^\infty|\alpha_n|$$ converges. It can be shown that a power series $$\sum_{n=0}^\infty a_n(x-x_0)^n$$ with a positive radius of convergence $$R$$ converges absolutely in its open interval of convergence; that is, the series

\begin{eqnarray*}
\sum_{n=0}^\infty |a_n||x-x_0|^n
\end{eqnarray*}

of absolute values converges if $$|x-x_0|<R$$. However, if $$R<\infty$$, the series may fail to converge absolutely at an endpoint $$x_0\pm R$$, even if it converges there.

The next theorem provides a useful method for determining the radius of convergence of a power series. It's derived in calculus by applying the ratio test to the corresponding series of absolute values. For related theorems see Exercises $$(3.1E.2)$$ and $$(3.1E.4)$$.

### Theorem $$\PageIndex{3}$$

Suppose there's an integer $$N$$ such that $$a_n\ne0$$ if $$n\ge N$$ and

\begin{eqnarray*}
\lim_{n\to\infty}\left|a_{n+1}\over a_n\right|=L,
\end{eqnarray*}

where $$0\le L\le\infty.$$ Then the radius of convergence of $$\sum_{n=0}^\infty a_n(x-x_0)^n$$ is $$R=1/ L,$$ which should be interpreted to mean that $$R=0$$ if $$L=\infty,$$ or $$R=\infty$$ if $$L=0.$$


### Example $$\PageIndex{1}$$

Find the radius of convergence of the series:

(a) $$\sum_{n=0}^\infty n!x^n$$

(b) $$\sum_{n=10}^\infty (-1)^n {x^n\over n!}$$

(c) $$\sum_{n=0}^\infty 2^nn^2 (x-1)^n$$

(a) Here $$a_n=n!$$, so

\begin{eqnarray*}
\lim_{n\to\infty}\left|a_{n+1}\over a_n\right|=\lim_{n\to\infty} {(n+1)!\over n!}=\lim_{n\to\infty}(n+1)=\infty.
\end{eqnarray*}

Hence, $$R=0$$.

(b) Here $$a_n=(-1)^n/n!$$ for $$n\ge N=10$$, so

\begin{eqnarray*}
\lim_{n\to\infty}\left|a_{n+1}\over a_n\right|=\lim_{n\to\infty} {n!\over (n+1)!}=\lim_{n\to\infty}{1\over n+1}=0.
\end{eqnarray*}

Hence, $$R=\infty$$.

(c) Here $$a_n=2^nn^2$$, so

\begin{eqnarray*}
\lim_{n\to\infty}\left|a_{n+1}\over a_n\right|=\lim_{n\to\infty} {2^{n+1}(n+1)^2\over2^nn^2}=2\lim_{n\to\infty}\left(1+{1\over n}\right)^2=2.
\end{eqnarray*}

Hence, $$R=1/2$$.
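The limits in Example $$\PageIndex{1}$$ can be checked numerically. The following sketch (an illustration, not part of the text) evaluates the ratio $$|a_{n+1}/a_n|$$ for the series in (c), where $$a_n=2^nn^2$$, and watches it approach $$L=2$$, so $$R=1/2$$:

```python
# Ratio test for Example 1(c), a_n = 2^n * n^2: the ratios
# a(n+1)/a(n) = 2*(1 + 1/n)^2 decrease toward L = 2, so R = 1/L = 1/2.
def a(n):
    return 2**n * n**2

for n in (10, 100, 1000):
    print(n, a(n + 1) / a(n))
```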

## Taylor Series

If a function $$f$$ has derivatives of all orders at a point $$x=x_0$$, then the Taylor series of $$f$$ about $$x_0$$ is defined by

\begin{eqnarray*}
\sum_{n=0}^\infty {f^{(n)}(x_0)\over n!}(x-x_0)^n.
\end{eqnarray*}

In the special case where $$x_0=0$$, this series is also called the Maclaurin series of $$f$$.

Taylor series for most of the common elementary functions converge to the functions on their open intervals of convergence. For example, you are probably familiar with the following Maclaurin series:

\begin{eqnarray}
e^x&=&\sum_{n=0}^\infty {x^n\over n!},\quad -\infty<x<\infty, \label{eq:3.1.2}\\
\sin x&=&\sum_{n=0}^\infty(-1)^n {x^{2n+1}\over(2n+1)!},\quad -\infty<x<\infty, \label{eq:3.1.3}\\
\cos x&=&\sum_{n=0}^\infty(-1)^n {x^{2n}\over(2n)!},\quad -\infty<x<\infty, \label{eq:3.1.4}\\
{1\over1-x}&=&\sum_{n=0}^\infty x^n,\quad -1<x<1. \label{eq:3.1.5}
\end{eqnarray}
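As a quick numerical sanity check (an illustration, not part of the text), a partial sum of the Maclaurin series for $$\sin x$$ already matches $$\sin x$$ to machine precision at $$x=1$$:

```python
import math

# Partial sum of the Maclaurin series for sin x; ten terms reach
# machine precision at x = 1 because the next term is x^21/21!.
def sin_partial(x, terms):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

x = 1.0
print(sin_partial(x, 10), math.sin(x))
```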

## Differentiation of Power Series

A power series with a positive radius of convergence defines a function

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty a_n(x-x_0)^n
\end{eqnarray*}

on its open interval of convergence. We say that the series $$\textcolor{blue}{\mbox{represents}}$$ $$f$$ on the open interval of convergence. A function $$f$$ represented by a power series may be a familiar elementary function as in \eqref{eq:3.1.2} to \eqref{eq:3.1.5}; however, it often happens that $$f$$ isn't a familiar function, so the series actually $$\textcolor{blue}{\mbox{defines}}$$ $$f$$.

The next theorem shows that a function represented by a power series has derivatives of all orders on the open interval of convergence of the power series, and provides power series representations of the derivatives.

### Theorem $$\PageIndex{4}$$

A power series

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty a_n(x-x_0)^n
\end{eqnarray*}

with positive radius of convergence $$R$$ has derivatives of all orders in its open interval of convergence, and successive derivatives can be obtained by repeatedly differentiating term by term; that is,

\begin{eqnarray}
f'(x)&=&\displaystyle{\sum_{n=1}^\infty na_n(x-x_0)^{n-1}}\label{eq:3.1.6},\\
f''(x)&=&\displaystyle{\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}},\label{eq:3.1.7}\\
&\vdots&\nonumber\\
f^{(k)}(x)&=&\displaystyle{\sum_{n=k}^\infty n(n-1)\cdots(n-k+1)a_n(x-x_0)^{n-k}}\label{eq:3.1.8}.
\end{eqnarray}

Moreover, all of these series have the same radius of convergence $$R.$$


### Example $$\PageIndex{2}$$

Let $$f(x)=\sin x$$. From \eqref{eq:3.1.3},

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty(-1)^n {x^{2n+1}\over(2n+1)!}.
\end{eqnarray*}

From \eqref{eq:3.1.6},

\begin{eqnarray*}
f'(x)=\sum_{n=0}^\infty(-1)^n{d\over dx}\left[x^{2n+1}\over(2n+1)!\right]= \sum_{n=0}^\infty(-1)^n {x^{2n}\over(2n)!},
\end{eqnarray*}

which is the series \eqref{eq:3.1.4} for $$\cos x$$.
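This term-by-term differentiation can also be checked numerically; the sketch below (not part of the text, with an arbitrary sample point) compares a partial sum of the differentiated series with $$\cos x$$:

```python
import math

# Term-by-term derivative of the Maclaurin series of sin x:
# d/dx [x^(2n+1)/(2n+1)!] = x^(2n)/(2n)!, which is the cos x series.
def dsin_partial(x, terms):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(terms))

x = 0.7
print(dsin_partial(x, 12), math.cos(x))
```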

## Uniqueness of Power Series

The next theorem shows that if $$f$$ is $$\textcolor{blue}{\mbox{defined}}$$ by a power series in $$x-x_0$$ with a positive radius of convergence, then the power series is the Taylor series of $$f$$ about $$x_0$$.

### Theorem $$\PageIndex{5}$$

If the power series

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty a_n(x-x_0)^n
\end{eqnarray*}

has a positive radius of convergence, then

\begin{equation} \label{eq:3.1.9}
a_n={f^{(n)}(x_0)\over n!};
\end{equation}

that is, $$\sum_{n=0}^\infty a_n(x-x_0)^n$$ is the Taylor series of $$f$$ about $$x_0$$.

Proof


This result can be obtained by setting $$x=x_0$$ in \eqref{eq:3.1.8}, which yields

\begin{eqnarray*}
f^{(k)}(x_0)=k(k-1)\cdots1\cdot a_k=k!a_k.
\end{eqnarray*}

This implies that

\begin{eqnarray*}
a_k={f^{(k)}(x_0)\over k!}.
\end{eqnarray*}

Except for notation, this is the same as \eqref{eq:3.1.9}.

The next theorem lists two important properties of power series that follow from Theorem $$(3.1.5)$$.

### Theorem $$\PageIndex{6}$$

(a) If

\begin{eqnarray*}
\sum_{n=0}^\infty a_n(x-x_0)^n=\sum_{n=0}^\infty b_n(x-x_0)^n
\end{eqnarray*}

for all $$x$$ in an open interval that contains $$x_0,$$ then $$a_n=b_n$$ for $$n=0$$, $$1$$, $$2$$, $$\dots$$.

(b) If

\begin{eqnarray*}
\sum_{n=0}^\infty a_n(x-x_0)^n=0
\end{eqnarray*}

for all $$x$$ in an open interval that contains $$x_0,$$ then $$a_n=0$$ for $$n=0$$, $$1$$, $$2$$, $$\dots$$.

Proof


To obtain part (a) we observe that the two series represent the same function $$f$$ on the open interval; hence, Theorem $$(3.1.5)$$ implies that

\begin{eqnarray*}
a_n={f^{(n)}(x_0)\over n!}=b_n,\quad n=0,1,2,\dots.
\end{eqnarray*}

Part (b) can be obtained from part (a) by taking $$b_n=0$$ for $$n=0$$, $$1$$, $$2$$, $$\dots$$.

## Taylor Polynomials

If $$f$$ has $$N$$ derivatives at a point $$x_0$$, we say that

\begin{eqnarray*}
T_N(x)=\sum_{n=0}^N{f^{(n)}(x_0)\over n!}(x-x_0)^n
\end{eqnarray*}

is the $$N$$th Taylor polynomial of $$f$$ about $$x_0$$. This definition and Theorem $$(3.1.5)$$ imply that if

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty a_n(x-x_0)^n,
\end{eqnarray*}

where the power series has a positive radius of convergence, then the Taylor polynomials of $$f$$ about $$x_0$$ are given by

\begin{eqnarray*}
T_N(x)=\sum_{n=0}^N a_n(x-x_0)^n.
\end{eqnarray*}

In numerical applications, we use the Taylor polynomials to approximate $$f$$ on subintervals of the open interval of convergence of the power series. For example, \eqref{eq:3.1.2} implies that the Taylor polynomial $$T_N$$ of $$f(x)=e^x$$ is

\begin{eqnarray*}
T_N(x)=\sum_{n=0}^N{x^n\over n!}.
\end{eqnarray*}

The solid curve in Figure $$3.1.1$$ is the graph of $$y=e^x$$ on the interval $$[0,5]$$. The dotted curves in Figure $$3.1.1$$ are the graphs of the Taylor polynomials $$T_1$$, $$\dots$$, $$T_6$$ of $$y=e^x$$ about $$x_0=0$$. From this figure, we conclude that the accuracy of the approximation of $$y=e^x$$ by its Taylor polynomial $$T_N$$ improves as $$N$$ increases.
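The convergence shown in the figure can be reproduced numerically. The following sketch (an illustration with the arbitrary sample point $$x=2$$) computes the errors $$|e^x-T_N(x)|$$ for $$N=1,\dots,6$$ and confirms that they decrease as $$N$$ increases:

```python
import math

# N-th Taylor polynomial of e^x about x0 = 0.
def T(N, x):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

# Errors at the sample point x = 2 shrink as N grows.
errors = [abs(math.exp(2.0) - T(N, 2.0)) for N in range(1, 7)]
print(errors)
```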

### Figure: $$3.1.1$$

Approximation of $$y=e^x$$ by Taylor polynomials about $$x=0$$.

## Shifting the Summation Index

By Theorem $$(3.1.1)$$, the $$n$$th term of a power series in $$x-x_0$$ is a constant multiple of $$(x-x_0)^n$$. This isn't true in \eqref{eq:3.1.6}, \eqref{eq:3.1.7}, and \eqref{eq:3.1.8}, where the general terms are constant multiples of $$(x-x_0)^{n-1}$$, $$(x-x_0)^{n-2}$$, and $$(x-x_0)^{n-k}$$, respectively. However, these series can all be rewritten so that their $$n$$th terms are constant multiples of $$(x-x_0)^n$$. For example, letting $$n=k+1$$ in the series in \eqref{eq:3.1.6} yields

\begin{equation} \label{eq:3.1.10}
f'(x)=\sum_{k=0}^\infty (k+1)a_{k+1}(x-x_0)^k,
\end{equation}

where we start the new summation index $$k$$ from zero so that the first term in \eqref{eq:3.1.10} (obtained by setting $$k=0$$) is the same as the first term in \eqref{eq:3.1.6} (obtained by setting $$n=1$$). However, the sum of a series is independent of the symbol used to denote the summation index, just as the value of a definite integral is independent of the symbol used to denote the variable of integration. Therefore we can replace $$k$$ by $$n$$ in \eqref{eq:3.1.10} to obtain

\begin{equation} \label{eq:3.1.11}
f'(x)=\sum_{n=0}^\infty (n+1)a_{n+1}(x-x_0)^n,
\end{equation}

where the general term is a constant multiple of $$(x-x_0)^n$$.

It isn't really necessary to introduce the intermediate summation index $$k$$. We can obtain \eqref{eq:3.1.11} directly from \eqref{eq:3.1.6} by replacing $$n$$ by $$n+1$$ in the general term of \eqref{eq:3.1.6} and subtracting $$1$$ from the lower limit of \eqref{eq:3.1.6}. More generally, we use the following procedure for shifting indices.

### Shifting the Summation Index in a Power Series

For any integer $$k$$, the power series

\begin{eqnarray*}
\sum_{n=n_0}^\infty b_n(x-x_0)^{n-k}
\end{eqnarray*}

can be rewritten as

\begin{eqnarray*}
\sum_{n=n_0-k}^\infty b_{n+k}(x-x_0)^n;
\end{eqnarray*}

that is, replacing $$n$$ by $$n+k$$ in the general term and subtracting $$k$$ from the lower limit of summation leaves the series unchanged.
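The shifting rule can be verified numerically on sample data. The sketch below (with the hypothetical choices $$b_n=2^{-n}$$, $$x-x_0=0.5$$, $$n_0=2$$, $$k=2$$) compares partial sums of the two forms:

```python
# Original form: sum over n >= n0 of b_n * t^(n-k), with b_n = 1/2^n.
def original(t, n0, k, terms):
    return sum((1 / 2**n) * t**(n - k) for n in range(n0, n0 + terms))

# Shifted form: sum over n >= n0-k of b_(n+k) * t^n; same terms, reindexed.
def shifted(t, n0, k, terms):
    return sum((1 / 2**(n + k)) * t**n for n in range(n0 - k, n0 - k + terms))

print(original(0.5, 2, 2, 50), shifted(0.5, 2, 2, 50))
```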

### Example $$\PageIndex{3}$$

Rewrite the following power series from \eqref{eq:3.1.7} and \eqref{eq:3.1.8} so that the general term in each is a constant multiple of $$(x-x_0)^n$$:

(a) $$\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}$$

(b) $$\sum_{n=k}^\infty n(n-1)\cdots(n-k+1)a_n(x-x_0)^{n-k}.$$

(a) Replacing $$n$$ by $$n+2$$ in the general term and subtracting $$2$$ from the lower limit of summation yields

\begin{eqnarray*}
\sum_{n=2}^\infty n(n-1)a_n(x-x_0)^{n-2}= \sum_{n=0}^\infty (n+2)(n+1)a_{n+2}(x-x_0)^n.
\end{eqnarray*}

(b) Replacing $$n$$ by $$n+k$$ in the general term and subtracting $$k$$ from the lower limit of summation yields

\begin{eqnarray*}
\sum_{n=k}^\infty n(n-1)\cdots(n-k+1)a_n(x-x_0)^{n-k}= \sum_{n=0}^\infty (n+k)(n+k-1)\cdots(n+1)a_{n+k}(x-x_0)^n.
\end{eqnarray*}

### Example $$\PageIndex{4}$$

Given that

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty a_nx^n,
\end{eqnarray*}

write the function $$xf''$$ as a power series in which the general term is a constant multiple of $$x^n$$.

From Theorem $$(3.1.4)$$ with $$x_0=0$$,

\begin{eqnarray*}
f''(x)=\sum_{n=2}^\infty n(n-1)a_nx^{n-2}.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
xf''(x)=\sum_{n=2}^\infty n(n-1)a_nx^{n-1}.
\end{eqnarray*}

Replacing $$n$$ by $$n+1$$ in the general term and subtracting $$1$$ from the lower limit of summation yields

\begin{eqnarray*}
xf''(x)=\sum_{n=1}^\infty (n+1)na_{n+1}x^n.
\end{eqnarray*}

We can also write this as

\begin{eqnarray*}
xf''(x)=\sum_{n=0}^\infty (n+1)na_{n+1}x^n,
\end{eqnarray*}

since the first term in this last series is zero. (We'll see later that sometimes it's useful to include zero terms at the beginning of a series.)
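As a check on Example $$\PageIndex{4}$$ (with the assumed choice $$a_n=1/n!$$, so $$f(x)=e^x$$ and $$xf''(x)=xe^x$$), the shifted series can be summed numerically:

```python
import math

# Assumed sample coefficients: a_n = 1/n!, i.e. f(x) = e^x.
def a(n):
    return 1 / math.factorial(n)

# Shifted series for x f''(x): sum over n >= 0 of (n+1) n a_(n+1) x^n.
def xfpp_series(x, terms):
    return sum((n + 1) * n * a(n + 1) * x**n for n in range(terms))

x = 0.8
print(xfpp_series(x, 30), x * math.exp(x))
```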

## Linear Combinations of Power Series

If a power series is multiplied by a constant, then the constant can be placed inside the summation; that is,

\begin{eqnarray*}
c\sum_{n=0}^\infty a_n(x-x_0)^n=\sum_{n=0}^\infty ca_n(x-x_0)^n.
\end{eqnarray*}

Two power series

\begin{eqnarray*}
f(x)=\sum_{n=0}^\infty a_n(x-x_0)^n\quad\mbox{and}\quad g(x)=\sum_{n=0}^\infty b_n(x-x_0)^n
\end{eqnarray*}

with positive radii of convergence can be added term by term at points common to their open intervals of convergence; thus, if the first series converges for $$|x-x_0|<R_1$$ and the second converges for $$|x-x_0|<R_2$$, then

\begin{eqnarray*}
f(x)+g(x)=\sum_{n=0}^\infty(a_n+b_n)(x-x_0)^n
\end{eqnarray*}

for $$|x-x_0|<R$$, where $$R$$ is the smaller of $$R_1$$ and $$R_2$$. More generally, linear combinations of power series can be formed term by term; for example,

\begin{eqnarray*}
c_1f(x)+c_2g(x)=\sum_{n=0}^\infty(c_1a_n+c_2b_n)(x-x_0)^n.
\end{eqnarray*}

### Example $$\PageIndex{5}$$

Find the Maclaurin series for $$\cosh x$$ as a linear combination of the Maclaurin series for $$e^x$$ and $$e^{-x}$$.

By definition,

\begin{eqnarray*}
\cosh x={1\over2}e^x+{1\over2}e^{-x}.
\end{eqnarray*}

Since

\begin{eqnarray*}
e^x=\sum_{n=0}^\infty {x^n\over n!}\quad\mbox{and}\quad e^{-x}=\sum_{n=0}^\infty (-1)^n{x^n\over n!},
\end{eqnarray*}

it follows that

\begin{equation} \label{eq:3.1.12}
\cosh x=\sum_{n=0}^\infty {1\over2}[1+(-1)^n]{x^n\over n!}.
\end{equation}

Since

\begin{eqnarray*}
{1\over2}[1+(-1)^n]=\left\{\begin{array}{cl}
1&\mbox{ if }n=2m,\mbox{ an even integer},\\
0&\mbox{ if }n=2m+1,\mbox{ an odd integer},
\end{array}\right.
\end{eqnarray*}

we can rewrite \eqref{eq:3.1.12} more simply as

\begin{eqnarray*}
\cosh x=\sum_{m=0}^\infty{x^{2m}\over(2m)!}.
\end{eqnarray*}

This result is valid on $$(-\infty,\infty)$$, since this is the open interval of convergence of the Maclaurin series for $$e^x$$ and
$$e^{-x}$$.
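A numerical sanity check (not part of the text): partial sums of the even-power series match $$\cosh x$$ at a sample point:

```python
import math

# Partial sum of cosh x = sum over m >= 0 of x^(2m)/(2m)!.
def cosh_partial(x, terms):
    return sum(x**(2*m) / math.factorial(2*m) for m in range(terms))

x = 1.3
print(cosh_partial(x, 15), math.cosh(x))
```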

### Example $$\PageIndex{6}$$

Suppose

\begin{eqnarray*}
y=\sum_{n=0}^\infty a_n x^n
\end{eqnarray*}

on an open interval $$I$$ that contains the origin.

(a) Express

\begin{eqnarray*}
(2-x)y''+2y
\end{eqnarray*}

as a power series in $$x$$ on $$I$$.

(b) Use the result of part (a) to find necessary and sufficient conditions on the coefficients $$\{a_n\}$$ for $$y$$ to be a solution of the homogeneous equation

\begin{equation} \label{eq:3.1.13}
(2-x)y''+2y=0
\end{equation}

on $$I$$.

(a) From \eqref{eq:3.1.7} with $$x_0=0$$,

\begin{eqnarray*}
y''=\sum_{n=2}^\infty n(n-1)a_nx^{n-2}.
\end{eqnarray*}

Therefore

\begin{equation} \label{eq:3.1.14}
\begin{array}{rcl}
(2-x)y''+2y&=&2y''-xy''+2y\\
&=&\displaystyle{\sum_{n=2}^\infty 2n(n-1)a_nx^{n-2} -\sum_{n=2}^\infty n(n-1)a_nx^{n-1} +\sum_{n=0}^\infty 2a_n x^n}.
\end{array}
\end{equation}

To combine the three series we shift indices in the first two to make their general terms constant multiples of $$x^n$$; thus,

\begin{equation} \label{eq:3.1.15}
\sum_{n=2}^\infty 2n(n-1)a_nx^{n-2}=\sum_{n=0}^\infty2(n+2)(n+1)a_{n+2}x^n
\end{equation}

and

\begin{equation} \label{eq:3.1.16}
\sum_{n=2}^\infty n(n-1)a_nx^{n-1}=\sum_{n=1}^\infty(n+1)na_{n+1}x^n =\sum_{n=0}^\infty(n+1)na_{n+1}x^n,
\end{equation}

where we added a zero term in the last series so that when we substitute from \eqref{eq:3.1.15} and \eqref{eq:3.1.16} into \eqref{eq:3.1.14} all three series will start with $$n=0$$; thus,

\begin{equation} \label{eq:3.1.17}
(2-x)y''+2y=\sum_{n=0}^\infty [2(n+2)(n+1)a_{n+2}-(n+1)na_{n+1}+2a_n]x^n.
\end{equation}

(b) From \eqref{eq:3.1.17} we see that $$y$$ satisfies \eqref{eq:3.1.13} on $$I$$ if

\begin{equation} \label{eq:3.1.18}
2(n+2)(n+1)a_{n+2}-(n+1)na_{n+1}+2a_n=0,\quad n=0,1,2,\dots.
\end{equation}

Conversely, Theorem $$(3.1.6)$$ part (b) implies that if $$y=\sum_{n=0}^\infty a_nx^n$$ satisfies \eqref{eq:3.1.13} on $$I$$, then \eqref{eq:3.1.18} holds.
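The recurrence implied by \eqref{eq:3.1.17}, namely $$a_{n+2}=[(n+1)na_{n+1}-2a_n]/[2(n+2)(n+1)]$$, can be used to generate coefficients numerically. The sketch below (with the assumed initial data $$a_0=1$$, $$a_1=0$$) builds a truncated series solution and checks that the residual of $$(2-x)y''+2y=0$$ is negligible at a sample point inside the interval of convergence:

```python
# Generate a_2, ..., a_(N+1) from the recurrence
# a_(n+2) = ((n+1) n a_(n+1) - 2 a_n) / (2 (n+2) (n+1)),
# starting from the assumed initial data a_0 = 1, a_1 = 0.
N = 40
a = [1.0, 0.0] + [0.0] * N
for n in range(N):
    a[n + 2] = ((n + 1) * n * a[n + 1] - 2 * a[n]) / (2 * (n + 2) * (n + 1))

def y(x):
    return sum(a[n] * x**n for n in range(N + 2))

def ypp(x):
    return sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, N + 2))

# The ODE residual of the truncated series at x = 0.5 (the equation is
# singular only at x = 2, so this point lies well inside the interval).
x = 0.5
residual = (2 - x) * ypp(x) + 2 * y(x)
print(residual)
```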

### Example $$\PageIndex{7}$$

Suppose

\begin{eqnarray*}
y=\sum_{n=0}^\infty a_n (x-1)^n
\end{eqnarray*}

on an open interval $$I$$ that contains $$x_0=1$$. Express the function

\begin{equation} \label{eq:3.1.19}
(1+x)y''+2(x-1)^2y'+3y
\end{equation}

as a power series in $$x-1$$ on $$I$$.

Since we want a power series in $$x-1$$, we rewrite the coefficient of $$y''$$ in \eqref{eq:3.1.19} as $$1+x=2+(x-1)$$, so \eqref{eq:3.1.19} becomes

\begin{eqnarray*}
2y''+(x-1)y''+2(x-1)^2y'+3y.
\end{eqnarray*}

From \eqref{eq:3.1.6} and \eqref{eq:3.1.7} with $$x_0=1$$,

\begin{eqnarray*}
y'=\sum_{n=1}^\infty na_n(x-1)^{n-1}\quad\mbox{and}\quad y''=\sum_{n=2}^\infty n(n-1)a_n(x-1)^{n-2}.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
2y ''&=&\sum_{n=2}^\infty 2n(n-1)a_n(x-1)^{n-2},\\
(x-1)y ''&=&\sum_{n=2}^\infty n(n-1)a_n(x-1)^{n-1},\\
2(x-1)^2y'&=&\sum_{n=1}^\infty2na_n(x-1)^{n+1},\\
3y&=&\sum_{n=0}^\infty 3a_n (x-1)^n.
\end{eqnarray*}

Before adding these four series we shift indices in the first three so that their general terms become constant multiples of $$(x-1)^n$$. This yields

\begin{eqnarray}
2y ''&=&\sum_{n=0}^\infty 2(n+2)(n+1)a_{n+2}(x-1)^n,\label{eq:3.1.20}\\
(x-1)y''&=&\sum_{n=0}^\infty (n+1)na_{n+1}(x-1)^n, \label{eq:3.1.21}\\
2(x-1)^2y'&=&\sum_{n=1}^\infty 2(n-1)a_{n-1}(x-1)^n,\label{eq:3.1.22}\\
3y&=&\sum_{n=0}^\infty 3a_n (x-1)^n, \label{eq:3.1.23}
\end{eqnarray}

where we added initial zero terms to the series in \eqref{eq:3.1.21} and \eqref{eq:3.1.22}. Adding \eqref{eq:3.1.20} through \eqref{eq:3.1.23} yields

\begin{eqnarray*}
(1+x)y''+2(x-1)^2y'+3y&=&2y''+(x-1)y''+2(x-1)^2y'+3y\\
&=&\sum_{n=0}^\infty b_n (x-1)^n,
\end{eqnarray*}

where

\begin{eqnarray}
b_0&=&4a_2+3a_0, \label{eq:3.1.24}\\
b_n&=&2(n+2)(n+1)a_{n+2}+(n+1)na_{n+1}+2(n-1)a_{n-1}+3a_n,\, n\ge1\label{eq:3.1.25}.
\end{eqnarray}

The formula \eqref{eq:3.1.24} for $$b_0$$ can't be obtained by setting $$n=0$$ in \eqref{eq:3.1.25}, since the summation in \eqref{eq:3.1.22} begins with $$n=1$$, while those in \eqref{eq:3.1.20}, \eqref{eq:3.1.21}, and \eqref{eq:3.1.23} begin with $$n=0$$.
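The coefficient formulas \eqref{eq:3.1.24} and \eqref{eq:3.1.25} can be checked numerically. The sketch below (with the hypothetical choice $$a_n=2^{-n}$$ and sample point $$x-1=0.3$$) compares $$\sum b_n(x-1)^n$$ with the expression $$(1+x)y''+2(x-1)^2y'+3y$$ evaluated directly from truncated series for $$y$$, $$y'$$, and $$y''$$:

```python
# Assumed sample coefficients a_n = 1/2^n, and t = x - 1 = 0.3.
a = [1 / 2**n for n in range(60)]
t = 0.3
x = 1 + t

# Truncated series for y, y', y'' about x0 = 1.
y = sum(a[n] * t**n for n in range(50))
yp = sum(n * a[n] * t**(n - 1) for n in range(1, 50))
ypp = sum(n * (n - 1) * a[n] * t**(n - 2) for n in range(2, 50))
direct = (1 + x) * ypp + 2 * t**2 * yp + 3 * y

# Coefficients b_n from the formulas in the example:
# b_0 = 4 a_2 + 3 a_0, and for n >= 1
# b_n = 2(n+2)(n+1) a_(n+2) + (n+1) n a_(n+1) + 2(n-1) a_(n-1) + 3 a_n.
b = [4 * a[2] + 3 * a[0]]
for n in range(1, 48):
    b.append(2 * (n + 2) * (n + 1) * a[n + 2] + (n + 1) * n * a[n + 1]
             + 2 * (n - 1) * a[n - 1] + 3 * a[n])
series = sum(b[n] * t**n for n in range(48))

print(direct, series)
```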