$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$

# 3.5: The Method of Frobenius I


## The Method of Frobenius

In this section we begin to study series solutions of a homogeneous linear second order differential equation with a regular singular point at $$x_0=0$$, so it can be written as

\label{eq:3.5.1}
x^2A(x)y''+xB(x)y'+C(x)y=0,

where $$A$$, $$B$$, $$C$$ are polynomials and $$A(0)\ne0$$.

We'll see that \eqref{eq:3.5.1} always has at least one solution of the form

\begin{eqnarray*}
y=x^r\sum_{n=0}^\infty a_nx^n
\end{eqnarray*}

where $$a_0\ne0$$ and $$r$$ is a suitably chosen number. The method we will use to find solutions of this form and other forms that we'll encounter in the next two sections is called the method of Frobenius, and we'll call them Frobenius solutions.

It can be shown that the power series $$\sum_{n=0}^\infty a_nx^n$$ in a Frobenius solution of \eqref{eq:3.5.1} converges on some open interval $$(-\rho,\rho)$$, where $$0<\rho\le\infty$$. However, since $$x^r$$ may be complex for negative $$x$$ or undefined if $$x=0$$, we'll consider solutions defined for positive values of $$x$$. Easy modifications of our results yield solutions defined for negative values of $$x$$ (Exercise $$(3.5E.54)$$).

We'll restrict our attention to the case where $$A$$, $$B$$, and $$C$$ are polynomials of degree not greater than two, so \eqref{eq:3.5.1} becomes

\label{eq:3.5.2}
x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y=0,

where $$\alpha_i$$, $$\beta_i$$, and $$\gamma_i$$ are real constants and $$\alpha_0\ne0$$. Most equations that arise in applications can be written this way. Some examples are

\begin{eqnarray*}
\alpha x^2y''+\beta xy'+\gamma y&=&0 \quad \mbox{(Euler's equation)},\\
x^2y''+xy'+(x^2-\nu^2)y&=&0 \quad \mbox{ (Bessel's equation)},\\
\mbox{and}\\
xy''+(1-x)y'+\lambda y&=&0, \quad \mbox{ (Laguerre's equation) },
\end{eqnarray*}

where we would multiply the last equation through by $$x$$ to put it in the form \eqref{eq:3.5.2}. However, the method of Frobenius can be extended to the case where $$A$$, $$B$$, and $$C$$ are functions that can be represented by power series in $$x$$ on some interval that contains zero, and $$A(0)\ne0$$ (Exercises $$(3.5E.57)$$ and $$(3.5E.38)$$).

The next two theorems will enable us to develop systematic methods for finding Frobenius solutions of \eqref{eq:3.5.2}.

Theorem $$\PageIndex{1}$$

Let

\begin{eqnarray*}
Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y,
\end{eqnarray*}

and define

\begin{eqnarray*}
p_0(r)&=&\alpha_0r(r-1)+\beta_0r+\gamma_0,\\
p_1(r)&=&\alpha_1r(r-1)+\beta_1r+\gamma_1,\\
p_2(r)&=&\alpha_2r(r-1)+\beta_2r+\gamma_2.\\
\end{eqnarray*}

Suppose the series

\label{eq:3.5.3}
y=\sum_{n=0}^\infty a_nx^{n+r}

converges on $$(0,\rho)$$. Then

\label{eq:3.5.4}
Ly=\sum_{n=0}^\infty b_nx^{n+r}

on $$(0,\rho),$$ where

\label{eq:3.5.5}
\begin{array}{ccl}
b_0&=&p_0(r)a_0,\\
b_1&=&p_0(r+1)a_1+p_1(r)a_0,\\
b_n&=&p_0(n+r)a_n+p_1(n+r-1)a_{n-1}+p_2(n+r-2)a_{n-2},\quad n\ge2.
\end{array}

Proof

We begin by showing that if $$y$$ is given by \eqref{eq:3.5.3} and $$\alpha$$, $$\beta$$, and $$\gamma$$ are constants, then

\label{eq:3.5.6}
\alpha x^2y''+\beta xy'+\gamma y= \sum_{n=0}^\infty p(n+r)a_nx^{n+r},

where

\begin{eqnarray*}
p(r)=\alpha r(r-1)+\beta r +\gamma.
\end{eqnarray*}

Differentiating \eqref{eq:3.5.3} twice yields

\label{eq:3.5.7}
y'=\sum_{n=0}^\infty (n+r)a_nx^{n+r-1}

and

\label{eq:3.5.8}
y''=\sum_{n=0}^\infty (n+r)(n+r-1)a_nx^{n+r-2}.

Multiplying \eqref{eq:3.5.7} by $$x$$ and \eqref{eq:3.5.8} by $$x^2$$ yields

\begin{eqnarray*}
xy'=\sum_{n=0}^\infty (n+r)a_nx^{n+r}
\end{eqnarray*}

and

\begin{eqnarray*}
x^2y''=\sum_{n=0}^\infty (n+r)(n+r-1)a_nx^{n+r}.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
\alpha x^2y''+\beta xy'+\gamma y &=&\sum_{n=0}^\infty\left[\alpha(n+r)(n+r-1)+\beta(n+r)+\gamma\right]a_n x^{n+r}\\
&=&\sum_{n=0}^\infty p(n+r)a_nx^{n+r},
\end{eqnarray*}

which proves \eqref{eq:3.5.6}.

Multiplying \eqref{eq:3.5.6} by $$x$$ yields

\label{eq:3.5.9}
x(\alpha x^2y''+\beta xy'+\gamma y)=\sum_{n=0}^\infty p(n+r) a_nx^{n+r+1}= \sum_{n=1}^\infty p(n+r-1)a_{n-1}x^{n+r}.

Multiplying \eqref{eq:3.5.6} by $$x^2$$ yields

\label{eq:3.5.10}
x^2(\alpha x^2y''+\beta xy'+\gamma y)=\sum_{n=0}^\infty p(n+r)a_nx^{n+r+2}= \sum_{n=2}^\infty p(n+r-2)a_{n-2}x^{n+r}.

To use these results, we rewrite

\begin{eqnarray*}
Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y
\end{eqnarray*}

as

\label{eq:3.5.11}
\begin{array}{ccl}
Ly&=&\left(\alpha_0x^2y''+\beta_0xy' +\gamma_0y\right) + x\left(\alpha_1x^2y''+\beta_1xy'+\gamma_1y\right) \\
&&+x^2\left(\alpha_2x^2y''+\beta_2xy'+\gamma_2y\right).
\end{array}

From \eqref{eq:3.5.6} with $$p=p_0$$,

\begin{eqnarray*}
\alpha_0x^2y''+\beta_0xy'+\gamma_0y=\sum_{n=0}^\infty p_0(n+r)a_nx^{n+r}.
\end{eqnarray*}

From \eqref{eq:3.5.9} with $$p=p_1$$,

\begin{eqnarray*}
x\left(\alpha_1x^2y''+\beta_1xy'+\gamma_1y\right)=\sum_{n=1}^\infty p_1(n+r-1)a_{n-1}x^{n+r}.
\end{eqnarray*}

From \eqref{eq:3.5.10} with $$p=p_2$$,

\begin{eqnarray*}
x^2\left(\alpha_2x^2y''+\beta_2xy'+\gamma_2y\right)=\sum_{n=2}^\infty p_2(n+r-2)a_{n-2}x^{n+r}.
\end{eqnarray*}

Therefore we can rewrite \eqref{eq:3.5.11} as

\begin{eqnarray*}
Ly=\sum_{n=0}^\infty p_0(n+r)a_nx^{n+r}+ \sum_{n=1}^\infty p_1(n+r-1)a_{n-1}x^{n+r}\\
+\sum_{n=2}^\infty p_2(n+r-2)a_{n-2}x^{n+r},
\end{eqnarray*}

or

\begin{eqnarray*}
Ly&=& p_0(r)a_0x^r+\left[p_0(r+1)a_1+p_1(r)a_0\right]x^{r+1}\\
&& +\sum_{n=2}^\infty\left[p_0(n+r)a_n+p_1(n+r-1)a_{n-1} +p_2(n+r-2)a_{n-2}\right]x^{n+r},
\end{eqnarray*}

which implies \eqref{eq:3.5.4} with $$\{b_n\}$$ defined as in \eqref{eq:3.5.5}.

Theorem $$\PageIndex{2}$$

Let

\begin{eqnarray*}
Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y,
\end{eqnarray*}

where $$\alpha_0\ne0,$$ and define

\begin{eqnarray*}
p_0(r)&=&\alpha_0r(r-1)+\beta_0r+\gamma_0,\\
p_1(r)&=&\alpha_1r(r-1)+\beta_1r+\gamma_1,\\
p_2(r)&=&\alpha_2r(r-1)+\beta_2r+\gamma_2.\\
\end{eqnarray*}

Suppose $$r$$ is a real number such that $$p_0(n+r)$$ is nonzero for all positive integers $$n.$$ Define

\label{eq:3.5.12}
\begin{array}{ccl}
a_0(r)&=&1,\\
a_1(r)&=&-\displaystyle{p_1(r)\over p_0(r+1)},\\
a_n(r)&=&-\displaystyle{p_1(n+r-1)a_{n-1}(r)+p_2(n+r-2)a_{n-2}(r)\over p_0(n+r)},\quad n\ge2.
\end{array}

Then the Frobenius series

\label{eq:3.5.13}
y(x,r)=x^r\sum_{n=0}^\infty a_n(r)x^n

converges and satisfies

\label{eq:3.5.14}
Ly(x,r)=p_0(r)x^r

on the interval $$(0,\rho),$$ where $$\rho$$ is the distance from the origin to the nearest zero of $$A(x)=\alpha_0+\alpha_1 x+\alpha_2 x^2$$ in the complex plane. (If $$A$$ is constant, then $$\rho=\infty$$.)

Proof

If $$\{a_n(r)\}$$ is determined by the recurrence relation \eqref{eq:3.5.12} then substituting $$a_n=a_n(r)$$ into \eqref{eq:3.5.5} yields $$b_0=p_0(r)$$ and $$b_n=0$$ for $$n\ge1$$, so \eqref{eq:3.5.4} reduces to \eqref{eq:3.5.14}. We omit the proof that the series \eqref{eq:3.5.13} converges on $$(0,\rho)$$.

If $$\alpha_i=\beta_i=\gamma_i=0$$ for $$i=1$$, $$2,$$ then $$Ly=0$$ reduces to the Euler equation

\begin{eqnarray*}
\alpha_0x^2y''+\beta_0xy'+\gamma_0y=0.
\end{eqnarray*}

Theorem $$(3.4.3)$$ shows that the solutions of this equation are determined by the zeros of the indicial polynomial

\begin{eqnarray*}
p_0(r)=\alpha_0r(r-1)+\beta_0r+\gamma_0.
\end{eqnarray*}

Since \eqref{eq:3.5.14} implies that this is also true for the solutions of $$Ly=0$$, we'll also say that $$p_0$$ is the $$\textcolor{blue}{\mbox{indicial polynomial}}$$ of \eqref{eq:3.5.2}, and that $$p_0(r)=0$$ is the $$\textcolor{blue}{\mbox{indicial equation}}$$ of $$Ly=0$$. We'll consider only cases where the indicial equation has real roots $$r_1$$ and $$r_2$$, with $$r_1\ge r_2$$.
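Since $$p_0(r)=\alpha_0r(r-1)+\beta_0r+\gamma_0=\alpha_0r^2+(\beta_0-\alpha_0)r+\gamma_0$$ is just a quadratic in $$r$$, the indicial roots are easy to compute. The following is a minimal sketch in Python (the function name `indicial_roots` is our own, not from the text), restricted to the real-root case considered here:

```python
import math

def indicial_roots(alpha0, beta0, gamma0):
    """Real roots r1 >= r2 of the indicial polynomial
    p0(r) = alpha0*r*(r-1) + beta0*r + gamma0,
    i.e. alpha0*r^2 + (beta0 - alpha0)*r + gamma0 = 0."""
    a, b, c = alpha0, beta0 - alpha0, gamma0
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex indicial roots; not treated in this section")
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    return max(r1, r2), min(r1, r2)

# p0(r) = 2r(r-1) + 9r + 6 = (2r+3)(r+2), as in equation (3.5.16):
print(indicial_roots(2, 9, 6))  # (-1.5, -2.0)
```

For an Euler equation the same roots determine the solutions directly (Theorem $$(3.4.3)$$); here they are only the starting point for the recurrence relations that follow.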

Theorem $$\PageIndex{3}$$

Let $$L$$ and $$\{a_n(r)\}$$ be as in Theorem $$(3.5.2)$$, and suppose the indicial equation $$p_0(r)=0$$ of $$Ly=0$$ has real roots $$r_1$$ and $$r_2,$$ where $$r_1\ge r_2.$$ Then

\begin{eqnarray*}
y_1(x)=y(x,r_1)=x^{r_1}\sum_{n=0}^\infty a_n(r_1)x^n
\end{eqnarray*}

is a Frobenius solution of $$Ly=0$$. Moreover, if $$r_1-r_2$$ isn't an integer then

\begin{eqnarray*}
y_2(x)=y(x,r_2)=x^{r_2}\sum_{n=0}^\infty a_n(r_2)x^n
\end{eqnarray*}

is also a Frobenius solution of $$Ly=0,$$ and $$\{y_1,y_2\}$$ is a fundamental set of solutions.

Proof

Since $$r_1$$ and $$r_2$$ are roots of $$p_0(r)=0$$, the indicial polynomial can be factored as

\label{eq:3.5.15}
p_0(r)=\alpha_0(r-r_1)(r-r_2).

Therefore

\begin{eqnarray*}
p_0(n+r_1)=n\alpha_0(n+r_1-r_2),
\end{eqnarray*}

which is nonzero if $$n>0$$, since $$r_1-r_2\ge0$$. Therefore the assumptions of Theorem $$(3.5.2)$$ hold with $$r=r_1$$, and
\eqref{eq:3.5.14} implies that $$Ly_1=p_0(r_1)x^{r_1}=0$$.

Now suppose $$r_1-r_2$$ isn't an integer. From \eqref{eq:3.5.15},

\begin{eqnarray*}
p_0(n+r_2)=n\alpha_0\left(n-(r_1-r_2)\right),
\end{eqnarray*}

which is nonzero for all positive integers $$n$$, since $$r_1-r_2$$ isn't an integer.

Hence, the assumptions of Theorem $$(3.5.2)$$ hold with $$r=r_2$$, and \eqref{eq:3.5.14} implies that $$Ly_2=p_0(r_2)x^{r_2}=0$$. We leave the proof that $$\{y_1,y_2\}$$ is a fundamental set of solutions as an exercise (Exercise $$(3.5E.52)$$).

It isn't always possible to obtain explicit formulas for the coefficients in Frobenius solutions. However, we can always set up the recurrence relations and use them to compute as many coefficients as we want. The next example illustrates this.

Example $$\PageIndex{1}$$

Find a fundamental set of Frobenius solutions of

\label{eq:3.5.16}
2x^2(1+x+x^2)y''+x(9+11x+11x^2)y'+(6+10x+7x^2)y=0.

Compute just the first six coefficients $$a_0$$, $$\dots$$, $$a_5$$ in each solution.

For the given equation, the polynomials defined in Theorem $$(3.5.2)$$ are

\begin{eqnarray*}
\begin{array}{ccccc}
p_0(r)&=&2r(r-1)+9r+6&=&(2r+3)(r+2),\\
p_1(r)&=&2r(r-1)+11r+10&=&(2r+5)(r+2),\\
p_2(r)&=&2r(r-1)+11r+7&=&(2r+7)(r+1).
\end{array}
\end{eqnarray*}

The zeros of the indicial polynomial $$p_0$$ are $$r_1=-3/2$$ and $$r_2=-2$$, so $$r_1-r_2=1/2$$. Therefore Theorem $$(3.5.3)$$ implies that

\label{eq:3.5.17}
y_1=x^{-3/2}\sum_{n=0}^\infty a_n(-3/2)x^n \quad \mbox{ and} \quad y_2=x^{-2}\sum_{n=0}^\infty a_n(-2)x^n

form a fundamental set of Frobenius solutions of \eqref{eq:3.5.16}. To find the coefficients in these series, we use the recurrence relation of Theorem $$(3.5.2)$$; thus,

\begin{eqnarray*}
a_0(r)&=&1,\\
a_1(r)&=&-\displaystyle{p_1(r)\over p_0(r+1)} =-\displaystyle{(2r+5)(r+2)\over(2r+5)(r+3)} =-\displaystyle{r+2\over r+3},\\
a_n(r)&=&-\displaystyle{p_1(n+r-1)a_{n-1}(r)+p_2(n+r-2)a_{n-2}(r)\over p_0(n+r)}\\
&=&-\displaystyle{(n+r+1)(2n+2r+3)a_{n-1}(r) +(n+r-1)(2n+2r+3)a_{n-2}(r)\over(n+r+2)(2n+2r+3)}\\
&=&-\displaystyle{(n+r+1)a_{n-1}(r)+(n+r-1)a_{n-2}(r)\over n+r+2},\quad n\ge2.
\end{eqnarray*}

Setting $$r=-3/2$$ in these equations yields

\label{eq:3.5.18}
\begin{array}{ccl}
a_0(-3/2)&=&1,\\
a_1(-3/2)&=&-1/3,\\
a_n(-3/2)&=&-\displaystyle{(2n-1)a_{n-1}(-3/2)+(2n-5)a_{n-2}(-3/2)\over2n+1},\quad n\ge2,
\end{array}

and setting $$r=-2$$ yields

\label{eq:3.5.19}
\begin{array}{ccl}
a_0(-2)&=&1,\\
a_1(-2)&=&0,\\
a_n(-2)&=&-\displaystyle{(n-1)a_{n-1}(-2)+(n-3)a_{n-2}(-2)\over n},\quad n\ge2.
\end{array}

Calculating with \eqref{eq:3.5.18} and \eqref{eq:3.5.19} and substituting the results into \eqref{eq:3.5.17} yields the fundamental set of Frobenius solutions

\begin{eqnarray*}
y_1&=&x^{-3/2}\left(1-{1\over3}x+{2\over5}x^2-{5\over21}x^3 +{7\over135}x^4+{76\over1155}x^5+\cdots\right),\\
y_2&=&x^{-2}\left(1+{1\over2}x^2-{1\over3}x^3+{1\over8}x^4+{1\over30}x^5 +\cdots\right).
\end{eqnarray*}
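Coefficients like these can be checked by running the recurrence with exact rational arithmetic. Here is a short sketch in Python (the function name is our own choice), using the simplified recurrence $$a_n(r)=-[(n+r+1)a_{n-1}(r)+(n+r-1)a_{n-2}(r)]/(n+r+2)$$ derived in this example:

```python
from fractions import Fraction as F

def frobenius_coeffs(r, N):
    """First N+1 coefficients a_0(r), ..., a_N(r) for equation (3.5.16),
    using a_1(r) = -(r+2)/(r+3) and, for n >= 2,
    a_n(r) = -[(n+r+1)a_{n-1}(r) + (n+r-1)a_{n-2}(r)] / (n+r+2).
    Assumes N >= 1 and that r is an indicial root."""
    a = [F(1), -(r + 2) / (r + 3)]
    for n in range(2, N + 1):
        a.append(-((n + r + 1) * a[n - 1] + (n + r - 1) * a[n - 2])
                 / (n + r + 2))
    return a

print([str(c) for c in frobenius_coeffs(F(-3, 2), 5)])
# ['1', '-1/3', '2/5', '-5/21', '7/135', '76/1155']
print([str(c) for c in frobenius_coeffs(F(-2, 1), 5)])
# ['1', '0', '1/2', '-1/3', '1/8', '1/30']
```

Using `Fraction` rather than floating point reproduces the displayed fractions exactly, which is convenient when comparing against hand computation.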

## Special Cases With Two Term Recurrence Relations

For $$n\ge2$$, the recurrence relation \eqref{eq:3.5.12} of Theorem $$(3.5.2)$$ involves the three coefficients $$a_n(r)$$, $$a_{n-1}(r)$$, and $$a_{n-2}(r)$$. We'll now consider some special cases where \eqref{eq:3.5.12} reduces to a two term recurrence relation; that is, a relation involving only $$a_n(r)$$ and $$a_{n-1}(r)$$, or only $$a_n(r)$$ and $$a_{n-2}(r)$$. This simplification often makes it possible to obtain explicit formulas for the coefficients of Frobenius solutions.

We first consider equations of the form

\begin{eqnarray*}
x^2(\alpha_0+\alpha_1x)y''+x(\beta_0+\beta_1x)y'+(\gamma_0+\gamma_1x)y=0
\end{eqnarray*}

with $$\alpha_0\ne0$$. For this equation, $$\alpha_2=\beta_2=\gamma_2=0$$, so $$p_2\equiv0$$ and the recurrence relations in Theorem $$(3.5.2)$$ simplify to

\label{eq:3.5.20}
\begin{array}{ccl}
a_0(r)&=&1,\\
a_n(r)&=&-\displaystyle{p_1(n+r-1)\over p_0(n+r)}a_{n-1}(r),\quad n\ge1.
\end{array}

Example $$\PageIndex{2}$$

Find a fundamental set of Frobenius solutions of

\label{eq:3.5.21}
x^2(3+x)y''+5x(1+x)y'-(1-4x)y=0.

Give explicit formulas for the coefficients in the solutions.

For this equation, the polynomials defined in Theorem $$(3.5.2)$$ are

\begin{eqnarray*}
\begin{array}{ccccc}
p_0(r)&=&3r(r-1)+5r-1&=&(3r-1)(r+1),\\
p_1(r)&=&r(r-1)+5r+4&=&(r+2)^2,\\
p_2(r)&=&0.
\end{array}
\end{eqnarray*}

The zeros of the indicial polynomial $$p_0$$ are $$r_1=1/3$$ and $$r_2=-1$$, so $$r_1-r_2=4/3$$. Therefore Theorem $$(3.5.3)$$ implies that

\begin{eqnarray*}
y_1=x^{1/3}\sum_{n=0}^\infty a_n(1/3)x^n \quad\mbox{and}\quad y_2=x^{-1}\sum_{n=0}^\infty a_n(-1)x^n
\end{eqnarray*}

form a fundamental set of Frobenius solutions of \eqref{eq:3.5.21}. To find the coefficients in these series, we use the recurrence relations \eqref{eq:3.5.20}; thus,

\label{eq:3.5.22}
\begin{array}{ccl}
a_0(r)&=&1,\\
a_n(r)&=&-\displaystyle{p_1(n+r-1)\over p_0(n+r)}a_{n-1}(r)\\
&=&-\displaystyle{(n+r+1)^2\over(3n+3r-1)(n+r+1)}a_{n-1}(r)\\
&=&-\displaystyle{n+r+1\over3n+3r-1}a_{n-1}(r),\quad n\ge1.
\end{array}

Setting $$r=1/3$$ in \eqref{eq:3.5.22} yields

\begin{eqnarray*}
a_0(1/3)&=&1,\\
a_n(1/3)&=&-\displaystyle{3n+4\over9n}a_{n-1}(1/3),\quad n\ge1.
\end{eqnarray*}

Using the product notation introduced in Section $$3.2$$ and proceeding as in the examples of that section, we obtain

\begin{eqnarray*}
a_n(1/3)={(-1)^n\prod_{j=1}^n(3j+4)\over9^nn!},\quad n\ge0.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
y_1=x^{1/3}\sum_{n=0}^\infty{(-1)^n\prod_{j=1}^n(3j+4)\over9^nn!}x^n
\end{eqnarray*}

is a Frobenius solution of \eqref{eq:3.5.21}.

Setting $$r=-1$$ in \eqref{eq:3.5.22} yields

\begin{eqnarray*}
a_0(-1)&=&1,\\
a_n(-1)&=&-\displaystyle{n\over3n-4}a_{n-1}(-1),\quad n\ge1,
\end{eqnarray*}

so

\begin{eqnarray*}
a_n(-1)={(-1)^nn!\over\prod_{j=1}^n(3j-4)}.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
y_2=x^{-1}\sum_{n=0}^\infty{(-1)^nn!\over\prod_{j=1}^n(3j-4)}x^n
\end{eqnarray*}

is a Frobenius solution of \eqref{eq:3.5.21}, and $$\{y_1,y_2\}$$ is a fundamental set of solutions.
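When a closed formula like the one for $$a_n(1/3)$$ is available, it is worth confirming that it agrees with the recurrence it was derived from. A minimal sketch in Python (function names are ours), comparing the two-term recurrence $$a_n(1/3)=-\frac{3n+4}{9n}a_{n-1}(1/3)$$ with the product formula:

```python
from fractions import Fraction as F
from math import factorial, prod

def a_recurrence(N):
    """a_0(1/3), ..., a_N(1/3) for (3.5.21) via
    a_n = -(3n+4)/(9n) * a_{n-1}."""
    a = [F(1)]
    for n in range(1, N + 1):
        a.append(F(-(3 * n + 4), 9 * n) * a[n - 1])
    return a

def a_closed(n):
    """Closed form a_n(1/3) = (-1)^n prod_{j=1}^n (3j+4) / (9^n n!)."""
    return F((-1) ** n * prod(3 * j + 4 for j in range(1, n + 1)),
             9 ** n * factorial(n))

# The recurrence and the closed form agree term by term:
assert all(a == a_closed(n) for n, a in enumerate(a_recurrence(10)))
```

The same pattern works for $$a_n(-1)$$, with the product moved to the denominator.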

We now consider equations of the form

\label{eq:3.5.23}
x^2(\alpha_0+\alpha_2x^2)y''+x(\beta_0+\beta_2x^2)y'+ (\gamma_0+\gamma_2x^2)y=0

with $$\alpha_0\ne0$$. For this equation, $$\alpha_1=\beta_1=\gamma_1=0$$, so $$p_1\equiv0$$ and the recurrence relations in Theorem $$(3.5.2)$$ simplify to

\begin{eqnarray*}
a_0(r)&=&1,\\
a_1(r)&=&0,\\
a_n(r)&=&-\displaystyle{p_2(n+r-2)\over p_0(n+r)}a_{n-2}(r),\quad n\ge2.
\end{eqnarray*}

Since $$a_1(r)=0$$, the last equation implies that $$a_n(r)=0$$ if $$n$$ is odd, so the Frobenius solutions are of the form

\begin{eqnarray*}
y(x,r)=x^r\sum_{m=0}^\infty a_{2m}(r)x^{2m},
\end{eqnarray*}

where

\label{eq:3.5.24}
\begin{array}{ccl}
a_0(r)&=&1,\\
a_{2m}(r)&=&-\displaystyle{p_2(2m+r-2)\over p_0(2m+r)}a_{2m-2}(r),\quad m\ge1.
\end{array}

Example $$\PageIndex{3}$$

Find a fundamental set of Frobenius solutions of

\label{eq:3.5.25}
x^2(2-x^2)y''-x(3+4x^2)y'+(2-2x^2)y=0.

Give explicit formulas for the coefficients in the solutions.

For this equation, the polynomials defined in Theorem $$(3.5.2)$$ are

\begin{eqnarray*}
\begin{array}{ccccc}
p_0(r)&=&2r(r-1)-3r+2&=&(r-2)(2r-1),\\
p_1(r)&=&0,\\
p_2(r)&=&-\left[r(r-1)+4r+2\right]&=&-(r+1)(r+2).
\end{array}
\end{eqnarray*}

The zeros of the indicial polynomial $$p_0$$ are $$r_1=2$$ and $$r_2=1/2$$, so $$r_1-r_2=3/2$$. Therefore Theorem $$(3.5.3)$$ implies that

\begin{eqnarray*}
y_1=x^2\sum_{m=0}^\infty a_{2m}(2)x^{2m}\quad\mbox{and}\quad y_2=x^{1/2}\sum_{m=0}^\infty a_{2m}(1/2)x^{2m}
\end{eqnarray*}

form a fundamental set of Frobenius solutions of \eqref{eq:3.5.25}. To find the coefficients in these series, we use the recurrence relation \eqref{eq:3.5.24}; thus,

\label{eq:3.5.26}
\begin{array}{ccl}
a_0(r)&=&1,\\
a_{2m}(r)&=&-\displaystyle{p_2(2m+r-2)\over p_0(2m+r)}a_{2m-2}(r)\\
&=&\displaystyle{(2m+r-1)(2m+r)\over(2m+r-2)(4m+2r-1)}a_{2m-2}(r),\quad m\ge1.
\end{array}

Setting $$r=2$$ in \eqref{eq:3.5.26} yields

\begin{eqnarray*}
a_0(2)&=&1,\\
a_{2m}(2)&=&\displaystyle{(2m+1)(2m+2)\over2m(4m+3)}a_{2m-2}(2),\quad m\ge1,
\end{eqnarray*}

so

\begin{eqnarray*}
a_{2m}(2)=(m+1)\prod_{j=1}^m{2j+1\over4j+3}.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
y_1=x^2\sum_{m=0}^\infty (m+1)\left(\prod_{j=1}^m{2j+1\over4j+3}\right)x^{2m}
\end{eqnarray*}

is a Frobenius solution of \eqref{eq:3.5.25}.

Setting $$r=1/2$$ in \eqref{eq:3.5.26} yields

\begin{eqnarray*}
a_0(1/2)&=&1,\\
a_{2m}(1/2)&=&\displaystyle{(4m-1)(4m+1)\over8m(4m-3)}a_{2m-2}(1/2),\quad m\ge1,
\end{eqnarray*}

so

\begin{eqnarray*}
a_{2m}(1/2)={1\over8^mm!}\prod_{j=1}^m{(4j-1)(4j+1)\over4j-3}.
\end{eqnarray*}

Therefore

\begin{eqnarray*}
y_2=x^{1/2}\sum_{m=0}^\infty {1\over8^mm!}\left(\prod_{j=1}^m{(4j-1)(4j+1)\over4j-3}\right)x^{2m}
\end{eqnarray*}

is a Frobenius solution of \eqref{eq:3.5.25} and $$\{y_1,y_2\}$$ is a fundamental set of solutions.
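The even-index coefficients in this example can be verified the same way as before. A short sketch in Python (function names are ours), comparing the recurrence $$a_{2m}(2)=\frac{(2m+1)(2m+2)}{2m(4m+3)}a_{2m-2}(2)$$ against the closed form $$a_{2m}(2)=(m+1)\prod_{j=1}^m\frac{2j+1}{4j+3}$$:

```python
from fractions import Fraction as F

def a2m_recurrence(M):
    """a_0(2), a_2(2), ..., a_{2M}(2) for (3.5.25) via
    a_{2m}(2) = (2m+1)(2m+2) / (2m(4m+3)) * a_{2m-2}(2)."""
    a = [F(1)]
    for m in range(1, M + 1):
        a.append(F((2 * m + 1) * (2 * m + 2), 2 * m * (4 * m + 3)) * a[m - 1])
    return a

def a2m_closed(m):
    """Closed form a_{2m}(2) = (m+1) prod_{j=1}^m (2j+1)/(4j+3)."""
    out = F(m + 1)
    for j in range(1, m + 1):
        out *= F(2 * j + 1, 4 * j + 3)
    return out

assert all(a == a2m_closed(m) for m, a in enumerate(a2m_recurrence(8)))
```

Note that the list index $$m$$ corresponds to the coefficient of $$x^{2m}$$ in the series, since the odd coefficients all vanish.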

Thus far, we considered only the case where the indicial equation has real roots that don't differ by an integer, which allows us to apply Theorem $$(3.5.3)$$. However, for equations of the form \eqref{eq:3.5.23}, the sequence $$\{a_{2m}(r)\}$$ in \eqref{eq:3.5.24} is defined for $$r=r_2$$ if $$r_1-r_2$$ isn't an $$\textcolor{blue}{\mbox{even}}$$ integer. It can be shown (Exercise $$(3.5E.56)$$) that in this case

\begin{eqnarray*}
y_1=x^{r_1}\sum_{m=0}^\infty a_{2m}(r_1)x^{2m}\quad \mbox{ and }\quad y_2=x^{r_2}\sum_{m=0}^\infty a_{2m}(r_2)x^{2m}
\end{eqnarray*}

form a fundamental set of Frobenius solutions of \eqref{eq:3.5.23}.

As we said at the end of Section $$3.2$$, if you're interested in actually using series to compute numerical approximations to solutions of a differential equation, then whether or not there's a simple closed form for the coefficients is essentially irrelevant; recursive computation is usually more efficient. Since it's also laborious, we encourage you to write short programs to implement recurrence relations on a calculator or computer, even in exercises where this is not specifically required.

In practical use of the method of Frobenius when $$x_0=0$$ is a regular singular point, we're interested in how well the functions

\begin{eqnarray*}
y_N(x;r_i)=x^{r_i}\sum_{n=0}^N a_n(r_i)x^n,\quad i=1,2,
\end{eqnarray*}

approximate solutions to a given equation when $$r_i$$ is a zero of the indicial polynomial. In dealing with the corresponding problem for the case where $$x_0=0$$ is an ordinary point, we used numerical integration to solve the differential equation subject to initial conditions $$y(0)=a_0,\quad y'(0)=a_1$$, and compared the result with values of the Taylor polynomial

\begin{eqnarray*}
T_N(x)=\sum_{n=0}^Na_nx^n.
\end{eqnarray*}

We can't do that here, since in general we can't prescribe arbitrary initial values for solutions of a differential equation at a singular point. Therefore, motivated by Theorem $$(3.5.2)$$ (specifically, \eqref{eq:3.5.14}), we suggest the following procedure.

Verification Procedure

Let $$L$$ and $$y_N(x;r_i)$$ be defined by

\begin{eqnarray*}
Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y
\end{eqnarray*}

and

\begin{eqnarray*}
y_N(x;r_i)=x^{r_i}\sum_{n=0}^N a_n(r_i)x^n,
\end{eqnarray*}

where the coefficients $$\{a_n(r_i)\}_{n=0}^N$$ are computed as in \eqref{eq:3.5.12} of Theorem $$(3.5.2)$$. Compute the error

\label{eq:3.5.27}
E_N(x;r_i)=x^{-r_i}Ly_N(x;r_i)/\alpha_0

for various values of $$N$$ and various values of $$x$$ in the interval $$(0,\rho)$$, with $$\rho$$ as defined in Theorem $$(3.5.2)$$.

The multiplier $$x^{-r_i}/\alpha_0$$ on the right of \eqref{eq:3.5.27} eliminates the effects of small or large values of $$x^{r_i}$$ near $$x=0$$, and of multiplication by an arbitrary constant. In some exercises you will be asked to estimate the maximum value of $$E_N(x; r_i)$$ on an interval $$(0,\delta]$$ by computing $$E_N(x_m;r_i)$$ at the $$M$$ points $$x_m=m\delta/M,\; m=1$$, $$2$$, $$\dots$$, $$M$$, and finding the maximum of the absolute values:

\label{eq:3.5.28}
\sigma_N(\delta)=\max\{|E_N(x_m;r_i)|,\; m=1,2,\dots,M\}.

(For simplicity, this notation ignores the dependence of the right side of the equation on $$i$$ and $$M$$.)

To implement this procedure, you'll have to write a computer program to calculate $$\{a_n(r_i)\}$$ from the applicable recurrence relation, and to evaluate $$E_N(x;r_i)$$.
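Here is a hedged sketch of such a program for equation \eqref{eq:3.5.16} (in Python; all function names are our own). It uses the observation that when $$a_0,\dots,a_N$$ satisfy \eqref{eq:3.5.12} and $$p_0(r)=0$$, Theorem $$(3.5.1)$$ leaves only the $$b_{N+1}$$ and $$b_{N+2}$$ terms of \eqref{eq:3.5.5} in $$Ly_N$$, so $$E_N$$ reduces to two terms:

```python
from fractions import Fraction as F

# Equation (3.5.16): 2x^2(1+x+x^2)y'' + x(9+11x+11x^2)y' + (6+10x+7x^2)y = 0
ALPHA = (2, 2, 2)
BETA = (9, 11, 11)
GAMMA = (6, 10, 7)

def p(i, r):
    """p_i(r) = alpha_i r(r-1) + beta_i r + gamma_i, as in Theorem 3.5.1."""
    return ALPHA[i] * r * (r - 1) + BETA[i] * r + GAMMA[i]

def coeffs(r, N):
    """a_0(r), ..., a_N(r) from the recurrence relation (3.5.12)."""
    a = [F(1)]
    for n in range(1, N + 1):
        s = p(1, n + r - 1) * a[n - 1]
        if n >= 2:
            s += p(2, n + r - 2) * a[n - 2]
        a.append(-s / p(0, n + r))
    return a

def E(x, r, N):
    """E_N(x;r) = x^{-r} L y_N(x;r) / alpha_0 from (3.5.27).
    Only the b_{N+1} and b_{N+2} terms of (3.5.5) survive when r is
    an indicial root and a_0..a_N satisfy (3.5.12); assumes N >= 1."""
    a = coeffs(r, N)
    b1 = p(1, N + r) * a[N] + p(2, N + r - 1) * a[N - 1]
    b2 = p(2, N + r) * a[N]
    return (b1 * x ** (N + 1) + b2 * x ** (N + 2)) / ALPHA[0]

# The error at x = 1/10 shrinks rapidly as N grows (here rho = 1):
print(float(E(F(1, 10), F(-3, 2), 5)), float(E(F(1, 10), F(-3, 2), 10)))
```

For other equations of the form \eqref{eq:3.5.2}, only the three tuples of coefficients need to change; the maximum $$\sigma_N(\delta)$$ of \eqref{eq:3.5.28} is then a loop over sample points $$x_m=m\delta/M$$.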

The next exercise set contains five exercises specifically identified as laboratory exercises that ask you to implement the verification procedure. These particular exercises were chosen arbitrarily; you can just as well formulate such laboratory problems for any of the equations in Exercises $$(3.5E.1)$$ to $$(3.5E.10)$$, $$(3.5E.14)$$ to $$(3.5E.25)$$, and $$(3.5E.28)$$ to $$(3.5E.51)$$.