# 7.5: The Method of Frobenius I


- William F. Trench
- Andrew G. Cowles Distinguished Professor Emeritus (Mathematics) at Trinity University

Sections 7.5-7.7 deal with three distinct cases satisfying the assumptions introduced in Section 7.4. In all three cases, (A) has at least one solution of the form

\[ y_1=x^r\sum_{n=0}^\infty a_nx^n, \nonumber \]

where \(r\) need not be an integer. The problem is that there are three possibilities - each requiring a different approach - for the form of a second solution \(y_2\) such that \(\{y_1,y_2\}\) is a fundamental pair of solutions of (A).

In this section we begin to study series solutions of a homogeneous linear second order differential equation with a regular singular point at \(x_0=0\), so it can be written as

\[\label{eq:7.5.1} x^2A(x)y''+xB(x)y'+C(x)y=0,\]

where \(A\), \(B\), \(C\) are polynomials and \(A(0)\ne0\).

We’ll see that Equation \ref{eq:7.5.1} always has at least one solution of the form

\[y=x^r\sum_{n=0}^\infty a_nx^n \nonumber\]

where \(a_0\ne0\) and \(r\) is a suitably chosen number. The method we will use to find solutions of this form and other forms that we’ll encounter in the next two sections is called *the method of Frobenius*, and we’ll call them *Frobenius solutions*.

It can be shown that the power series \(\sum_{n=0}^\infty a_nx^n\) in a Frobenius solution of Equation \ref{eq:7.5.1} converges on some open interval \((-\rho,\rho)\), where \(0<\rho\le\infty\). However, since \(x^r\) may be complex for negative \(x\) or undefined if \(x=0\), we’ll consider solutions defined for positive values of \(x\). Easy modifications of our results yield solutions defined for negative values of \(x\). (*Exercise 7.5.54*).

We’ll restrict our attention to the case where \(A\), \(B\), and \(C\) are polynomials of degree not greater than two, so Equation \ref{eq:7.5.1} becomes

\[\label{eq:7.5.2} x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y=0,\]

where \(\alpha_i\), \(\beta_i\), and \(\gamma_i\) are real constants and \(\alpha_0\ne0\). Most equations that arise in applications can be written this way. Some examples are

\[ \alpha x^2y''+\beta xy'+\gamma y =0 \quad \text{(Euler's equation)} \nonumber\]

\[ x^2y''+xy'+(x^2-\nu^2)y =0 \quad \text{(Bessel's equation)} \nonumber\]

and

\[xy''+(1-x)y'+\lambda y=0\nonumber \]

where we would multiply the last equation by \(x\) to put it in the form Equation \ref{eq:7.5.2}. However, the method of Frobenius can be extended to the case where \(A\), \(B\), and \(C\) are functions that can be represented by power series in \(x\) on some interval that contains zero, and \(A(0)\ne0\) (*Exercises 7.5.57* and *7.5.58*).

The next two theorems will enable us to develop systematic methods for finding Frobenius solutions of Equation \ref{eq:7.5.2}.

##### Theorem 7.5.1

Let

\[Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y,\nonumber\]

and define

\[\begin{aligned} p_0(r)&=\alpha_0r(r-1)+\beta_0r+\gamma_0,\\[4pt] p_1(r)&=\alpha_1r(r-1)+\beta_1r+\gamma_1,\\[4pt] p_2(r)&=\alpha_2r(r-1)+\beta_2r+\gamma_2.\\\end{aligned}\nonumber \]

Suppose the series

\[\label{eq:7.5.3} y=\sum_{n=0}^\infty a_nx^{n+r}\]

converges on \((0,\rho)\). Then

\[\label{eq:7.5.4} Ly=\sum_{n=0}^\infty b_nx^{n+r}\]

on \((0,\rho),\) where

\[b_{0}=p_{0}(r)a_{0}\nonumber \]

\[\label{eq:7.5.5} b_{1}=p_{0}(r+1)a_{1}+p_{1}(r)a_{0}\]

\[b_n=p_0(n+r)a_n+p_1(n+r-1)a_{n-1}+p_2(n+r-2)a_{n-2},\quad n\ge2\nonumber\]

**Proof.**
We begin by showing that if \(y\) is given by Equation \ref{eq:7.5.3} and \(\alpha\), \(\beta\), and \(\gamma\) are constants, then

\[\label{eq:7.5.6} \alpha x^2y''+\beta xy'+\gamma y= \sum_{n=0}^\infty p(n+r)a_nx^{n+r},\]

where

\[p(r)=\alpha r(r-1)+\beta r +\gamma. \nonumber\]

Differentiating twice yields

\[\label{eq:7.5.7} y'=\sum_{n=0}^\infty (n+r)a_nx^{n+r-1}\]

and

\[\label{eq:7.5.8} y''=\sum_{n=0}^\infty (n+r)(n+r-1)a_nx^{n+r-2}.\]

Multiplying Equation \ref{eq:7.5.7} by \(x\) and Equation \ref{eq:7.5.8} by \(x^2\) yields

\[xy'=\sum_{n=0}^\infty (n+r)a_nx^{n+r} \nonumber\]

and

\[x^2y''=\sum_{n=0}^\infty (n+r)(n+r-1)a_nx^{n+r}. \nonumber\]

Therefore

\[\begin{aligned} \alpha x^2y''+\beta xy'+\gamma y &=\sum_{n=0}^\infty\left[\alpha(n+r)(n+r-1)+\beta(n+r)+\gamma\right]a_n x^{n+r}\\[4pt] &=\sum_{n=0}^\infty p(n+r)a_nx^{n+r},\end{aligned}\nonumber \]

which proves Equation \ref{eq:7.5.6}.

Multiplying Equation \ref{eq:7.5.6} by \(x\) yields

\[\label{eq:7.5.9} x(\alpha x^2y''+\beta xy'+\gamma y)=\sum_{n=0}^\infty p(n+r) a_nx^{n+r+1}= \sum_{n=1}^\infty p(n+r-1)a_{n-1}x^{n+r}.\]

Multiplying Equation \ref{eq:7.5.6} by \(x^2\) yields

\[\label{eq:7.5.10} x^2(\alpha x^2y''+\beta xy'+\gamma y)=\sum_{n=0}^\infty p(n+r)a_nx^{n+r+2}= \sum_{n=2}^\infty p(n+r-2)a_{n-2}x^{n+r}.\]

To use these results, we rewrite

\[Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y \nonumber\]

as

\[\label{eq:7.5.11} \begin{array}{ccl} Ly&=\left(\alpha_0x^2y''+\beta_0xy' +\gamma_0y\right) + x\left(\alpha_1x^2y''+\beta_1xy'+\gamma_1y\right) \\&+\ x^2\left(\alpha_2x^2y''+\beta_2xy'+\gamma_2y\right). \end{array}\]

From Equation \ref{eq:7.5.6} with \(p=p_0\),

\[\alpha_0x^2y''+\beta_0xy'+\gamma_0y=\sum_{n=0}^\infty p_0(n+r)a_nx^{n+r}. \nonumber\]

From Equation \ref{eq:7.5.9} with \(p=p_1\),

\[x\left(\alpha_1x^2y''+\beta_1xy'+\gamma_1y\right)=\sum_{n=1}^\infty p_1(n+r-1)a_{n-1}x^{n+r}. \nonumber\]

From Equation \ref{eq:7.5.10} with \(p=p_2\),

\[x^2\left(\alpha_2x^2y''+\beta_2xy'+\gamma_2y\right)=\sum_{n=2}^\infty p_2(n+r-2)a_{n-2}x^{n+r}. \nonumber\]

Therefore we can rewrite Equation \ref{eq:7.5.11} as

\[\begin{aligned} Ly=\sum_{n=0}^\infty p_0(n+r)a_nx^{n+r}+ \sum_{n=1}^\infty p_1(n+r-1)a_{n-1}x^{n+r}\\[4pt]+ \sum_{n=2}^\infty p_2(n+r-2)a_{n-2}x^{n+r},\end{aligned}\nonumber\]

or

\[\begin{aligned} Ly&= p_0(r)a_0x^r+\left[p_0(r+1)a_1+p_1(r)a_0\right]x^{r+1}\\& +\sum_{n=2}^\infty\left[p_0(n+r)a_n+p_1(n+r-1)a_{n-1} +p_2(n+r-2)a_{n-2}\right]x^{n+r},\end{aligned}\nonumber\]

which implies Equation \ref{eq:7.5.4} with \(\{b_n\}\) defined as in Equation \ref{eq:7.5.5}.

##### Theorem 7.5.2

Let

\[Ly= x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y' +(\gamma_0+\gamma_1x+\gamma_2x^2)y, \nonumber\]

where \(\alpha_0\ne0,\) and define

\[\begin{aligned} p_0(r)&=\alpha_0r(r-1)+\beta_0r+\gamma_0,\\[4pt] p_1(r)&=\alpha_1r(r-1)+\beta_1r+\gamma_1,\\[4pt] p_2(r)&=\alpha_2r(r-1)+\beta_2r+\gamma_2.\\\end{aligned}\nonumber \]

Suppose \(r\) is a real number such that \(p_0(n+r)\) is nonzero for all positive integers \(n.\) Define

\[\label{eq:7.5.12} \begin{array}{ccl} a_0(r)&=1,\\ a_1(r)&=-{p_1(r)\over p_0(r+1)},\\[4pt] a_n(r)&=-{p_1(n+r-1)a_{n-1}(r)+p_2(n+r-2)a_{n-2}(r)\over p_0(n+r)},\quad n\ge2. \end{array}\]

Then the Frobenius series

\[\label{eq:7.5.13} y(x,r)=x^r\sum_{n=0}^\infty a_n(r)x^n\]

converges and satisfies

\[\label{eq:7.5.14} Ly(x,r)=p_0(r)x^r\]

on the interval \((0,\rho),\) where \(\rho\) is the distance from the origin to the nearest zero of \(A(x)=\alpha_0+\alpha_1 x+\alpha_2 x^2\) in the complex plane (if \(A\) is constant, then \(\rho=\infty\).)

**Proof.** If \(\{a_n(r)\}\) is determined by the recurrence relation Equation \ref{eq:7.5.12}, then substituting \(a_n=a_n(r)\) into Equation \ref{eq:7.5.5} yields \(b_0=p_0(r)\) and \(b_n=0\) for \(n\ge1\), so Equation \ref{eq:7.5.4} reduces to Equation \ref{eq:7.5.14}. We omit the proof that the series Equation \ref{eq:7.5.13} converges on \((0,\rho)\).
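The recurrence relation Equation \ref{eq:7.5.12} is easy to mechanize. The following is a minimal Python sketch (the function name `frobenius_coeffs` is our own) that computes \(a_0(r),\dots,a_N(r)\) exactly in rational arithmetic, given the coefficient triples from Equation \ref{eq:7.5.2}:

```python
from fractions import Fraction

def frobenius_coeffs(alpha, beta, gamma, r, N):
    """Compute a_0(r), ..., a_N(r) from the recurrence (7.5.12).

    alpha, beta, gamma are the triples (alpha_0, alpha_1, alpha_2), etc.,
    from equation (7.5.2); r must satisfy p_0(n + r) != 0 for n >= 1.
    """
    def p(i, s):
        # p_i(s) = alpha_i s(s - 1) + beta_i s + gamma_i
        return alpha[i] * s * (s - 1) + beta[i] * s + gamma[i]

    a = [Fraction(1)]                      # a_0(r) = 1
    if N >= 1:
        a.append(-Fraction(p(1, r)) / p(0, r + 1))
    for n in range(2, N + 1):
        a.append(-(p(1, n + r - 1) * a[n - 1]
                   + p(2, n + r - 2) * a[n - 2]) / p(0, n + r))
    return a
```

For instance, for Bessel's equation of order zero (\(\alpha=(1,0,0)\), \(\beta=(1,0,0)\), \(\gamma=(0,0,1)\), \(r=0\)) this returns the familiar coefficients \(1,\,0,\,-1/4,\,0,\,1/64,\dots\) of \(J_0\).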

If \(\alpha_i=\beta_i=\gamma_i=0\) for \(i=1\), \(2,\) then \(Ly=0\) reduces to the Euler equation

\[\alpha_0x^2y''+\beta_0xy'+\gamma_0y=0. \nonumber\]

Theorem 7.4.3 shows that the solutions of this equation are determined by the zeros of the indicial polynomial

\[p_0(r)=\alpha_0r(r-1)+\beta_0r+\gamma_0. \nonumber\]

Since Equation \ref{eq:7.5.14} implies that this is also true for the solutions of \(Ly=0\), we’ll also say that \(p_0\) is the *indicial polynomial* of Equation \ref{eq:7.5.2}, and that \(p_0(r)=0\) is the *indicial equation* of \(Ly=0\). We’ll consider only cases where the indicial equation has real roots \(r_1\) and \(r_2\), with \(r_1\ge r_2\).
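Since the indicial polynomial is just a quadratic in \(r\), its roots can be computed mechanically from \(\alpha_0\), \(\beta_0\), and \(\gamma_0\); here is a small sketch for sanity-checking hand computations (the helper name `indicial_roots` is ours):

```python
from math import sqrt

def indicial_roots(alpha0, beta0, gamma0):
    """Real roots r1 >= r2 of p_0(r) = alpha0*r*(r-1) + beta0*r + gamma0."""
    # Expand: p_0(r) = alpha0*r**2 + (beta0 - alpha0)*r + gamma0
    b = beta0 - alpha0
    disc = b * b - 4 * alpha0 * gamma0
    if disc < 0:
        raise ValueError("indicial equation has no real roots")
    r1 = (-b + sqrt(disc)) / (2 * alpha0)
    r2 = (-b - sqrt(disc)) / (2 * alpha0)
    return max(r1, r2), min(r1, r2)
```

For Example 7.5.1 below, `indicial_roots(2, 9, 6)` gives the roots \(-3/2\) and \(-2\) of \((2r+3)(r+2)\).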

##### Theorem 7.5.3

Let \(L\) and \(\{a_n(r)\}\) be as in Theorem 7.5.2, and suppose the indicial equation \(p_0(r)=0\) of \(Ly=0\) has real roots \(r_1\) and \(r_2,\) where \(r_1\ge r_2.\) Then

\[y_1(x)=y(x,r_1)=x^{r_1}\sum_{n=0}^\infty a_n(r_1)x^n \nonumber\]

is a Frobenius solution of \(Ly=0\). Moreover\(,\) if \(r_1-r_2\) is not an integer then

\[y_2(x)=y(x,r_2)=x^{r_2}\sum_{n=0}^\infty a_n(r_2)x^n \nonumber\]

is also a Frobenius solution of \(Ly=0,\) and \(\{y_1,y_2\}\) is a fundamental set of solutions.

**Proof.**
Since \(r_1\) and \(r_2\) are roots of \(p_0(r)=0\), the indicial polynomial can be factored as

\[\label{eq:7.5.15} p_0(r)=\alpha_0(r-r_1)(r-r_2).\]

Therefore

\[p_0(n+r_1)=n\alpha_0(n+r_1-r_2), \nonumber\]

which is nonzero if \(n>0\), since \(r_1-r_2\ge0\). Therefore the assumptions of Theorem 7.5.2 hold with \(r=r_1\), and Equation \ref{eq:7.5.14} implies that \(Ly_1=p_0(r_1)x^{r_1}=0\).

Now suppose \(r_1-r_2\) is not an integer. From Equation \ref{eq:7.5.15},

\[p_0(n+r_2)=n\alpha_0(n-r_1+r_2)\ne0 \quad \text{if} \quad n=1,2,\cdots.\nonumber\]

Hence, the assumptions of Theorem 7.5.2 hold with \(r=r_2\), and Equation \ref{eq:7.5.14} implies that \(Ly_2=p_0(r_2)x^{r_2}=0\). We leave the proof that \(\{y_1,y_2\}\) is a fundamental set of solutions as an exercise (*Exercise 7.5.52*).

It is not always possible to obtain explicit formulas for the coefficients in Frobenius solutions. However, we can always set up the recurrence relations and use them to compute as many coefficients as we want. The next example illustrates this.

##### Example 7.5.1

Find a fundamental set of Frobenius solutions of

\[\label{eq:7.5.16} 2x^2(1+x+x^2)y''+x(9+11x+11x^2)y'+(6+10x+7x^2)y=0.\]

Compute just the first six coefficients \(a_0\),…, \(a_5\) in each solution.

**Solution**

For the given equation, the polynomials defined in Theorem 7.5.2 are

\[\begin{array}{ccccc} p_0(r)&=2r(r-1)+9r+6&=(2r+3)(r+2),\\[4pt] p_1(r)&=2r(r-1)+11r+10&=(2r+5)(r+2),\\ [5pt] p_2(r)&=2r(r-1)+11r+7&=(2r+7)(r+1). \end{array}\nonumber \]

The zeros of the indicial polynomial \(p_0\) are \(r_1=-3/2\) and \(r_2=-2\), so \(r_1-r_2=1/2\). Therefore Theorem 7.5.3 implies that

\[\label{eq:7.5.17} y_1=x^{-3/2}\sum_{n=0}^\infty a_n(-3/2)x^n\quad\mbox{ and }\quad y_2=x^{-2}\sum_{n=0}^\infty a_n(-2)x^n\]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.5.16}. To find the coefficients in these series, we use the recurrence relation of Theorem 7.5.2; thus,

\[\begin{aligned} a_0(r)&=1,\\ a_1(r)&=-{p_1(r)\over p_0(r+1)} =-{(2r+5)(r+2)\over(2r+5)(r+3)} =-{r+2\over r+3},\\[4pt] a_n(r)&=-{p_1(n+r-1)a_{n-1}(r)+p_2(n+r-2)a_{n-2}(r)\over p_0(n+r)}\\[4pt] &=-{(n+r+1)(2n+2r+3)a_{n-1}(r) +(n+r-1)(2n+2r+3)a_{n-2}(r)\over(n+r+2)(2n+2r+3)}\\[4pt] &=-{(n+r+1)a_{n-1}(r)+(n+r-1)a_{n-2}(r)\over n+r+2},\quad n\ge2.\end{aligned}\nonumber \]

Setting \(r=-3/2\) in these equations yields

\[\label{eq:7.5.18} \begin{array}{lll} a_0(-3/2)&=1,\\ a_1(-3/2)&=-1/3,\\ a_n(-3/2)&=-{(2n-1)a_{n-1}(-3/2)+ (2n-5)a_{n-2}(-3/2)\over2n+1},\quad n\ge2, \end{array}\]

and setting \(r=-2\) yields

\[\label{eq:7.5.19} \begin{array}{lll} a_0(-2)&=1,\\ a_1(-2)&=0,\\ a_n(-2)&=-{(n-1)a_{n-1}(-2)+(n-3)a_{n-2}(-2)\over n},\quad n\ge2. \end{array}\]

Calculating with Equation \ref{eq:7.5.18} and Equation \ref{eq:7.5.19} and substituting the results into Equation \ref{eq:7.5.17} yields the fundamental set of Frobenius solutions

\[\begin{aligned} y_1&=x^{-3/2}\left(1-{1\over3}x+{2\over5}x^2-{5\over21}x^3 +{7\over135}x^4+{76\over1155}x^5+\cdots\right),\\[4pt] y_2&=x^{-2}\left(1+{1\over2}x^2-{1\over3}x^3+{1\over8}x^4+{1\over30}x^5 +\cdots\right).\end{aligned}\nonumber \]
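The calculation from Equations \ref{eq:7.5.18} and \ref{eq:7.5.19} is exactly the kind of recursive computation that is worth programming. A minimal sketch in exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def coeffs_r1(N):
    """a_n(-3/2), n = 0..N, from the recurrence (7.5.18)."""
    a = [Fraction(1), Fraction(-1, 3)]
    for n in range(2, N + 1):
        a.append(-((2*n - 1) * a[n - 1] + (2*n - 5) * a[n - 2]) / (2*n + 1))
    return a

def coeffs_r2(N):
    """a_n(-2), n = 0..N, from the recurrence (7.5.19)."""
    a = [Fraction(1), Fraction(0)]
    for n in range(2, N + 1):
        a.append(-((n - 1) * a[n - 1] + (n - 3) * a[n - 2]) / n)
    return a
```

Running `coeffs_r1(5)` reproduces \(1,\,-1/3,\,2/5,\,-5/21,\,7/135,\,76/1155\), and `coeffs_r2(5)` reproduces \(1,\,0,\,1/2,\,-1/3,\,1/8,\,1/30\), the coefficients displayed above.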

## Special Cases With Two Term Recurrence Relations

For \(n\ge2\), the recurrence relation Equation \ref{eq:7.5.12} of Theorem 7.5.2 involves the three coefficients \(a_n(r)\), \(a_{n-1}(r)\), and \(a_{n-2}(r)\). We’ll now consider some special cases where Equation \ref{eq:7.5.12} reduces to a two term recurrence relation; that is, a relation involving only \(a_n(r)\) and \(a_{n-1}(r)\) or only \(a_n(r)\) and \(a_{n-2}(r)\). This simplification often makes it possible to obtain explicit formulas for the coefficients of Frobenius solutions.

We first consider equations of the form

\[x^2(\alpha_0+\alpha_1x)y''+x(\beta_0+\beta_1x)y'+(\gamma_0+\gamma_1x)y=0 \nonumber\]

with \(\alpha_0\ne0\). For this equation, \(\alpha_2=\beta_2=\gamma_2=0\), so \(p_2\equiv0\) and the recurrence relations in Theorem 7.5.2 simplify to

\[\label{eq:7.5.20} \begin{array}{lll} a_0(r)&=1,\\ a_n(r)&=-{p_1(n+r-1)\over p_0(n+r)}a_{n-1}(r),\quad n\ge1. \end{array}\]

##### Example 7.5.2

Find a fundamental set of Frobenius solutions of

\[\label{eq:7.5.21} x^2(3+x)y''+5x(1+x)y'-(1-4x)y=0.\]

Give explicit formulas for the coefficients in the solutions.

**Solution**

For this equation, the polynomials defined in Theorem 7.5.2 are

\[\begin{array}{ccccc} p_0(r)&=3r(r-1)+5r-1&=(3r-1)(r+1),\\[4pt] p_1(r)&=r(r-1)+5r+4&=(r+2)^2,\\[4pt] p_2(r)&=0. \end{array}\nonumber\]

The zeros of the indicial polynomial \(p_0\) are \(r_1=1/3\) and \(r_2=-1\), so \(r_1-r_2=4/3\). Therefore Theorem 7.5.3 implies that

\[y_1=x^{1/3}\sum_{n=0}^\infty a_n(1/3)x^n\quad\mbox{ and }\quad y_2=x^{-1}\sum_{n=0}^\infty a_n(-1)x^n\nonumber \]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.5.21}. To find the coefficients in these series, we use the recurrence relation Equation \ref{eq:7.5.20}; thus,

\[\label{eq:7.5.22} \begin{array}{lll} a_0(r)&=1,\\ a_n(r)&=-{p_1(n+r-1)\over p_0(n+r)}a_{n-1}(r)\\[4pt] &=-{(n+r+1)^2\over(3n+3r-1)(n+r+1)}a_{n-1}(r)\\[4pt] &=-{n+r+1\over3n+3r-1}a_{n-1}(r),\quad n\ge1. \end{array}\]

Setting \(r=1/3\) in Equation \ref{eq:7.5.22} yields

\[\begin{aligned} a_0(1/3)&=1,\\ a_n(1/3)&=-{3n+4\over9n} a_{n-1}(1/3),\quad n\ge1.\end{aligned}\nonumber \]

Using the product notation introduced in Section 7.2 and proceeding as we did in the examples in that section, we obtain

\[a_n(1/3)={(-1)^n\prod_{j=1}^n(3j+4)\over9^nn!},\quad n\ge0. \nonumber\]

Therefore

\[y_1=x^{1/3}\sum_{n=0}^\infty{(-1)^n\prod_{j=1}^n(3j+4)\over9^nn!}x^n \nonumber\]

is a Frobenius solution of Equation \ref{eq:7.5.21}.

Setting \(r=-1\) in Equation \ref{eq:7.5.22} yields

\[\begin{aligned} a_0(-1)&=1,\\ a_n(-1)&=-{n\over3n-4}a_{n-1}(-1),\quad n\ge1,\end{aligned}\nonumber \]

so

\[a_n(-1)={(-1)^nn!\over\prod_{j=1}^n(3j-4)}. \nonumber\]

Therefore

\[y_2=x^{-1}\sum_{n=0}^\infty{(-1)^nn!\over\prod_{j=1}^n(3j-4)}x^n \nonumber\]

is a Frobenius solution of Equation \ref{eq:7.5.21}, and \(\{y_1,y_2\}\) is a fundamental set of solutions.
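A closed form like \(a_n(1/3)\) above is easy to cross-check against the recurrence it came from. A brief sketch (function names ours; `math.prod` assumes Python 3.8+):

```python
from fractions import Fraction
from math import factorial, prod

def a_closed(n):
    """Closed form for a_n(1/3) in Example 7.5.2:
    (-1)^n * prod_{j=1}^n (3j+4) / (9^n n!)."""
    return Fraction((-1)**n * prod(3*j + 4 for j in range(1, n + 1)),
                    9**n * factorial(n))

def a_recur(N):
    """Same coefficients from the recurrence a_n = -((3n+4)/(9n)) a_{n-1}."""
    a = [Fraction(1)]
    for n in range(1, N + 1):
        a.append(-Fraction(3*n + 4, 9*n) * a[n - 1])
    return a
```

The two agree term by term; for example both give \(a_1(1/3)=-7/9\).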

We now consider equations of the form

\[\label{eq:7.5.23} x^2(\alpha_0+\alpha_2x^2)y''+x(\beta_0+\beta_2x^2)y'+ (\gamma_0+\gamma_2x^2)y=0\]

with \(\alpha_0\ne0\). For this equation, \(\alpha_1=\beta_1=\gamma_1=0\), so \(p_1\equiv0\) and the recurrence relations in Theorem 7.5.2 simplify to

\[\begin{aligned} a_0(r)&=1,\\ a_1(r)&=0,\\[4pt] a_n(r)&=-{p_2(n+r-2)\over p_0(n+r)}a_{n-2}(r),\quad n\ge2.\end{aligned}\nonumber \]

Since \(a_1(r)=0\), the last equation implies that \(a_n(r)=0\) if \(n\) is odd, so the Frobenius solutions are of the form

\[y(x,r)=x^r\sum_{m=0}^\infty a_{2m}(r)x^{2m}, \nonumber\]

where

\[\label{eq:7.5.24} \begin{array}{lll} a_0(r)&=1,\\ a_{2m}(r)&=-{p_2(2m+r-2)\over p_0(2m+r)}a_{2m-2}(r),\quad m\ge1. \end{array}\]

##### Example 7.5.3

Find a fundamental set of Frobenius solutions of

\[\label{eq:7.5.25} x^2(2-x^2)y''-x(3+4x^2)y'+(2-2x^2)y=0.\]

Give explicit formulas for the coefficients in the solutions.

**Solution**

For this equation, the polynomials defined in Theorem 7.5.2 are

\[\begin{array}{ccccc} p_0(r)&=2r(r-1)-3r+2&=(r-2)(2r-1),\\[4pt] p_1(r)&=0,\\[4pt] p_2(r)&=-\left[r(r-1)+4r+2\right]&=-(r+1)(r+2). \end{array}\nonumber \]

The zeros of the indicial polynomial \(p_0\) are \(r_1=2\) and \(r_2=1/2\), so \(r_1-r_2=3/2\). Therefore Theorem 7.5.3 implies that

\[y_1=x^2\sum_{m=0}^\infty a_{2m}(2)x^{2m}\quad\mbox{ and }\quad y_2=x^{1/2}\sum_{m=0}^\infty a_{2m}(1/2)x^{2m} \nonumber\]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.5.25}. To find the coefficients in these series, we use the recurrence relation Equation \ref{eq:7.5.24}; thus,

\[\label{eq:7.5.26} \begin{array}{lll} a_0(r)&=1,\\ a_{2m}(r)&=-{p_2(2m+r-2)\over p_0(2m+r)}a_{2m-2}(r)\\[4pt] &={(2m+r)(2m+r-1)\over(2m+r-2)(4m+2r-1)}a_{2m-2}(r),\quad m\ge1. \end{array}\]

Setting \(r=2\) in Equation \ref{eq:7.5.26} yields

\[\begin{aligned} a_0(2)&=1,\\ a_{2m}(2)&={(m+1)(2m+1)\over m(4m+3)}a_{2m-2}(2),\quad m\ge1,\end{aligned}\nonumber\]

so

\[a_{2m}(2)=(m+1)\prod_{j=1}^m{2j+1\over4j+3}. \nonumber\]

Therefore

\[y_1=x^2\sum_{m=0}^\infty (m+1)\left(\prod_{j=1}^m{2j+1\over4j+3}\right)x^{2m} \nonumber\]

is a Frobenius solution of Equation \ref{eq:7.5.25}.

Setting \(r=1/2\) in Equation \ref{eq:7.5.26} yields

\[\begin{aligned} a_0(1/2)&=1,\\ a_{2m}(1/2)&={(4m-1)(4m+1)\over8m(4m-3)}a_{2m-2}(1/2),\quad m\ge1,\end{aligned}\nonumber \]

so

\[a_{2m}(1/2)={1\over8^mm!}\prod_{j=1}^m{(4j-1)(4j+1)\over4j-3}. \nonumber\]

Therefore

\[y_2=x^{1/2}\sum_{m=0}^\infty {1\over8^mm!}\left(\prod_{j=1}^m{(4j-1)(4j+1)\over4j-3}\right)x^{2m} \nonumber\]

is a Frobenius solution of Equation \ref{eq:7.5.25} and \(\{y_1,y_2\}\) is a fundamental set of solutions.
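As in the previous example, the product formula for \(a_{2m}(2)\) can be cross-checked against the two-term recurrence Equation \ref{eq:7.5.26} it was derived from. A short sketch (function names ours; `math.prod` assumes Python 3.8+):

```python
from fractions import Fraction
from math import prod

def a2m_closed(m):
    """Closed form for a_{2m}(2) in Example 7.5.3:
    (m+1) * prod_{j=1}^m (2j+1)/(4j+3)."""
    return (m + 1) * Fraction(prod(2*j + 1 for j in range(1, m + 1)),
                              prod(4*j + 3 for j in range(1, m + 1)))

def a2m_recur(M):
    """Same coefficients from the recurrence (7.5.26) with r = 2."""
    a = [Fraction(1)]
    for m in range(1, M + 1):
        a.append(Fraction((m + 1) * (2*m + 1), m * (4*m + 3)) * a[m - 1])
    return a
```

Both give, for example, \(a_2(2)=6/7\) and \(a_4(2)=45/77\).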

##### Note

Thus far, we have considered only the case where the indicial equation has real roots that don’t differ by an integer, which allows us to apply Theorem 7.5.3. However, for equations of the form Equation \ref{eq:7.5.23}, the sequence \(\{a_{2m}(r)\}\) in Equation \ref{eq:7.5.24} is defined for \(r=r_2\) if \(r_1-r_2\) isn’t an even integer. It can be shown (*Exercise 7.5.56*) that in this case

\[y_{1}=x^{r_{1}}\sum_{m=0}^{\infty}a_{2m}(r_{1})x^{2m}\quad\text{and}\quad y_{2}=x^{r_{2}}\sum_{m=0}^{\infty}a_{2m}(r_{2})x^{2m}\nonumber\]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.5.23}.

## Using Technology

As we said at the end of Section 7.2, if you’re interested in actually using series to compute numerical approximations to solutions of a differential equation, then whether or not there’s a simple closed form for the coefficients is essentially irrelevant; recursive computation is usually more efficient. Since it is also laborious, we encourage you to write short programs to implement recurrence relations on a calculator or computer, even in exercises where this is not specifically required.

In practical use of the method of Frobenius when \(x_0=0\) is a regular singular point, we are interested in how well the functions

\[y_N(x,r_i)=x^{r_i}\sum_{n=0}^N a_n(r_i)x^n,\quad i=1,2, \nonumber\]

approximate solutions to a given equation when \(r_i\) is a zero of the indicial polynomial. In dealing with the corresponding problem for the case where \(x_0=0\) is an ordinary point, we used numerical integration to solve the differential equation subject to initial conditions \(y(0)=a_0,\quad y'(0)=a_1\), and compared the result with values of the Taylor polynomial

\[T_N(x)=\sum_{n=0}^Na_nx^n. \nonumber\]

We can’t do that here, since in general we can’t prescribe arbitrary initial values for solutions of a differential equation at a singular point. Therefore, motivated by Theorem 7.5.2 (specifically, Equation \ref{eq:7.5.14}), we suggest the following procedure.

### Verification Procedure

Let \(L\) and \(y_N(x;r_i)\) be defined by

\[Ly=x^{2}(\alpha _{0}+\alpha _{1}x +\alpha _{2}x^{2})y'' + x(\beta _{0}+\beta_{1}x +\beta _{2}x^{2})y' + (\gamma _{0}+\gamma _{1}x+\gamma _{2}x^{2})y\nonumber\]

and

\[ y_{N}(x; r_{i})=x^{r_{i}}\sum_{n=0}^{N}a_{n}(r_{i})x^{n}\nonumber \]

where the coefficients \(\{a_{n}(r_{i})\}_{n=0}^{N}\) are computed as in Equation \ref{eq:7.5.12} of Theorem 7.5.2. Compute the error

\[\label{eq:7.5.27} E_{N}(x; r_{i})=x^{-r_{i}}Ly_{N}(x; r_{i})/\alpha _{0}\]

for various values of \(N\) and various values of \(x\) in the interval \((0,\rho )\), with \(\rho\) as defined in Theorem 7.5.2.

The multiplier \(x^{-r_i}/\alpha_0\) on the right of Equation \ref{eq:7.5.27} eliminates the effects of small or large values of \(x^{r_i}\) near \(x=0\), and of multiplication by an arbitrary constant. In some exercises you will be asked to estimate the maximum value of \(E_N(x; r_i)\) on an interval \((0,\delta]\) by computing \(E_N(x_m;r_i)\) at the \(M\) points \(x_m=m\delta/M,\; m=1\), \(2\), …, \(M\), and finding the maximum of the absolute values:

\[\label{eq:7.5.28} \sigma_N(\delta)=\max\{|E_N(x_m;r_i)|,\; m=1,2,\dots,M\}.\]

(For simplicity, this notation ignores the dependence of the right side of the equation on \(i\) and \(M\).)

To implement this procedure, you’ll have to write a computer program to calculate \(\{a_n(r_i)\}\) from the applicable recurrence relation, and to evaluate \(E_N(x;r_i)\).
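One convenient way to organize such a program: by Theorem 7.5.1, when \(r_i\) is an indicial root and the \(a_n\) satisfy the recurrence through \(n=N\), every \(b_n\) with \(n\le N\) vanishes, so only the two "tail" terms \(b_{N+1}x^{N+1+r}\) and \(b_{N+2}x^{N+2+r}\) of \(Ly_N\) survive and no numerical differentiation is needed. A hedged sketch along these lines (the helper names `error_E` and `sigma` are ours; it assumes \(N\ge1\), that `a` was produced by the recurrence of Theorem 7.5.2, and that `r` is a root of \(p_0\)):

```python
def error_E(alpha, beta, gamma, r, a, x):
    """E_N(x; r) = x**(-r) * L y_N(x; r) / alpha_0  (Equation 7.5.27).

    Since b_n = 0 for n <= N in Theorem 7.5.1, only the tail terms
    b_{N+1} and b_{N+2} of L y_N contribute.
    """
    def p(i, s):
        # p_i(s) = alpha_i s(s - 1) + beta_i s + gamma_i
        return alpha[i] * s * (s - 1) + beta[i] * s + gamma[i]

    N = len(a) - 1
    tail1 = p(1, N + r) * a[N] + p(2, N + r - 1) * a[N - 1]   # b_{N+1}
    tail2 = p(2, N + r) * a[N]                                # b_{N+2}
    return (tail1 * x**(N + 1) + tail2 * x**(N + 2)) / alpha[0]

def sigma(alpha, beta, gamma, r, a, delta, M):
    """sigma_N(delta) of Equation (7.5.28): max |E_N| over x_m = m*delta/M."""
    return max(abs(error_E(alpha, beta, gamma, r, a, m * delta / M))
               for m in range(1, M + 1))
```

For an Euler equation (\(p_1=p_2\equiv0\)) both tail terms vanish, so `error_E` is identically zero, as it should be since the truncated series is then an exact solution.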

The next exercise set contains five exercises that specifically ask you to implement the verification procedure. These particular exercises were chosen arbitrarily; you can just as well formulate such laboratory problems for any of the equations in *Exercises 7.5.1-7.5.10*, *7.5.14-7.5.25*, and *7.5.28-7.5.51*.