8.4: Series Solutions About a Regular Singular Point
In this section we’ll continue to study equations of the form
\[\label{eq:7.4.1} P_0(x)y''+P_1(x)y'+P_2(x)y=0\]
but the emphasis will be different from that of Section 8.3, where we obtained solutions of Equation \ref{eq:7.4.1} near an ordinary point \(x_0\) in the form of power series in \(x-x_0\). If \(x_0\) is a singular point of Equation \ref{eq:7.4.1}, then the solutions can’t, in general, be represented by power series in \(x-x_0\). Nevertheless, it is often necessary in physical applications to study the behavior of solutions of Equation \ref{eq:7.4.1} near a singular point. Although this can be difficult in the absence of some assumption on the nature of the singular point, equations that satisfy the requirements of the next definition can be solved by the series methods discussed here. Fortunately, many equations arising in applications satisfy these requirements.
Let \[\label{eq:7.4.2} y''+p(x)y'+q(x)y=0.\]
If \(x_0\) is a singular point of \ref{eq:7.4.2}, then we say that \(x_0\) is a regular singular point of Equation \ref{eq:7.4.2} if \((x-x_0)p(x)\) and \((x-x_0)^2q(x)\) are both analytic at \(x_0\). Otherwise, \(x_0\) is an irregular singular point.
Bessel’s equation,
\[\label{eq:7.4.3} x^2y''+xy'+(x^2-\nu^2)y=0,\]
which can be written in the form \ref{eq:7.4.2} as
\[y''+{1\over x}y'+{(x^2-\nu^2)\over x^2}y=0,\nonumber\]
has the singular point \(x_0=0\). Since \(xp(x)=1\) and \(x^2q(x)=(x^2-\nu^2)\) are analytic at \(0\), it follows that \(x_0=0\) is a regular singular point of Equation \ref{eq:7.4.3}.
Rewriting Legendre’s equation,
\[\label{eq:7.4.4} (1-x^2)y''-2xy'+\alpha(\alpha+1)y=0,\]
in the form \ref{eq:7.4.2} yields
\[y''-{2x\over 1-x^2}y'+{\alpha(\alpha+1)\over 1-x^2}y=0. \nonumber\]
It's clear that \ref{eq:7.4.4} has the singular points \(x_0=\pm1\). Since \((x-1)p(x)={2x\over x+1}\) and \((x-1)^2q(x)=-{\alpha(\alpha+1)(x-1)\over (x+1)}\), \(x_0=1\) is a regular singular point of Equation \ref{eq:7.4.4}. We leave it to you to show that \(x_0=-1\) is also a regular singular point of Equation \ref{eq:7.4.4}.
The equation
\[x^3y''+xy'+y=0 \nonumber\]
has an irregular singular point at \(x_0=0\). (Verify.)
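As an illustrative aside (not part of the standard development), these classification checks can be sketched in a few lines of Python using SymPy. The helper name `is_analytic_at_0` is our own; for rational coefficient functions like the ones above, having a finite limit at \(0\) is equivalent to being analytic there.

```python
import sympy as sp

x = sp.symbols('x')
nu = sp.symbols('nu', real=True)  # real => finite under SymPy's assumptions

def is_analytic_at_0(expr):
    # For rational functions, analyticity at 0 just means no pole there,
    # which we detect by checking that the limit as x -> 0 is finite.
    return bool(sp.limit(expr, x, 0).is_finite)

# Bessel's equation in standard form: p = 1/x, q = (x^2 - nu^2)/x^2
p_bessel, q_bessel = 1/x, (x**2 - nu**2)/x**2
print(is_analytic_at_0(x*p_bessel), is_analytic_at_0(x**2*q_bessel))  # both analytic -> regular

# x^3 y'' + x y' + y = 0 in standard form: p = 1/x^2, q = 1/x^3
p_irr, q_irr = 1/x**2, 1/x**3
print(is_analytic_at_0(x*p_irr), is_analytic_at_0(x**2*q_irr))  # x*p = 1/x has a pole -> irregular
```

This also settles the "(Verify.)" above: \(xp(x)=1/x\) is not analytic at \(0\), so \(x_0=0\) is irregular.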
For convenience we restrict our attention to the case where \(x_0=0\) is a regular singular point of Equation \ref{eq:7.4.2}. This isn’t really a restriction, since if \(x_0\ne0\) is a regular singular point of Equation \ref{eq:7.4.2} then introducing the new independent variable \(t=x-x_0\) and the new unknown \(Y(t)=y(t+x_0)\) leads to a differential equation that has a regular singular point at \(t_0=0\).
The Method of Frobenius
The method of Frobenius, which applies under the assumptions introduced above, distinguishes three cases. In all three cases \ref{eq:7.4.2} has at least one solution of the form
\[ y=x^r\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_nx^{n+r}, \nonumber \]
where \(r\) need not be an integer. The difficulty is that each case requires a different approach to obtain the second linearly independent solution needed to form a fundamental set of solutions of \ref{eq:7.4.2}.
The method we will use to find solutions of \ref{eq:7.4.2} near a regular singular point is called the method of Frobenius, and we’ll call such solutions Frobenius solutions.
It can be shown that the series \(x^r\sum_{n=0}^\infty a_nx^n\) yields a Frobenius solution of Equation \ref{eq:7.4.2} that converges on some open interval \((-\rho,\rho)\), where \(0<\rho\le\infty\). The method of Frobenius leads to what is called an indicial equation, and the nature of its roots determines which of the three cases we are in.
In this class we only deal with real roots of the indicial equation, which fall into three cases. If \(x_0=0\) is a regular singular point of \ref{eq:7.4.2}, then we have the following:
Case 1: If the roots of the indicial equation are real numbers that differ by a noninteger, then \(y=\sum_{n=0}^\infty a_nx^{n+r}\) will give both of the linearly independent solutions to \ref{eq:7.4.2}.
Case 2: If the roots of the indicial equation are real numbers that differ by a nonzero integer, then \(y=\sum_{n=0}^\infty a_nx^{n+r}\) will give at least one of the two linearly independent solutions to \ref{eq:7.4.2}.
Case 3: If the roots of the indicial equation are real numbers that are the same, then \(y=\sum_{n=0}^\infty a_nx^{n+r}\) will give only one of the two linearly independent solutions to \ref{eq:7.4.2}.
Example 6
Find a fundamental set of Frobenius solutions and give the general solution of
\[\label{eq:7.4.5} 3xy''+y'-y=0\]
about \(x_0=0\) on \((0,\infty)\).
Solution
Note that \(x_0=0\) is a regular singular point of \ref{eq:7.4.5}. We now let
\[ y=x^r\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_nx^{n+r}. \nonumber \]
Differentiating this series twice gives us
\[y'=\sum_{n=0}^\infty a_n(n+r)x^{n+r-1} \quad\mbox{ and }\quad y''=\sum_{n=0}^\infty a_n(n+r)(n+r-1)x^{n+r-2}.\nonumber\]
Substituting these into \ref{eq:7.4.5} yields
\[\begin{aligned}&3x\sum^\infty_{n=0}(n+r)(n+r-1)a_nx^{n+r-2}+\sum^\infty_{n=0}(n+r)a_nx^{n+r-1}-\sum^\infty_{n=0}a_nx^{n+r}\\ &\qquad=\sum^\infty_{n=0}3(n+r)(n+r-1)a_nx^{n+r-1}+\sum^\infty_{n=0}(n+r)a_nx^{n+r-1}-\sum^\infty_{n=0}a_nx^{n+r}=0.\end{aligned}\nonumber \]
Pulling out the \(n=0\) terms of the first two sums, we get
\[3r(r-1)a_0x^{r-1}+\sum^\infty_{n=1}3(n+r)(n+r-1)a_nx^{n+r-1}+ra_0x^{r-1}+\sum^\infty_{n=1}(n+r)a_nx^{n+r-1}-\sum^\infty_{n=0}a_nx^{n+r}=0.\nonumber\]
Reindexing we get
\[3r(r-1)a_0x^{r-1}+\sum^\infty_{n=1}3(n+r)(n+r-1)a_nx^{n+r-1}+ra_0x^{r-1}+\sum^\infty_{n=1}(n+r)a_nx^{n+r-1}-\sum^\infty_{n=1}a_{n-1}x^{n+r-1}=0.\nonumber\]
We now collect like terms to obtain
\[\begin{aligned}&(3r^2-3r+r)a_0x^{r-1}+\sum^\infty_{n=1}[3(n+r)(n+r-1)a_n+(n+r)a_n-a_{n-1}]x^{n+r-1}\\ &\qquad=(3r^2-2r)a_0x^{r-1}+\sum^\infty_{n=1}[(n+r)(3n+3r-2)a_n-a_{n-1}]x^{n+r-1}=0.\end{aligned}\nonumber\]
Equating coefficients gives us the following
\[\label{eq:7.4.6} (3r^2-2r)a_0=0\]
and
\[(n+r)(3n+3r-2)a_n-a_{n-1}=0,\quad n=1,2,3,\cdots\nonumber\]
Equation \ref{eq:7.4.6} is our indicial equation; let's take a closer look at what it says.
\[(3r^2-2r)a_0=r(3r-2)a_0=0.\nonumber\]
So, we have \(a_0=0\), \(r=0\), or \(r=2/3\). Taking \(a_0=0\) gives only the trivial solution, which can never belong to a fundamental set, so we focus on \(r=0\) and \(r=2/3\). Note that these roots differ by a noninteger, so we are in Case 1, and the method of Frobenius will lead to two linearly independent solutions of Equation \ref{eq:7.4.5}.
For the case \(r=0\):
\(a_0\) is a free variable (meaning it can be anything we choose) and \(a_n={a_{n-1}\over n(3n-2)}, \quad n=1,2,3,\cdots\)
Now substituting \(n=1,2,3,\cdots\) into the last equation yields
\[\begin{aligned} a_1 &= a_0, \\[4pt]a_2 &= {1\over 8}a_1={1\over 8}a_0\cdots \end{aligned}\nonumber \]
This leads us to one solution of \ref{eq:7.4.5}:
\[\begin{aligned}y_1&=x^0(a_0+a_0x+{1\over 8}a_0x^2+\cdots)\\&=a_0(1+x+{1\over 8}x^2+\cdots)\end{aligned}\nonumber\]
Notice that \(r=0\) gives us the form of an ordinary point power series,
\[y=x^r\sum_{n=0}^\infty a_nx^n=x^0\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_nx^n,\nonumber\]
which only yielded one solution instead of the two we would be guaranteed if \(x_0=0\) were an ordinary point. As you may guess, \(r=2/3\) will yield the second linearly independent solution of \ref{eq:7.4.5}.
For the case \(r=2/3\):
\(a_0\) is once again a free variable and \(a_n={a_{n-1}\over n(3n+2)}, \quad n=1,2,3,\cdots\)
Now substituting \(n=1,2,3,\cdots\) into the last equation yields
\[\begin{aligned} a_1 &= {1\over 5}a_0, \\[4pt]a_2 &= {1\over 16}a_1={1\over 80}a_0\cdots \end{aligned}\nonumber \]
This leads us to the second linearly independent solution of \ref{eq:7.4.5}:
\[\begin{aligned}y_2&=x^{2/3}(a_0+{1\over 5}a_0x+{1\over 80}a_0x^2+\cdots)\\&=a_0x^{2/3}(1+{1\over 5}x+{1\over 80}x^2+\cdots)\end{aligned}\nonumber\]
Note that since we used two different values of \(r\), the \(a_0\) in each solution is a different arbitrary constant. So, we replace them by the independent constants \(c_1\) and \(c_2\) to get our general solution of \ref{eq:7.4.5}:
\[y=c_1(1+x+{1\over 8}x^2+\cdots)+c_2x^{2/3}(1+{1\over 5}x+{1\over 80}x^2+\cdots)\nonumber\]
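As a quick check on the arithmetic in this example (an aside, not part of the standard development), the recurrence \(a_n=a_{n-1}/\big((n+r)(3n+3r-2)\big)\) can be iterated for both roots with exact rational arithmetic; the function name below is our own.

```python
from fractions import Fraction

def example6_coeffs(r, nterms, a0=Fraction(1)):
    # Recurrence from Example 6: (n + r)(3n + 3r - 2) a_n = a_{n-1}, n >= 1
    a = [a0]
    for n in range(1, nterms):
        a.append(a[-1] / ((n + r) * (3*n + 3*r - 2)))
    return a

print(example6_coeffs(Fraction(0), 3))      # r = 0:   a_0, a_1, a_2 = 1, 1, 1/8
print(example6_coeffs(Fraction(2, 3), 3))   # r = 2/3: a_0, a_1, a_2 = 1, 1/5, 1/80
```

The computed coefficients match the series in \(y_1\) and \(y_2\) above.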
Example 7
Find a fundamental set of Frobenius solutions and give the general solution of
\[\label{eq:7.4.7} xy''+y=0\]
about \(x_0=0\) on \((0,\infty)\).
Solution
Note that \(x_0=0\) is a regular singular point of \ref{eq:7.4.7}. We now let
\[ y=x^r\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_nx^{n+r}. \nonumber \]
Differentiating this series twice gives us
\[y'=\sum_{n=0}^\infty a_n(n+r)x^{n+r-1} \quad\mbox{ and }\quad y''=\sum_{n=0}^\infty a_n(n+r)(n+r-1)x^{n+r-2}.\nonumber\]
Substituting these into \ref{eq:7.4.7} yields
\[\begin{aligned}&x\sum^\infty_{n=0}(n+r)(n+r-1)a_nx^{n+r-2}+\sum^\infty_{n=0}a_nx^{n+r}\\ &\qquad=\sum^\infty_{n=0}(n+r)(n+r-1)a_nx^{n+r-1}+\sum^\infty_{n=0}a_nx^{n+r}=0.\end{aligned}\nonumber \]
Pulling out the \(n=0\) term of the first sum, we get
\[r(r-1)a_0x^{r-1}+\sum^\infty_{n=1}(n+r)(n+r-1)a_nx^{n+r-1}+\sum^\infty_{n=0}a_nx^{n+r}=0.\nonumber\]
Reindexing we get
\[r(r-1)a_0x^{r-1}+\sum^\infty_{n=1}(n+r)(n+r-1)a_nx^{n+r-1}+\sum^\infty_{n=1}a_{n-1}x^{n+r-1}=0.\nonumber\]
We now collect like terms to obtain
\[r(r-1)a_0x^{r-1}+\sum^\infty_{n=1}[(n+r)(n+r-1)a_n+a_{n-1}]x^{n+r-1}=0.\nonumber\]
Equating coefficients gives us the following
\[\label{eq:7.4.8} r(r-1)a_0=0\]
and
\[(n+r)(n+r-1)a_n+a_{n-1}=0,\quad n=1,2,3,\cdots\nonumber\]
Equation \ref{eq:7.4.8} is our indicial equation; let's take a closer look at what it says.
\[r(r-1)a_0=0.\nonumber\]
As in Example 6, \(a_0=0\) gives only the trivial solution, so we focus on \(r=0\) and \(r=1\). Note that these differ by a nonzero integer, so we are in Case 2: we proceed with hope, but no guarantee, that the method of Frobenius will give us a second linearly independent solution.
For the case \(r=0\):
\(a_0\) is a free variable and \(n(n-1)a_n+a_{n-1}=0, \quad n=1,2,3,\cdots\)
At this point we would normally solve for \(a_n\), but that creates an issue when \(n=1\). So, we need to deal with \(n=1\) before we solve for \(a_n\):
\(n=1\) gives us \(0a_1+a_0=0\), which means \(a_0=0\) and now \(a_1\) is free.
So, now solving for \(a_n\) we get
\(a_n=-{a_{n-1}\over n(n-1)}, \quad n=2,3,4,\cdots\)
Now substituting \(n=2,3,4,\cdots\) into the last equation yields
\[\begin{aligned} a_2 &= -{1\over 2}a_1, \\[4pt]a_3 &= -{1\over 6}a_2={1\over 12}a_1,\\a_4&=-{1\over 12}a_3=-{1\over 144}a_1\cdots \end{aligned}\nonumber \]
This leads us to one solution of \ref{eq:7.4.7}:
\[\begin{aligned}y_1&=x^0(a_1x-{1\over 2}a_1x^2+{1\over 12}a_1x^3-{1\over 144}a_1x^4+\cdots)\\&=a_1(x-{1\over 2}x^2+{1\over 12}x^3-{1\over 144}x^4+\cdots)\end{aligned}\nonumber\]
As in Example 6, note that \(r=0\) gives us the form of an ordinary point power series
\[y=x^r\sum_{n=0}^\infty a_nx^n=x^0\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_nx^n,\nonumber\]
which only yielded one solution instead of the two we would be guaranteed if \(x_0=0\) were an ordinary point. Now, we hope \(r=1\) will yield the second linearly independent solution of \ref{eq:7.4.7}.
For the case \(r=1\):
\(a_0\) is once again a free variable and \(a_n={-a_{n-1}\over n(n+1)}, \quad n=1,2,3,\cdots\)
Now substituting \(n=1,2,3,\cdots\) into the last equation yields
\[\begin{aligned} a_1 &= -{1\over 2}a_0, \\[4pt]a_2 &= -{1\over 6}a_1={1\over 12}a_0,\\a_3&=-{1\over 12}a_2=-{1\over 144}a_0,\cdots \end{aligned}\nonumber \]
This leads us to what we hope is the second linearly independent solution of \ref{eq:7.4.7}:
\[\begin{aligned}y_2&=x(a_0-{1\over 2}a_0x+{1\over 12}a_0x^2-{1\over 144}a_0x^3+\cdots)\\&=a_0(x-{1\over 2}x^2+{1\over 12}x^3-{1\over 144}x^4\cdots)\\&=y_1\end{aligned}\nonumber\]
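Indeed, iterating both recurrences with exact rational arithmetic confirms that the two branches produce the same coefficients (an illustrative aside; the function names are our own):

```python
from fractions import Fraction

def r0_branch(nterms):
    # r = 0 branch of Example 7: a_0 = 0, a_1 free (take a_1 = 1),
    # a_n = -a_{n-1}/(n(n-1)) for n >= 2; the solution is sum a_n x^n.
    a = [Fraction(0), Fraction(1)]
    for n in range(2, nterms):
        a.append(-a[-1] / (n * (n - 1)))
    return a

def r1_branch(nterms):
    # r = 1 branch: a_0 free (take a_0 = 1),
    # a_n = -a_{n-1}/(n(n+1)) for n >= 1; the solution is sum a_n x^{n+1}.
    a = [Fraction(1)]
    for n in range(1, nterms):
        a.append(-a[-1] / (n * (n + 1)))
    return a

# Coefficients of x, x^2, x^3, x^4, ... agree: 1, -1/2, 1/12, -1/144, ...
print(r0_branch(6)[1:] == r1_branch(5))  # True
```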
However, as you can see, this is the same solution we obtained from \(r=0\), so the method of Frobenius did not give us the second linearly independent solution we need. That leaves us with a problem, since we know there must be two linearly independent solutions to \ref{eq:7.4.7} but at this point we only have one. Recall that in Section 5.2 we found that if we know one solution \(y_1\), we can find a second linearly independent solution from
\[y_2=y_1\int{e^{-\int{p(x)dx}}\over y_1^2}dx\nonumber\]
While this may seem daunting, and it is cumbersome, it is fairly straightforward. To make it easier to follow we will do this in parts.
We first note that \(p(x)=0\) and \(y_1=x-{1\over 2}x^2+{1\over 12}x^3-{1\over 144}x^4+\cdots.\)
So,
\[e^{-\int{p(x)dx}}=e^{-\int{0}\,dx}=e^{-c},\nonumber\]
which is a constant. To make our calculations easier we take this constant to be \(1\) (it would be absorbed into \(c_2\) later anyway), so \(e^{-\int{p(x)dx}}=1.\)
Now we calculate
\[y_1^2=(x-{1\over 2}x^2+{1\over 12}x^3-{1\over 144}x^4+\cdots)(x-{1\over 2}x^2+{1\over 12}x^3-{1\over 144}x^4+\cdots)=x^2-x^3+{5\over 12}x^4-{7\over 72}x^5+\cdots.\nonumber\]
So,
\[{e^{-\int{p(x)dx}}\over y_1^2}={1\over x^2-x^3+{5\over 12}x^4-{7\over 72}x^5+\cdots}={1\over x^2}+{1\over x}+{7\over 12}+{19\over 72}x+\cdots\nonumber\]
This leads to
\[\int{e^{-\int{p(x)dx}}\over y_1^2}dx=\int({1\over x^2}+{1\over x}+{7\over 12}+{19\over 72}x+\cdots)dx=-{1\over x}+\ln x+{7\over 12}x+{19\over 144}x^2+\cdots\nonumber\]
This finally leads us to
\[y_2=y_1\int{e^{-\int{p(x)dx}}\over y_1^2}dx=y_1(-{1\over x}+\ln x+{7\over 12}x+{19\over 144}x^2+\cdots)=y_1\ln x+y_1(-{1\over x}+{7\over 12}x+{19\over 144}x^2+\cdots)\nonumber\]
and our general solution to \ref{eq:7.4.7} is
\[y=c_1y_1+c_2[y_1\ln x+y_1(-{1\over x}+{7\over 12}x+{19\over 144}x^2+\cdots)]\nonumber\]
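The series manipulations in the reduction-of-order computation can be verified with exact rational arithmetic. Here is a minimal sketch; the helpers `mul` and `inv` are our own truncated-power-series routines, and we work with \(y_1/x\) so that the series being inverted has a nonzero constant term.

```python
from fractions import Fraction

# y_1/x = 1 - x/2 + x^2/12 - x^3/144 + ... ; store the first four coefficients
b = [Fraction(1), Fraction(-1, 2), Fraction(1, 12), Fraction(-1, 144)]

def mul(a, c):
    # product of two truncated power series, keeping len(a) terms
    return [sum(a[i] * c[k - i] for i in range(k + 1)) for k in range(len(a))]

def inv(a):
    # reciprocal of a truncated power series with a[0] != 0
    c = [1 / a[0]]
    for k in range(1, len(a)):
        c.append(-sum(a[i] * c[k - i] for i in range(1, k + 1)) / a[0])
    return c

sq = mul(b, b)   # (y_1/x)^2, so y_1^2 = x^2 - x^3 + (5/12)x^4 - (7/72)x^5 + ...
rec = inv(sq)    # (x/y_1)^2, so 1/y_1^2 = 1/x^2 + 1/x + 7/12 + (19/72)x + ...
print(sq)        # coefficients 1, -1, 5/12, -7/72
print(rec)       # coefficients 1, 1, 7/12, 19/72
```

The computed coefficients reproduce the expansions of \(y_1^2\) and \(1/y_1^2\) used above; integrating \(1/x^2+1/x+{7\over12}+{19\over72}x+\cdots\) term by term then gives the \(\ln x\) term that distinguishes \(y_2\).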
We will not discuss the third case, when the roots of the indicial equation are equal, since a repeated root clearly yields only one Frobenius solution, and the second solution is then found by exactly the same reduction-of-order method as in Example 7.


