
4.3: Singular Points


    The power series method does not always give us the full general solution to a differential equation. Problems can arise when the differential equation has singular points. The simplest equations having singular points are Cauchy-Euler equations,

    \[a x^{2} y^{\prime \prime}+b x y^{\prime}+c y=0 \nonumber \]

    A few examples are sufficient to demonstrate the types of problems that can occur.

    Example \(\PageIndex{1}\)

    Find the series solutions for the Cauchy-Euler equation,

    \[a x^{2} y^{\prime \prime}+b x y^{\prime}+c y=0 \nonumber \]

    for the cases i. \(a=1, b=-4, c=6\), ii. \(a=1, b=2, c=-6\), and iii. \(a=1, b=1, c=6\).

    Solution

    As before, we insert

    \(y(x)=\sum_{n=0}^{\infty} d_{n} x^{n}, \quad y^{\prime}(x)=\sum_{n=1}^{\infty} n d_{n} x^{n-1}, \quad y^{\prime \prime}(x)=\sum_{n=2}^{\infty} n(n-1) d_{n} x^{n-2}\)

    into the differential equation to obtain

    \[\begin{aligned} 0 &=a x^{2} y^{\prime \prime}+b x y^{\prime}+c y \\[4pt] &=a x^{2} \sum_{n=2}^{\infty} n(n-1) d_{n} x^{n-2}+b x \sum_{n=1}^{\infty} n d_{n} x^{n-1}+c \sum_{n=0}^{\infty} d_{n} x^{n} \\[4pt] &=a \sum_{n=0}^{\infty} n(n-1) d_{n} x^{n}+b \sum_{n=0}^{\infty} n d_{n} x^{n}+c \sum_{n=0}^{\infty} d_{n} x^{n} \\[4pt] &=\sum_{n=0}^{\infty}[a n(n-1)+b n+c] d_{n} x^{n} \end{aligned}\label{4.25} \]

    Here we lowered the limits on the first two sums, since \(n(n-1)\) vanishes for \(n=0,1\) and \(n\) vanishes for \(n=0\), so the added terms are all zero.

    Setting all coefficients to zero, we have

    \[\left[a n^{2}+(b-a) n+c\right] d_{n}=0, \quad n=0,1, \ldots\nonumber \]

    Therefore, all of the coefficients vanish, \(d_{n}=0\), except at the roots of \(a n^{2}+(b-a) n+c=0\).

    In the first case, \(a=1, b=-4\), and \(c=6\), we have

    \[0=n^{2}+(-4-1) n+6=n^{2}-5 n+6=(n-2)(n-3)\nonumber \]

    Thus, \(d_{n}=0, n \neq 2,3\). This leaves two terms in the series, reducing to the polynomial \(y(x)=d_{2} x^{2}+d_{3} x^{3}\).

    In the second case, \(a=1, b=2\), and \(c=-6\), we have

    \[0=n^{2}+(2-1) n-6=n^{2}+n-6=(n-2)(n+3) \nonumber \]

    Thus, \(d_{n}=0\) for \(n \neq 2,-3\). Since the indices \(n\) are nonnegative, this leaves one term in the solution, \(y(x)=d_{2} x^{2}\). So, we do not have the most general solution, since we are missing a second linearly independent solution. We can use the Method of Reduction of Order from Section 2.2.1, or we can use what we know about Cauchy-Euler equations, to show that the general solution is

    \[y(x)=c_{1} x^{2}+c_{2} x^{-3} \nonumber \]

    Finally, in the third case, \(a=1, b=1\), and \(c=6\), we have

    \[0=n^{2}+(1-1) n+6=n^{2}+6 \nonumber \]

    Since there are no real solutions to this equation, \(d_{n}=0\) for all \(n\). Again, we can use what we know about Cauchy-Euler equations to show that the general solution is

    \[y(x)=c_{1} \cos (\sqrt{6} \ln x)+c_{2} \sin (\sqrt{6} \ln x)\nonumber \]
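    All three general solutions above can be confirmed by direct substitution into \(a x^{2} y^{\prime \prime}+b x y^{\prime}+c y\). The following SymPy sketch is illustrative only (it is not part of the text's method and assumes SymPy is available):

    ```python
    import sympy as sp

    x = sp.symbols('x', positive=True)  # x > 0 so that log(x) is real
    c1, c2 = sp.symbols('c1 c2')

    def residual(a, b, c, y):
        """Left-hand side a x^2 y'' + b x y' + c y for a candidate solution y."""
        return sp.simplify(a * x**2 * sp.diff(y, x, 2) + b * x * sp.diff(y, x) + c * y)

    # Case i: a=1, b=-4, c=6 with y = c1 x^2 + c2 x^3
    r1 = residual(1, -4, 6, c1 * x**2 + c2 * x**3)
    # Case ii: a=1, b=2, c=-6 with y = c1 x^2 + c2 x^{-3}
    r2 = residual(1, 2, -6, c1 * x**2 + c2 / x**3)
    # Case iii: a=1, b=1, c=6 with y = c1 cos(sqrt(6) ln x) + c2 sin(sqrt(6) ln x)
    y3 = c1 * sp.cos(sp.sqrt(6) * sp.log(x)) + c2 * sp.sin(sp.sqrt(6) * sp.log(x))
    r3 = residual(1, 1, 6, y3)

    print(r1, r2, r3)  # each residual should simplify to 0
    ```

    Each zero residual confirms that the corresponding pair of functions solves its Cauchy-Euler equation.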

    In the last example, we have seen that the power series method does not always work. The key is to write the differential equation in the form

    \[y^{\prime \prime}(x)+p(x) y^{\prime}(x)+q(x) y(x)=0 \nonumber \]

    We already know that \(x=0\) is a singular point of the Cauchy-Euler equation. Putting the equation in the latter form, we have

    \[y^{\prime \prime}+\dfrac{b}{a x} y^{\prime}+\dfrac{c}{a x^{2}} y=0 \nonumber \]

    We see that \(p(x)=b /(a x)\) and \(q(x)=c /\left(a x^{2}\right)\) are not defined at \(x=0\). So, we do not expect a convergent power series solution in the neighborhood of \(x=0\).

    Theorem \(\PageIndex{1}\)

    The initial value problem

    \[y^{\prime \prime}(x)+p(x) y^{\prime}(x)+q(x) y(x)=0, \quad y\left(x_{0}\right)=\alpha, \quad y^{\prime}\left(x_{0}\right)=\beta \nonumber \]

    has a unique Taylor series solution converging in the interval \(\left|x-x_{0}\right|<R\) if both \(p(x)\) and \(q(x)\) can be represented by convergent Taylor series converging for \(\left|x-x_{0}\right|<R\). (Then, \(p(x)\) and \(q(x)\) are said to be analytic at \(x=x_{0}\).) As noted earlier, \(x_{0}\) is then called an ordinary point. Otherwise, if either, or both, \(p(x)\) and \(q(x)\) are not analytic at \(x_{0}\), then \(x_{0}\) is called a singular point.

    Example \(\PageIndex{2}\)

    Determine if a power series solution exists for \(x y^{\prime \prime}+2 y^{\prime}+x y=0\) near \(x=0\).

    Solution

    Putting this equation in the form

    \[y^{\prime \prime}+\dfrac{2}{x} y^{\prime}+y=0 \nonumber \]

    we see that \(p(x)=2 / x\) is not defined at \(x=0\), so \(x=0\) is a singular point. Let’s see how far we can get towards obtaining a series solution.

    We let

    \(y(x)=\sum_{n=0}^{\infty} c_{n} x^{n}, \quad y^{\prime}(x)=\sum_{n=1}^{\infty} n c_{n} x^{n-1}, \quad y^{\prime \prime}(x)=\sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2}\),

    into the differential equation to obtain

    \[ \begin{aligned} 0 &=x y^{\prime \prime}+2 y^{\prime}+x y \\[4pt] &=x \sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-2}+2 \sum_{n=1}^{\infty} n c_{n} x^{n-1}+x \sum_{n=0}^{\infty} c_{n} x^{n} \\[4pt] &=\sum_{n=2}^{\infty} n(n-1) c_{n} x^{n-1}+\sum_{n=1}^{\infty} 2 n c_{n} x^{n-1}+\sum_{n=0}^{\infty} c_{n} x^{n+1} \\[4pt] &=2 c_{1}+\sum_{n=2}^{\infty}[n(n-1)+2 n] c_{n} x^{n-1}+\sum_{n=0}^{\infty} c_{n} x^{n+1} \end{aligned} \label{4.26} \]

    Here we combined the first two series and pulled out the first term of the second series.

    We can re-index the series. In the first series we let \(k=n-1\) and in the second series we let \(k=n+1\). This gives

    \[ \begin{aligned} 0 &=2 c_{1}+\sum_{n=2}^{\infty} n(n+1) c_{n} x^{n-1}+\sum_{n=0}^{\infty} c_{n} x^{n+1} \\[4pt] &=2 c_{1}+\sum_{k=1}^{\infty}(k+1)(k+2) c_{k+1} x^{k}+\sum_{k=1}^{\infty} c_{k-1} x^{k} \\[4pt] &=2 c_{1}+\sum_{k=1}^{\infty}\left[(k+1)(k+2) c_{k+1}+c_{k-1}\right] x^{k} \end{aligned}\label{4.27} \]

    Setting coefficients to zero, we have \(c_{1}=0\) and

    \[c_{k+1}=-\dfrac{1}{(k+2)(k+1)} c_{k-1}, \quad k=1,2, \ldots \nonumber \]

    Therefore, we have \(c_{n}=0\) for \(n=1,3,5, \ldots\). For the even indices, we have

    \[ \begin{array}{ll} k=1: & c_{2}=-\dfrac{1}{3(2)} c_{0}=-\dfrac{c_{0}}{3 !} \\[4pt] k=3: & c_{4}=-\dfrac{1}{5(4)} c_{2}=\dfrac{c_{0}}{5 !} \\[4pt] k=5: & c_{6}=-\dfrac{1}{7(6)} c_{4}=-\dfrac{c_{0}}{7 !} \\[4pt] k=7: & c_{8}=-\dfrac{1}{9(8)} c_{6}=\dfrac{c_{0}}{9 !} \end{array} \label{4.28} \]

    We can see the pattern and write the solution in closed form.

    \[ \begin{aligned} y(x) &=\sum_{n=0}^{\infty} c_{n} x^{n} \\[4pt] &=c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\ldots \\[4pt] &=c_{0}\left(1-\dfrac{x^{2}}{3 !}+\dfrac{x^{4}}{5 !}-\dfrac{x^{6}}{7 !}+\dfrac{x^{8}}{9 !}-\cdots\right) \\[4pt] &=c_{0} \dfrac{1}{x}\left(x-\dfrac{x^{3}}{3 !}+\dfrac{x^{5}}{5 !}-\dfrac{x^{7}}{7 !}+\dfrac{x^{9}}{9 !}-\cdots\right) \\[4pt] &=c_{0} \dfrac{\sin x}{x} \end{aligned} \label{4.29} \]

    We have another case where the power series method does not yield a general solution.
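    The recurrence and the closed form can be cross-checked numerically. This sketch (illustrative, not from the text) iterates \(c_{k+1}=-c_{k-1} /[(k+2)(k+1)]\) with \(c_{0}=1\) and \(c_{1}=0\), verifies the sign pattern found above, and compares the partial sum with \(\sin x / x\) at a sample point:

    ```python
    import math

    # Coefficients from the recurrence c_{k+1} = -c_{k-1} / ((k+2)(k+1)),
    # with c_0 = 1 chosen freely and c_1 = 0 forced by the 2 c_1 term.
    c = [1.0, 0.0]
    for k in range(1, 12):
        c.append(-c[k - 1] / ((k + 2) * (k + 1)))

    # Odd coefficients vanish; even ones alternate as (-1)^m / (2m+1)!.
    for m in range(4):
        assert c[2 * m + 1] == 0.0
        assert abs(c[2 * m] - (-1) ** m / math.factorial(2 * m + 1)) < 1e-15

    # The truncated series should reproduce sin(x)/x at a sample point.
    x = 0.5
    partial = sum(cn * x**n for n, cn in enumerate(c))
    print(partial, math.sin(x) / x)  # the two values agree to machine precision
    ```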

    In the last example we did not find the general solution. However, we did find one solution, \(y_{1}(x)=\dfrac{\sin x}{x}\). So, we can use the Method of Reduction of Order from Section 2.2.1 to obtain a second linearly independent solution. This is carried out in the next example.

    Example \(\PageIndex{3}\)

    Let \(y_{1}(x)=\dfrac{\sin x}{x}\) be one solution of \(x y^{\prime \prime}+2 y^{\prime}+x y=0\). Find a second linearly independent solution.

    Solution

    Let \(y(x) = v(x)y_1(x).\) Inserting this into the differential equation, we have

    \[ \begin{aligned} 0 &=x y^{\prime \prime}+2 y^{\prime}+x y \\[4pt] &=x\left(v y_{1}\right)^{\prime \prime}+2\left(v y_{1}\right)^{\prime}+x v y_{1} \\[4pt] &=x\left(v^{\prime} y_{1}+v y_{1}^{\prime}\right)^{\prime}+2\left(v^{\prime} y_{1}+v y_{1}^{\prime}\right)+x v y_{1} \\[4pt] &=x\left(v^{\prime \prime} y_{1}+2 v^{\prime} y_{1}^{\prime}+v y_{1}^{\prime \prime}\right)+2\left(v^{\prime} y_{1}+v y_{1}^{\prime}\right)+x v y_{1} \\[4pt] &=x\left(v^{\prime \prime} y_{1}+2 v^{\prime} y_{1}^{\prime}\right)+2 v^{\prime} y_{1}+v\left(x y_{1}^{\prime \prime}+2 y_{1}^{\prime}+x y_{1}\right) \\[4pt] &=x\left[\dfrac{\sin x}{x} v^{\prime \prime}+2\left(\dfrac{\cos x}{x}-\dfrac{\sin x}{x^{2}}\right) v^{\prime}\right]+2 \dfrac{\sin x}{x} v^{\prime} \\[4pt] &=\sin x \, v^{\prime \prime}+2 \cos x \, v^{\prime} . \end{aligned} \label{4.30} \]

    This is a first order separable differential equation for \(z=v^{\prime} .\) Thus,

    \[\sin x \dfrac{d z}{d x}=-2 z \cos x \nonumber \]

    or

    \[\dfrac{d z}{z}=-2 \cot x d x \nonumber \]

    Integrating, we have

    \[\ln |z|=2 \ln |\csc x|+C \nonumber \]

    Setting \(C=0\), we have \(v^{\prime}=z=\csc ^{2} x\), or \(v=-\cot x .\) This gives the second solution as

    \[y(x)=v(x) y_{1}(x)=-\cot x \dfrac{\sin x}{x}=-\dfrac{\cos x}{x}.\nonumber \]
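    As an illustrative check (not part of the text's derivation, and assuming SymPy is available), both solutions and their linear independence can be confirmed by computing the residual of the equation and the Wronskian:

    ```python
    import sympy as sp

    x = sp.symbols('x')
    y1 = sp.sin(x) / x
    y2 = -sp.cos(x) / x

    # Residual of x y'' + 2 y' + x y for each candidate solution.
    ode = lambda y: sp.simplify(x * sp.diff(y, x, 2) + 2 * sp.diff(y, x) + x * y)
    print(ode(y1), ode(y2))  # both residuals should simplify to 0

    # A nonzero Wronskian y1 y2' - y1' y2 confirms linear independence.
    W = sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)
    print(W)
    ```

    The Wronskian works out to \(1 / x^{2}\), which is nonzero away from the singular point, so \(y_{1}\) and \(y_{2}\) are linearly independent.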


    This page titled 4.3: Singular Points is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.