# 6.3: Series Solutions and Convergence

In the last section, we saw how to find series solutions to second order linear differential equations, but we did not investigate the convergence of these series. In this discussion, we will derive an alternate method for finding series solutions. We will also learn how to determine the radius of convergence of the solutions just by taking a quick glance at the differential equation.

## Example 1

Consider the differential equation

\[ y'' + y' + ty = 0 \]

As before, we seek a series solution

\[ y = a_0 + a_1t + a_2t^2 + a_3t^3 + a_4t^4 + \cdots \]

The theory for Taylor Series states that

\[ n!\; a_n = y^{(n)}(0) \]
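This fact comes from differentiating the series term by term \(n\) times: the terms of degree below \(n\) vanish, and every remaining term except the first still carries a factor of \(t\),

\[ y^{(n)}(t) = n!\, a_n + \dfrac{(n+1)!}{1!}\, a_{n+1}t + \dfrac{(n+2)!}{2!}\, a_{n+2}t^2 + \cdots \]

so evaluating at \(t = 0\) leaves \(y^{(n)}(0) = n!\, a_n\).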

We have

\[ y'' = -y' -ty \]

Plugging in 0 gives

\[ 2!\, a_2 = y''(0) = -y'(0) + 0 = -a_1 \]

\[ a_2 = -\dfrac{a_1}{2} \]

Taking the derivative of the differential equation gives

\[ (y'' + y' + ty)' = y''' + y'' + ty' + y = 0 \]

or

\[ y''' = -y'' - ty' - y\]

Plugging in zero gives

\[ 3!\, a_3 = a_1 - a_0 \]

\[ a_3 = \dfrac{a_1}{6} - \dfrac{a_0}{6}\]

Taking another derivative gives

\[ (y''' + y'' + ty' + y)' = y^{(iv)} + y''' + ty'' + 2y' = 0 \]

or

\[ y^{(iv)} = -y''' - ty'' - 2y' \]

Plugging in zero gives

\[ 4! \,a_4 = -a_1 + a_0 - 2a_1 \]

\[ a_4 = -\dfrac{3a_1}{24} + \dfrac{a_0}{24} = -\dfrac{a_1}{8} + \dfrac{a_0}{24}\]
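As a check, the first few coefficients can be computed mechanically by substituting a truncated power series into the equation and setting the coefficient of each power of \(t\) to zero. The sketch below assumes the third-party SymPy library; the symbol names are ours.

```python
import sympy as sp

t, a0, a1 = sp.symbols('t a0 a1')
N = 6  # a degree-5 truncation of the series is enough to recover a2..a4

# a0 and a1 stay arbitrary; a2, a3, ... are the unknowns to solve for.
a = [a0, a1] + [sp.Symbol(f'a{i}') for i in range(2, N)]
y = sum(a[i] * t**i for i in range(N))

# Substitute the truncated series into y'' + y' + t*y and expand.
residual = sp.expand(sp.diff(y, t, 2) + sp.diff(y, t) + t * y)

# The coefficient of each power t^n must vanish.  Each such equation is
# linear in a_{n+2}, so we can solve for the coefficients one at a time.
sols = {}
for n in range(N - 2):
    eq = residual.coeff(t, n).subs(sols)
    sols[a[n + 2]] = sp.solve(eq, a[n + 2])[0]

# Agreement with the hand computation: a2 = -a1/2, a3 = (a1 - a0)/6,
# a4 = a0/24 - a1/8.
assert sp.simplify(sols[a[2]] - (-a1 / 2)) == 0
assert sp.simplify(sols[a[3]] - (a1 - a0) / 6) == 0
assert sp.simplify(sols[a[4]] - (a0 / 24 - a1 / 8)) == 0
```

Solving order by order like this is equivalent to the repeated-differentiation method above, since matching the coefficient of \(t^n\) is the same as matching \(y^{(n+2)}(0)\).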

The important thing to note here is that all of the coefficients can be written in terms of the first two. To come up with a theorem regarding this, we first need a definition.

## Definition: Analytic Function

A function \(f(x)\) is called *analytic* at \(x_0\) if \(f(x)\) is equal to its Taylor series about \(x_0\) on some open interval containing \(x_0\).

It turns out that if \(p(x)\) and \(q(x)\) are analytic at \(x_0\), then there always exists a power series solution to the corresponding differential equation. We state this fact below without proof. If \(x_0\) is a point at which \(p(x)\) and \(q(x)\) are both analytic, then \(x_0\) is called an *ordinary point* of the differential equation.

## Theorem

Let \(x_0\) be an ordinary point of the differential equation \[ L(y) = y'' + p(x)y' + q(x)y = 0 \]

Then the general solution can be represented by the power series \[ y= \sum_{n=0}^\infty a_n(x-x_0)^n = a_0\,y_1(x) + a_1\,y_2(x) \]

where \(a_0\) and \(a_1\) are arbitrary constants and \(y_1\) and \(y_2\) are analytic at \(x_0\). The radii of convergence of the series for \(y_1\) and \(y_2\) are at least as large as the minimum of the radii of convergence of the series for \(p\) and \(q\).

**Remark**: The easiest way to find the radius of convergence of most functions is by using the following fact:

If \(f(x)\) is an analytic function for all \(x\), then the radius of convergence for \(1/f(x)\) is the distance from the center of convergence to the closest root (possibly complex) of \(f(x)\).

## Example 2

Find a lower bound for the radius of convergence of series solutions about \(x = 1\) for the differential equation

\[ (x^2 + 4)\, y'' + \sin(x)\, y' + e^x\, y = 0 \]

**Solution**

We have

\[ p(x) = \dfrac{\sin x}{x^2 + 4} \]

\[ q(x) = \dfrac{e^x}{x^2 + 4} \]

Both of these are quotients of analytic functions. The roots of \(x^2 + 4\) are

\( 2i \) and \(-2i\)

The distance from \(1\) to \(2i\) is the same as the distance from \((1,0)\) to \((0,2)\), which is \( \sqrt{1^2 + 2^2} = \sqrt{5} \). We get the same distance from \(1\) to \(-2i\). Hence the radii of convergence of the solutions are both at least \( \sqrt{5} \).
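The distance computation can be double-checked numerically with Python's built-in complex arithmetic (a quick sketch; the variable names are ours):

```python
import math

x0 = 1 + 0j        # expansion point x = 1, viewed in the complex plane
roots = [2j, -2j]  # roots of x^2 + 4

# Lower bound on the radius of convergence: distance to the nearest root.
radius = min(abs(x0 - r) for r in roots)

print(radius)                               # 2.23606797749979
assert math.isclose(radius, math.sqrt(5))   # the distance is sqrt(5)
```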

### Contributors

- Larry Green (Lake Tahoe Community College)