# 12.2: Second Order Linear Differential Equations


Second order differential equations are typically harder to solve than first order equations. In most cases students are only exposed to second order linear differential equations. A general form for a *second order linear differential equation* is given by \[a(x) y^{\prime \prime}(x)+b(x) y^{\prime}(x)+c(x) y(x)=f(x) .\label{eq:1}\]

One can rewrite this equation using operator terminology. Namely, one first defines the differential operator \(L=a(x) D^{2}+b(x) D+c(x)\), where \(D=\frac{d}{d x}\). Then equation \(\eqref{eq:1}\) becomes \[L y=f .\label{eq:2}\]

The solutions of linear differential equations are found by making use of the linearity of \(L\). Namely, we consider the *vector space*\(^{1}\) consisting of real-valued functions over some domain. Let \(f\) and \(g\) be vectors in this function space. \(L\) is a *linear operator* if for any two vectors \(f\) and \(g\) and scalar \(\alpha\), we have

- \(L(f+g)=L f+L g\),
- \(L(\alpha f)=\alpha L f\).

One typically solves \(\eqref{eq:1}\) by finding the general solution of the homogeneous problem, \[L y_{h}=0\nonumber \] and a particular solution of the nonhomogeneous problem, \[L y_{p}=f .\nonumber \] Then, the general solution of \(\eqref{eq:1}\) is simply given as \(y=y_{h}+y_{p}\). This is true because of the linearity of \(L\). Namely, \[\begin{align} L y &=L\left(y_{h}+y_{p}\right)\nonumber \\ &=L y_{h}+L y_{p}\nonumber \\ &=0+f=f .\label{eq:3} \end{align}\]

There are several methods for finding a particular solution of a nonhomogeneous differential equation. These include pure guessing, the Method of Undetermined Coefficients, the Method of Variation of Parameters, and Green’s functions. We will review these methods later in the chapter.

Determining solutions to the homogeneous problem, \(L y_{h}=0\), is not always easy. However, many now famous mathematicians and physicists have studied a variety of second order linear equations and they have saved us the trouble of finding solutions to the differential equations that often appear in applications. We will encounter many of these in the following chapters. We will first begin with some simple homogeneous linear differential equations.

Linearity is also useful in producing the general solution of a homogeneous linear differential equation. If \(y_{1}\) and \(y_{2}\) are solutions of the homogeneous equation, then the *linear combination* \(y=c_{1} y_{1}+c_{2} y_{2}\) is also a solution of the homogeneous equation. In fact, if \(y_{1}\) and \(y_{2}\) are *linearly independent*,\(^{2}\) then \(y=c_{1} y_{1}+c_{2} y_{2}\) is the general solution of the homogeneous problem.

A set of functions \(\left\{y_{i}(x)\right\}_{i=1}^{n}\) is a linearly independent set if and only if \[c_{1} y_{1}(x)+\ldots+c_{n} y_{n}(x)=0\nonumber \] implies \(c_{i}=0\), for \(i=1, \ldots, n\).

For \(n=2\), the condition reads \(c_{1} y_{1}(x)+c_{2} y_{2}(x)=0\). If \(y_{1}\) and \(y_{2}\) are linearly dependent, then some coefficient, say \(c_{2}\), is nonzero, and \(y_{2}(x)=-\frac{c_{1}}{c_{2}} y_{1}(x)\) is a constant multiple of \(y_{1}(x)\).

Linear independence can also be established by looking at the Wronskian of the solutions. For a second order differential equation the Wronskian is defined as \[W\left(y_{1}, y_{2}\right)=y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x) .\label{eq:4}\] The solutions are linearly independent if the Wronskian is not zero.
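As an optional check, not part of the original text, the Wronskian computation can be carried out with a computer algebra system. The sketch below uses Python's SymPy library; the helper `wronskian2` and the sample function pairs are illustrative choices, not from the text.

```python
import sympy as sp

x = sp.symbols('x')

def wronskian2(y1, y2):
    """Wronskian W(y1, y2) = y1*y2' - y1'*y2 for two functions of x."""
    return sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)

# Independent pair: e^{-2x} and e^{3x} (two exponential solutions).
W_indep = wronskian2(sp.exp(-2*x), sp.exp(3*x))   # 5*exp(x), never zero

# Dependent pair: e^{x} and 2e^{x} (one is a multiple of the other).
W_dep = wronskian2(sp.exp(x), 2*sp.exp(x))        # identically zero
```

A nonvanishing Wronskian signals independence; the dependent pair gives zero, as expected.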

## Constant Coefficient Equations

The simplest second order differential equations are those with constant coefficients. The general form for a homogeneous constant coefficient second order linear differential equation is given as \[a y^{\prime \prime}(x)+b y^{\prime}(x)+c y(x)=0,\label{eq:5}\] where \(a, b\), and \(c\) are constants.

Solutions to \(\eqref{eq:5}\) are obtained by making a guess of \(y(x)=e^{r x}\). Inserting this guess into \(\eqref{eq:5}\) leads to the characteristic equation \[a r^{2}+b r+c=0 \text {. }\label{eq:6}\]

Namely, we compute the derivatives of \(y(x)=e^{r x}\) to get \(y^{\prime}(x)=r e^{r x}\) and \(y^{\prime \prime}(x)=r^{2} e^{r x}\). Inserting these into \(\eqref{eq:5}\), we have \[0=a y^{\prime \prime}(x)+b y^{\prime}(x)+c y(x)=\left(a r^{2}+b r+c\right) e^{r x} .\nonumber \] Since the exponential is never zero, we find that \(a r^{2}+b r+c=0\).
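This substitution can also be done symbolically. The sketch below, an illustrative aside using SymPy, factors the exponential out of the left-hand side to expose the characteristic polynomial:

```python
import sympy as sp

# x is the independent variable, r the trial exponent, a, b, c the coefficients.
x, r, a, b, c = sp.symbols('x r a b c')
y = sp.exp(r*x)

# Substitute the guess y = e^{rx} into a y'' + b y' + c y.
expr = a*sp.diff(y, x, 2) + b*sp.diff(y, x) + c*y

# Dividing out the (never-zero) exponential leaves the characteristic polynomial.
char_poly = sp.simplify(expr / sp.exp(r*x))   # a*r**2 + b*r + c
```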

The characteristic equation for \(a y^{\prime \prime}+b y^{\prime}+c y=0\) is \(a r^{2}+b r+c=0\). Solutions of this quadratic equation lead to solutions of the differential equation.

Two real, distinct roots, \(r_{1}\) and \(r_{2}\), give solutions of the form \[y(x)=c_{1} e^{r_{1} x}+c_{2} e^{r_{2} x}.\nonumber \]

The roots of this equation, \(r_{1}\) and \(r_{2}\), in turn lead to three types of solutions depending upon the nature of the roots. In general, we have two linearly independent solutions, \(y_{1}(x)=e^{r_{1} x}\) and \(y_{2}(x)=e^{r_{2} x}\), and the general solution is given by a linear combination of these solutions, \[y(x)=c_{1} e^{r_{1} x}+c_{2} e^{r_{2} x} .\nonumber \] For two real, distinct roots we are done. However, when the roots are real but equal, or are complex conjugates, we need to do a little more work to obtain usable solutions.

###### Example \(\PageIndex{1}\)

Solve the initial value problem \(y^{\prime \prime}-y^{\prime}-6 y=0\), \(y(0)=2\), \(y^{\prime}(0)=0\).

###### Solution

The characteristic equation for this problem is \(r^{2}-r-6=0\). The roots of this equation are found as \(r=-2,3\). Therefore, the general solution can be quickly written down: \[y(x)=c_{1} e^{-2 x}+c_{2} e^{3 x} .\nonumber \]

Note that there are two arbitrary constants in the general solution. Therefore, one needs two pieces of information to find a particular solution. Of course, we have the needed information in the form of the initial conditions.

One also needs to evaluate the first derivative \[y^{\prime}(x)=-2 c_{1} e^{-2 x}+3 c_{2} e^{3 x}\nonumber \] in order to attempt to satisfy the initial conditions. Evaluating \(y\) and \(y^{\prime}\) at \(x=0\) yields \[\begin{align} &2=c_{1}+c_{2}\nonumber \\ &0=-2 c_{1}+3 c_{2}\label{eq:7} \end{align}\] These two equations in two unknowns can readily be solved to give \(c_{1}=6 / 5\) and \(c_{2}=4 / 5\). Therefore, the solution of the initial value problem is obtained as \(y(x)=\frac{6}{5} e^{-2 x}+\frac{4}{5} e^{3 x}\).
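This initial value problem can be checked with a computer algebra system; the SymPy call below is an illustrative verification, not part of the original solution method.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' - y' - 6y = 0 with y(0) = 2, y'(0) = 0.
ode = sp.Eq(y(x).diff(x, 2) - y(x).diff(x) - 6*y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 2, y(x).diff(x).subs(x, 0): 0})
```

The returned solution agrees with \(y(x)=\frac{6}{5} e^{-2 x}+\frac{4}{5} e^{3 x}\) found above.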

Repeated roots, \(r_{1}=r_{2}=r\), give solutions of the form \[y(x)=\left(c_{1}+c_{2} x\right) e^{r x} .\nonumber \]

In the case when there is a repeated real root, one has only one solution, \(y_{1}(x)=e^{r x}\). The question is how does one obtain the second linearly independent solution? Since the solutions should be independent, we must have that the ratio \(y_{2}(x) / y_{1}(x)\) is not a constant. So, we guess the form \(y_{2}(x)=v(x) y_{1}(x)=v(x) e^{r x}\). (This process is called the Method of Reduction of Order.)

For constant coefficient second order equations, we can write the equation as \[(D-r)^{2} y=0 \text {, }\nonumber \] where \(D=\frac{d}{d x}\). We now insert \(y_{2}(x)=v(x) e^{r x}\) into this equation. First we compute \[(D-r) v e^{r x}=v^{\prime} e^{r x} .\nonumber \] Then, \[0=(D-r)^{2} v e^{r x}=(D-r) v^{\prime} e^{r x}=v^{\prime \prime} e^{r x} .\nonumber \] So, if \(y_{2}(x)\) is to be a solution to the differential equation, then \(v^{\prime \prime}(x) e^{r x}=0\) for all \(x\). So, \(v^{\prime \prime}(x)=0\), which implies that \[v(x)=a x+b .\nonumber \] So, \[y_{2}(x)=(a x+b) e^{r x} .\nonumber \] Without loss of generality, we can take \(b=0\) and \(a=1\) to obtain the second linearly independent solution, \(y_{2}(x)=x e^{r x}\). The general solution is then \[y(x)=c_{1} e^{r x}+c_{2} x e^{r x} .\nonumber \]
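That \(y_{2}(x)=x e^{r x}\) satisfies \((D-r)^{2} y=0\) for an arbitrary symbolic \(r\) can be confirmed directly; the short SymPy sketch below is an optional check, not part of the text.

```python
import sympy as sp

x, r = sp.symbols('x r')

# Claimed second solution for a repeated root r.
y2 = x * sp.exp(r*x)

# Expanding (D - r)^2 y gives y'' - 2r y' + r^2 y; the residual should vanish.
residual = sp.simplify(sp.diff(y2, x, 2) - 2*r*sp.diff(y2, x) + r**2*y2)
```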

###### Example \(\PageIndex{2}\)

Solve \(y^{\prime \prime}+6 y^{\prime}+9 y=0\).

###### Solution

In this example we have \(r^{2}+6 r+9=0\). There is only one root, \(r=-3\). From the above discussion, we easily find the solution \(y(x)=\left(c_{1}+c_{2} x\right) e^{-3 x}\).

When one has complex roots in the solution of constant coefficient equations, one needs to look at the solutions \[y_{1,2}(x)=e^{(\alpha \pm i \beta) x} .\nonumber \] We make use of Euler’s formula (see Chapter 6 for more on complex variables) \[e^{i \beta x}=\cos \beta x+i \sin \beta x .\label{eq:8}\] Then, the linear combination of \(y_{1}(x)\) and \(y_{2}(x)\) becomes \[\begin{align} A e^{(\alpha+i \beta) x}+B e^{(\alpha-i \beta) x} &=e^{\alpha x}\left[A e^{i \beta x}+B e^{-i \beta x}\right]\nonumber \\ &=e^{\alpha x}[(A+B) \cos \beta x+i(A-B) \sin \beta x]\nonumber \\ & \equiv e^{\alpha x}\left(c_{1} \cos \beta x+c_{2} \sin \beta x\right) .\label{eq:9} \end{align}\] Thus, we see that we have a linear combination of two real, linearly independent solutions, \(e^{\alpha x} \cos \beta x\) and \(e^{\alpha x} \sin \beta x\).

Complex roots, \(r=\alpha \pm i \beta\), give solutions of the form \[y(x)=e^{\alpha x}\left(c_{1} \cos \beta x+c_{2} \sin \beta x\right) .\nonumber \]

###### Example \(\PageIndex{3}\)

Solve \(y^{\prime \prime}+4 y=0\).

###### Solution

The characteristic equation in this case is \(r^{2}+4=0\). The roots are pure imaginary roots, \(r=\pm 2 i\), and the general solution consists purely of sinusoidal functions, \(y(x)=c_{1} \cos (2 x)+c_{2} \sin (2 x)\), since \(\alpha=0\) and \(\beta=2\).

###### Example \(\PageIndex{4}\)

Solve \(y^{\prime \prime}+2 y^{\prime}+4 y=0\).

###### Solution

The characteristic equation in this case is \(r^{2}+2 r+4=0\). The roots are complex, \(r=-1 \pm \sqrt{3} i\) and the general solution can be written as \[y(x)=\left[c_{1} \cos (\sqrt{3} x)+c_{2} \sin (\sqrt{3} x)\right] e^{-x} .\nonumber \]
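One can confirm that both real solutions read off from the complex roots \(r=-1 \pm \sqrt{3} i\) satisfy the equation; the SymPy sketch below is an optional check, with the helper `residual` introduced purely for illustration.

```python
import sympy as sp

x = sp.symbols('x')

# The two real solutions extracted from the complex roots r = -1 ± i*sqrt(3).
y1 = sp.exp(-x) * sp.cos(sp.sqrt(3)*x)
y2 = sp.exp(-x) * sp.sin(sp.sqrt(3)*x)

def residual(f):
    """Left-hand side of y'' + 2y' + 4y evaluated at f; zero for a solution."""
    return sp.simplify(sp.diff(f, x, 2) + 2*sp.diff(f, x) + 4*f)
```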

###### Example \(\PageIndex{5}\)

Solve \(y^{\prime \prime}+4 y=\sin x\).

###### Solution

This is an example of a nonhomogeneous problem. The homogeneous problem was actually solved in Example \(\PageIndex{3}\). According to the theory, we need only seek a particular solution to the nonhomogeneous problem and add it to the solution of the last example to get the general solution.

The particular solution can be obtained by purely guessing, making an educated guess, or using the Method of Variation of Parameters. We will not review all of these techniques at this time. Due to the simple form of the driving term, we will make an intelligent guess of \(y_{p}(x)=A \sin x\) and determine what \(A\) needs to be. Inserting this guess into the differential equation gives \((-A+4 A) \sin x=\sin x\). So, we see that \(A=1 / 3\) works. The general solution of the nonhomogeneous problem is therefore \(y(x)=c_{1} \cos (2 x)+c_{2} \sin (2 x)+\frac{1}{3} \sin x\).
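The coefficient-matching step in this educated guess can be automated. The SymPy sketch below, an illustrative aside rather than part of the method, substitutes \(y_{p}=A \sin x\) and solves for \(A\):

```python
import sympy as sp

x, A = sp.symbols('x A')

# Guess y_p = A sin x for y'' + 4y = sin x.
yp = A * sp.sin(x)

# Residual after substitution: (-A + 4A - 1) sin x = (3A - 1) sin x.
residual = sp.expand(sp.diff(yp, x, 2) + 4*yp - sp.sin(x))

# Solving 3A - 1 = 0 recovers A = 1/3.
A_val = sp.solve(residual, A)[0]
```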

The three cases for constant coefficient linear second order differential equations are summarized below.

- **Real, distinct roots** \(r_{1}, r_{2}\). In this case the solutions corresponding to each root are linearly independent. Therefore, the general solution is simply \(y(x)=c_{1} e^{r_{1} x}+c_{2} e^{r_{2} x}\).
- **Real, equal roots** \(r_{1}=r_{2}=r\). In this case the solutions corresponding to each root are linearly dependent. To find a second linearly independent solution, one uses the Method of Reduction of Order. This gives the second solution as \(x e^{r x}\). Therefore, the general solution is found as \(y(x)=\left(c_{1}+c_{2} x\right) e^{r x}\).
- **Complex conjugate roots** \(r_{1}, r_{2}=\alpha \pm i \beta\). In this case the solutions corresponding to each root are linearly independent. Making use of Euler’s identity, \(e^{i \theta}=\cos (\theta)+i \sin (\theta)\), these complex exponentials can be rewritten in terms of trigonometric functions. Namely, one has that \(e^{\alpha x} \cos (\beta x)\) and \(e^{\alpha x} \sin (\beta x)\) are two linearly independent solutions. Therefore, the general solution becomes \(y(x)=e^{\alpha x}\left(c_{1} \cos (\beta x)+c_{2} \sin (\beta x)\right)\).