
2.1: Linear Second Order Homogeneous Equations


    A second order differential equation is said to be linear if it can be written as

    \begin{equation}\label{eq:2.1.1}
    y''+p(x)y'+q(x)y=f(x).
    \end{equation}

    We call the function \(f\) on the right a forcing function, since in physical applications it's often related to a force acting on some system modeled by the differential equation. We say that \eqref{eq:2.1.1} is \( \textcolor{blue}{\mbox{homogeneous}} \) if \(f \equiv 0\) or \( \textcolor{blue}{\mbox{nonhomogeneous}} \) if \(f\not\equiv 0\). Since these definitions are like the corresponding definitions for the linear first order equation

    \begin{equation}\label{eq:2.1.2}
    y'+p(x)y=f(x),
    \end{equation}

    it's natural to expect similarities between methods of solving \eqref{eq:2.1.1} and \eqref{eq:2.1.2}. However, solving \eqref{eq:2.1.1} is more difficult than solving \eqref{eq:2.1.2}. For example, while the theory of first order linear equations provides formulas for the general solution of \eqref{eq:2.1.2} both when \(f\equiv0\) and when \(f\not\equiv0\), there are no formulas for the general solution of \eqref{eq:2.1.1} in either case. Therefore we must be content to solve linear second order equations of special forms.

    For the linear first order equation we considered the homogeneous equation \(y'+p(x)y=0\) first, and then used a nontrivial solution of this equation to find the general solution of the nonhomogeneous equation \(y'+p(x)y=f(x)\). Although the progression from the homogeneous to the nonhomogeneous case isn't that simple for the linear second order
    equation, it's still necessary to solve the homogeneous equation

    \begin{equation}\label{eq:2.1.3}
    y''+p(x)y'+q(x)y=0
    \end{equation}

    in order to solve the nonhomogeneous equation \eqref{eq:2.1.1}. This section is devoted to \eqref{eq:2.1.3}.

    The next theorem gives sufficient conditions for existence and uniqueness of solutions of initial value problems for \eqref{eq:2.1.3}. We omit the proof.

    Theorem \(\PageIndex{1}\)

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(x_0\) be any point in \((a,b),\) and let \(k_0\) and \(k_1\) be arbitrary real numbers\(.\) Then the initial value problem

    \begin{eqnarray*}
    y''+p(x)y'+q(x)y=0,\ y(x_0)=k_0,\ y'(x_0)=k_1
    \end{eqnarray*}

    has a unique solution on \((a,b).\)


    Since \(y\equiv0\) is obviously a solution of \eqref{eq:2.1.3} we call it the \( \textcolor{blue}{\mbox{trivial}} \) solution. Any other solution is \( \textcolor{blue}{\mbox{nontrivial}} \). Under the assumptions of Theorem \((2.1.1)\), the only solution of the initial value problem

    \begin{eqnarray*}
    y''+p(x)y'+q(x)y=0,\ y(x_0)=0,\ y'(x_0)=0
    \end{eqnarray*}

    on \((a,b)\) is the trivial solution. (Exercise \((2.1E.24)\)).

    The next three examples illustrate concepts that we'll develop later in this section. You shouldn't be concerned with how to \( \textcolor{blue}{\mbox{find}} \) the given solutions of the equations in these examples. This will be explained in later sections.

    Example \(\PageIndex{1}\)

    The coefficients of \(y'\) and \(y\) in

    \begin{equation}\label{eq:2.1.4}
    y''-y=0
    \end{equation}

    are the constant functions \(p\equiv0\) and \(q\equiv-1\), which are continuous on \((-\infty,\infty)\). Therefore Theorem \((2.1.1)\) implies that every initial value problem for \eqref{eq:2.1.4} has a unique solution on \((-\infty,\infty)\).

    (a) Verify that \(y_1=e^x\) and \(y_2=e^{-x}\) are solutions of \eqref{eq:2.1.4} on \((-\infty,\infty)\).

    (b) Verify that if \(c_1\) and \(c_2\) are arbitrary constants, \(y=c_1e^x+c_2e^{-x}\) is a solution of \eqref{eq:2.1.4} on \((-\infty,\infty)\).

    (c) Solve the initial value problem

    \begin{equation}\label{eq:2.1.5}
    y''-y=0,\quad y(0)=1,\quad y'(0)=3.
    \end{equation}

    Answer

    (a) If \(y_1=e^x\) then \(y_1'=e^x\) and \(y_1''=e^x=y_1\), so \(y_1''-y_1=0\). If \(y_2=e^{-x}\), then \(y_2'=-e^{-x}\) and \(y_2''=e^{-x}=y_2\), so \(y_2''-y_2=0\).

    (b) If

    \begin{equation}\label{eq:2.1.6}
    y=c_1e^x+c_2e^{-x}
    \end{equation}

    then

    \begin{equation}\label{eq:2.1.7}
    y'=c_1e^x-c_2e^{-x}
    \end{equation}

    and

    \begin{eqnarray*}
    y''=c_1e^x+c_2e^{-x},
    \end{eqnarray*}

    so

    \begin{eqnarray*}
    y''-y&=&(c_1e^x+c_2e^{-x})-(c_1e^x+c_2e^{-x})\\
    &=&c_1(e^x-e^x)+c_2(e^{-x}-e^{-x})=0
    \end{eqnarray*}

    for all \(x\). Therefore \(y=c_1e^x+c_2e^{-x}\) is a solution of \eqref{eq:2.1.4} on \((-\infty,\infty)\).

    (c) We can solve \eqref{eq:2.1.5} by choosing \(c_1\) and \(c_2\) in \eqref{eq:2.1.6} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in \eqref{eq:2.1.6} and \eqref{eq:2.1.7} shows that this is equivalent to

    \begin{eqnarray*}
    c_1+c_2&=&1\\
    c_1-c_2&=&3.
    \end{eqnarray*}

    Solving these equations yields \(c_1=2\) and \(c_2=-1\). Therefore \(y=2e^x-e^{-x}\) is the unique solution of \eqref{eq:2.1.5} on \((-\infty,\infty)\).
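    The solution of \eqref{eq:2.1.5} is easy to spot-check numerically. The following short Python sketch (an editorial addition, not part of the original text; the sample points are arbitrary) evaluates \(y=2e^x-e^{-x}\) and its derivatives directly:

    ```python
    import math

    # Candidate solution of y'' - y = 0, y(0) = 1, y'(0) = 3 from Example 1(c)
    def y(x):   return 2*math.exp(x) - math.exp(-x)
    def yp(x):  return 2*math.exp(x) + math.exp(-x)   # y'
    def ypp(x): return 2*math.exp(x) - math.exp(-x)   # y'' (equals y)

    assert abs(y(0) - 1) < 1e-12 and abs(yp(0) - 3) < 1e-12   # initial conditions
    for x in (-2.0, -0.5, 0.0, 1.3, 4.0):                     # arbitrary sample points
        assert abs(ypp(x) - y(x)) < 1e-9                      # residual of y'' - y = 0
    print("checks passed")
    ```

    Of course, this only samples finitely many points; the symbolic verification in parts (a) and (b) is what proves the result.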

    Example \(\PageIndex{2}\)

    Let \(\omega\) be a positive constant. The coefficients of \(y'\) and \(y\) in

    \begin{equation}\label{eq:2.1.8}
    y''+\omega^2y=0
    \end{equation}

    are the constant functions \(p\equiv0\) and \(q\equiv\omega^2\), which are continuous on \((-\infty,\infty)\). Therefore Theorem \((2.1.1)\) implies that every initial value problem for \eqref{eq:2.1.8} has a unique solution on \((-\infty,\infty)\).

    (a) Verify that \(y_1=\cos\omega x\) and \(y_2=\sin\omega x\) are solutions of \eqref{eq:2.1.8} on \((-\infty,\infty)\).

    (b) Verify that if \(c_1\) and \(c_2\) are arbitrary constants then \(y=c_1\cos\omega x+c_2\sin\omega x\) is a solution of \eqref{eq:2.1.8} on \((-\infty,\infty)\).

    (c) Solve the initial value problem

    \begin{equation}\label{eq:2.1.9}
    y''+\omega^2y=0,\quad y(0)=1,\quad y'(0)=3.
    \end{equation}

    Answer

    (a) If \(y_1=\cos\omega x\) then \(y_1'=-\omega\sin\omega x\) and \(y_1''=-\omega^2\cos\omega x=-\omega^2y_1\), so \(y_1''+\omega^2y_1=0\). If \(y_2=\sin\omega x\) then \(y_2'=\omega\cos\omega x\) and \(y_2''=-\omega^2\sin\omega x=-\omega^2y_2\), so \(y_2''+\omega^2y_2=0\).

    (b) If

    \begin{equation}\label{eq:2.1.10}
    y=c_1\cos\omega x+c_2\sin\omega x
    \end{equation}

    then

    \begin{equation}\label{eq:2.1.11}
    y'=\omega(-c_1\sin\omega x+c_2\cos\omega x)
    \end{equation}

    and

    \begin{eqnarray*}
    y''=-\omega^2(c_1\cos\omega x+c_2\sin\omega x),
    \end{eqnarray*}

    so

    \begin{eqnarray*}
    y''+\omega^2y&=& -\omega^2(c_1\cos\omega x+c_2\sin\omega x)
    +\omega^2(c_1\cos\omega x+c_2\sin\omega x)\\
    &=&c_1\omega^2(-\cos\omega x+\cos\omega x)+
    c_2\omega^2(-\sin\omega x+\sin\omega x)=0
    \end{eqnarray*}

    for all \(x\). Therefore \(y=c_1\cos\omega x+c_2\sin\omega x\) is a solution of \eqref{eq:2.1.8} on \((-\infty,\infty)\).

    (c) To solve \eqref{eq:2.1.9}, we must choose \(c_1\) and \(c_2\) in \eqref{eq:2.1.10} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in \eqref{eq:2.1.10} and \eqref{eq:2.1.11} shows that \(c_1=1\) and \(c_2=3/\omega\). Therefore

    \begin{eqnarray*}
    y=\cos\omega x+{3\over\omega}\sin\omega x
    \end{eqnarray*}

    is the unique solution of \eqref{eq:2.1.9} on \((-\infty,\infty)\).
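    The formula for the solution of \eqref{eq:2.1.9} can be spot-checked the same way. In this Python sketch (an editorial addition; the value \(\omega=2\) and the sample points are arbitrary choices), the derivatives are written out by hand:

    ```python
    import math

    w = 2.0   # omega; any positive value works, 2 is an arbitrary choice

    def y(x):   return math.cos(w*x) + (3/w)*math.sin(w*x)
    def yp(x):  return -w*math.sin(w*x) + 3*math.cos(w*x)      # y'
    def ypp(x): return -w*w*math.cos(w*x) - 3*w*math.sin(w*x)  # y''

    assert abs(y(0) - 1) < 1e-12 and abs(yp(0) - 3) < 1e-12    # initial conditions
    for x in (-1.0, 0.3, 2.5):                                 # arbitrary sample points
        assert abs(ypp(x) + w*w*y(x)) < 1e-9                   # residual of y'' + w^2 y = 0
    print("checks passed")
    ```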

    Theorem \((2.1.1)\) implies that if \(k_0\) and \(k_1\) are arbitrary real numbers then the initial value problem

    \begin{equation}\label{eq:2.1.12}
    P_0(x)y''+P_1(x)y'+P_2(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1
    \end{equation}

    has a unique solution on an interval \((a,b)\) that contains \(x_0\), provided that \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros on \((a,b)\). To see this, we rewrite the differential equation in \eqref{eq:2.1.12} as

    \begin{eqnarray*}
    y''+{P_1(x)\over P_0(x)}y'+{P_2(x)\over P_0(x)}y=0
    \end{eqnarray*}

    and apply Theorem \((2.1.1)\) with \(p=P_1/P_0\) and \(q=P_2/P_0\).

    Example \(\PageIndex{3}\)

    The equation

    \begin{equation}\label{eq:2.1.13}
    x^2y''+xy'-4y=0
    \end{equation}

    has the form of the differential equation in \eqref{eq:2.1.12}, with \(P_0(x)=x^2\), \(P_1(x)=x\), and \(P_2(x)=-4\), which are all continuous on \((-\infty,\infty)\). However, since \(P_0(0)=0\) we must consider solutions of \eqref{eq:2.1.13} on \((-\infty,0)\) and \((0,\infty)\). Since \(P_0\) has no zeros on these intervals, Theorem \((2.1.1)\) implies that the initial value problem

    \begin{eqnarray*}
    x^2y''+xy'-4y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1
    \end{eqnarray*}

    has a unique solution on \((0,\infty)\) if \(x_0>0\), or on \((-\infty,0)\) if \(x_0<0\).

    (a) Verify that \(y_1=x^2\) is a solution of \eqref{eq:2.1.13} on \((-\infty,\infty)\) and \(y_2=1/x^2\) is a solution of \eqref{eq:2.1.13} on \((-\infty,0)\) and \((0,\infty)\).

    (b) Verify that if \(c_1\) and \(c_2\) are any constants then \(y=c_1x^2+c_2/x^2\) is a solution of \eqref{eq:2.1.13} on \((-\infty,0)\) and \((0,\infty)\).

    (c) Solve the initial value problem

    \begin{equation}\label{eq:2.1.14}
    x^2y''+xy'-4y=0,\quad y(1)=2,\quad y'(1)=0.
    \end{equation}

    (d) Solve the initial value problem

    \begin{equation}\label{eq:2.1.15}
    x^2y''+xy'-4y=0,\quad y(-1)=2,\quad y'(-1)=0.
    \end{equation}

    Answer

    (a) If \(y_1=x^2\) then \(y_1'=2x\) and \(y_1''=2\), so

    \begin{eqnarray*}
    x^2y_1''+xy_1'-4y_1=x^2(2)+x(2x)-4x^2=0
    \end{eqnarray*}

    for \(x\) in \((-\infty,\infty)\). If \(y_2=1/x^2\), then \(y_2'=-2/x^3\) and \(y_2''=6/x^4\), so

    \begin{eqnarray*}
    x^2y_2''+xy_2'-4y_2=x^2\left(6\over x^4\right)-x\left(2\over x^3\right)-{4\over x^2}=0
    \end{eqnarray*}

    for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    (b) If

    \begin{equation}\label{eq:2.1.16}
    y=c_1x^2+{c_2\over x^2}
    \end{equation}

    then

    \begin{equation}\label{eq:2.1.17}
    y'=2c_1x-{2c_2\over x^3}
    \end{equation}

    and

    \begin{eqnarray*}
    y''=2c_1+{6c_2\over x^4},
    \end{eqnarray*}

    so

    \begin{eqnarray*}
    x^2y''+xy'-4y&=&x^2\displaystyle{\left(2c_1+{6c_2\over x^4}\right)} +x\displaystyle{\left(2c_1x-{2c_2\over x^3}\right)} -4\displaystyle{\left(c_1x^2+{c_2\over x^2}\right)}\\
    &=&c_1(2x^2+2x^2-4x^2) +c_2\displaystyle{\left({6\over x^2}-{2\over x^2}-{4\over x^2}\right)}\\
    &=&c_1\cdot0+c_2\cdot0=0
    \end{eqnarray*}

    for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    (c) To solve \eqref{eq:2.1.14}, we choose \(c_1\) and \(c_2\) in \eqref{eq:2.1.16} so that \(y(1)=2\) and \(y'(1)=0\). Setting \(x=1\) in \eqref{eq:2.1.16} and \eqref{eq:2.1.17} shows that this is equivalent to

    \begin{eqnarray*}
    \phantom{2}c_1+\phantom{2}c_2&=&2\\
    2c_1-2c_2&=&0.
    \end{eqnarray*}

    Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of \eqref{eq:2.1.14} on \((0,\infty)\).

    (d) We can solve \eqref{eq:2.1.15} by choosing \(c_1\) and \(c_2\) in \eqref{eq:2.1.16} so that \(y(-1)=2\) and \(y'(-1)=0\). Setting \(x=-1\) in \eqref{eq:2.1.16} and \eqref{eq:2.1.17} shows that this is equivalent to

    \begin{eqnarray*}
    \phantom{-2}c_1+\phantom{2}c_2&=&2\\
    -2c_1+2c_2&=&0.
    \end{eqnarray*}

    Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of \eqref{eq:2.1.15} on \((-\infty,0)\).
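    Since the same formula \(y=x^2+1/x^2\) appears in both (c) and (d), one Python sketch (an editorial addition; sample points arbitrary) can check the differential equation on both intervals along with both sets of initial conditions:

    ```python
    def y(x):   return x*x + 1/x**2
    def yp(x):  return 2*x - 2/x**3
    def ypp(x): return 2 + 6/x**4

    # Residual of x^2 y'' + x y' - 4y = 0 on (-inf, 0) and (0, inf)
    for x in (-3.0, -0.5, 0.25, 1.0, 4.0):
        assert abs(x*x*ypp(x) + x*yp(x) - 4*y(x)) < 1e-9

    assert y(1) == 2 and yp(1) == 0      # initial conditions of (2.1.14)
    assert y(-1) == 2 and yp(-1) == 0    # initial conditions of (2.1.15)
    print("checks passed")
    ```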

    Although the \( \textcolor{blue}{\mbox{formulas}} \) for the solutions of \eqref{eq:2.1.14} and \eqref{eq:2.1.15} are both \(y=x^2+1/x^2\), you should not conclude that these two initial value problems have the same solution. Remember that a solution of an initial value problem is defined \( \textcolor{blue}{\mbox{on an interval that contains the initial point}} \); therefore, the solution of \eqref{eq:2.1.14} is \(y=x^2+1/x^2\) \( \textcolor{blue}{\mbox{on the interval}} \) \((0,\infty)\), which contains the initial point \(x_0=1\), while the solution of \eqref{eq:2.1.15} is \(y=x^2+1/x^2\) \( \textcolor{blue}{\mbox{on the interval}} \) \((-\infty,0)\), which contains the initial point \(x_0=-1\).

    The General Solution of a Homogeneous Linear Second Order Equation

    If \(y_1\) and \(y_2\) are defined on an interval \((a,b)\) and \(c_1\) and \(c_2\) are constants, then

    \begin{eqnarray*}
    y=c_1y_1+c_2y_2
    \end{eqnarray*}

    is a \( \textcolor{blue}{\mbox{linear combination of \(y_1\) and \(y_2\)}} \). For example, \(y=2\cos x+7 \sin x\) is a linear combination of \(y_1= \cos x\) and \(y_2=\sin x\), with \(c_1=2\) and \(c_2=7\).

    The next theorem states a fact that we've already verified in Examples \((2.1.1)\), \((2.1.2)\), and \((2.1.3)\).

    Theorem \(\PageIndex{2}\)

    If \(y_1\) and \(y_2\) are solutions of the homogeneous equation

    \begin{equation}\label{eq:2.1.18}
    y''+p(x)y'+q(x)y=0
    \end{equation}

    on \((a,b),\) then any linear combination

    \begin{equation}\label{eq:2.1.19}
    y=c_1y_1+c_2y_2
    \end{equation}

    of \(y_1\) and \(y_2\) is also a solution of \eqref{eq:2.1.18} on \((a,b).\)

    Proof

    If

    \begin{eqnarray*}
    y=c_1y_1+c_2y_2
    \end{eqnarray*}

    then

    \begin{eqnarray*}
    y'=c_1y_1'+c_2y_2'\mbox{ and } y''=c_1y_1''+c_2y_2''.
    \end{eqnarray*}

    Therefore

    \begin{eqnarray*}
    y''+p(x)y'+q(x)y&=&(c_1y_1''+c_2y_2'')+p(x)(c_1y_1'+c_2y_2') +q(x)(c_1y_1+c_2y_2)\\
    &=&c_1\left(y_1''+p(x)y_1'+q(x)y_1\right) +c_2\left(y_2''+p(x)y_2'+q(x)y_2\right)\\
    &=&c_1\cdot0+c_2\cdot0=0,
    \end{eqnarray*}

    since \(y_1\) and \(y_2\) are solutions of \eqref{eq:2.1.18}.

    We say that \(\{y_1,y_2\}\) is a \( \textcolor{blue}{\mbox{fundamental set of solutions of \(\eqref{eq:2.1.18}\) on}} \) \((a,b)\) if every solution of \eqref{eq:2.1.18} on \((a,b)\) can be written as a linear combination of \(y_1\) and \(y_2\) as in \eqref{eq:2.1.19}. In this case we say that \eqref{eq:2.1.19} is the \( \textcolor{blue}{\mbox{general solution of \(\eqref{eq:2.1.18}\) on}} \) \((a,b)\).

    Linear Independence

    We need a way to determine whether a given set \(\{y_1,y_2\}\) of solutions of \eqref{eq:2.1.18} is a fundamental set. The next definition will enable us to state necessary and
    sufficient conditions for this.

    We say that two functions \(y_1\) and \(y_2\) defined on an interval \((a,b)\) are \( \textcolor{blue}{\mbox{linearly independent on}} \) \((a,b)\) if neither is a constant multiple of the other on \((a,b)\). (In particular, this means that neither can be the trivial solution of \eqref{eq:2.1.18}, since, for example, if \(y_1\equiv0\) we could write \(y_1=0y_2\).) We'll also say that the set \(\{y_1,y_2\}\) \( \textcolor{blue}{\mbox{is linearly independent on}} \) \((a,b)\).

    Theorem \(\PageIndex{3}\)

    Suppose \(p\) and \(q\) are continuous on \((a,b).\) Then a set \(\{y_1,y_2\}\) of solutions of

    \begin{equation}\label{eq:2.1.20}
    y''+p(x)y'+q(x)y=0
    \end{equation}

    on \((a,b)\) is a fundamental set if and only if \(\{y_1,y_2\}\) is linearly independent on \((a,b).\)


    We'll present the proof of Theorem \((2.1.3)\) in steps worth regarding as theorems in their own right. However, let's first interpret Theorem \((2.1.3)\) in terms of Examples \((2.1.1)\), \((2.1.2)\), and \((2.1.3)\).

    Example \(\PageIndex{4}\):

    (a) Since \(e^x/e^{-x}=e^{2x}\) is nonconstant, Theorem \((2.1.3)\) implies that \(y=c_1e^x+c_2e^{-x}\) is the general solution of \(y''-y=0\) on \((-\infty,\infty)\).

    (b) Since \(\cos\omega x/\sin\omega x=\cot\omega x\) is nonconstant, Theorem \((2.1.3)\) implies that \(y=c_1\cos\omega x+c_2\sin\omega x\) is the general solution
    of \(y''+\omega^2y=0\) on \((-\infty,\infty)\).

    (c) Since \(x^2/x^{-2}=x^4\) is nonconstant, Theorem \((2.1.3)\) implies that \(y=c_1x^2+c_2/x^2\) is the general solution of \(x^2y''+xy'-4y=0\) on \((-\infty,0)\) and \((0,\infty)\).

    The Wronskian and Abel's Formula

    To motivate a result that we need in order to prove Theorem \((2.1.3)\), let's see what is required to prove that \(\{y_1,y_2\}\) is a fundamental set of solutions of \eqref{eq:2.1.20} on \((a,b)\). Let \(x_0\) be an arbitrary point in \((a,b)\), and suppose \(y\) is an arbitrary solution of \eqref{eq:2.1.20} on \((a,b)\). Then \(y\) is the unique solution of the initial value problem

    \begin{equation}\label{eq:2.1.21}
    y''+p(x)y'+q(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1;
    \end{equation}

    that is, \(k_0\) and \(k_1\) are the numbers obtained by evaluating \(y\) and \(y'\) at \(x_0\). Moreover, \(k_0\) and \(k_1\) can be any real numbers, since Theorem \((2.1.1)\) implies that \eqref{eq:2.1.21} has a solution no matter how \(k_0\) and \(k_1\) are chosen. Therefore \(\{y_1,y_2\}\) is a fundamental set of solutions of \eqref{eq:2.1.20} on \((a,b)\) if and only if it's possible to write the solution of an arbitrary initial value problem \eqref{eq:2.1.21} as \(y=c_1y_1+c_2y_2\). This is equivalent to requiring that the system

    \begin{equation}\label{eq:2.1.22}
    \begin{array}{rcl}
    c_1y_1(x_0)+c_2y_2(x_0)&=&k_0\\
    c_1y_1'(x_0)+c_2y_2'(x_0)&=&k_1
    \end{array}
    \end{equation}

    has a solution \((c_1,c_2)\) for every choice of \((k_0,k_1)\). Let's try to solve \eqref{eq:2.1.22}.

    Multiplying the first equation in \eqref{eq:2.1.22} by \(y_2'(x_0)\) and the second by \(y_2(x_0)\) yields

    \begin{eqnarray*}
    c_1y_1(x_0)y_2'(x_0)+c_2y_2(x_0)y_2'(x_0)&=& y_2'(x_0)k_0\\
    c_1y_1'(x_0)y_2(x_0)+c_2y_2'(x_0)y_2(x_0)&=& y_2(x_0)k_1,
    \end{eqnarray*}

    and subtracting the second equation here from the first yields

    \begin{equation}\label{eq:2.1.23}
    \left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_1= y_2'(x_0)k_0-y_2(x_0)k_1.
    \end{equation}

    Multiplying the first equation in \eqref{eq:2.1.22} by \(y_1'(x_0)\) and the second by \(y_1(x_0)\) yields

    \begin{eqnarray*}
    c_1y_1(x_0)y_1'(x_0)+c_2y_2(x_0)y_1'(x_0)&=& y_1'(x_0)k_0\\
    c_1y_1'(x_0)y_1(x_0)+c_2y_2'(x_0)y_1(x_0)&=& y_1(x_0)k_1,
    \end{eqnarray*}

    and subtracting the first equation here from the second yields

    \begin{equation}\label{eq:2.1.24}
    \left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_2=y_1(x_0)k_1-y_1'(x_0)k_0.
    \end{equation}

    If

    \begin{eqnarray*}
    y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)=0,
    \end{eqnarray*}

    it's impossible to satisfy \eqref{eq:2.1.23} and \eqref{eq:2.1.24} (and therefore \eqref{eq:2.1.22}) unless \(k_0\) and \(k_1\) happen to satisfy

    \begin{eqnarray*}
    y_1(x_0)k_1-y_1'(x_0)k_0&=&0\\
    y_2'(x_0)k_0-y_2(x_0)k_1&=&0.
    \end{eqnarray*}

    On the other hand, if

    \begin{equation}\label{eq:2.1.25}
    y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\ne0
    \end{equation}

    we can divide \eqref{eq:2.1.23} and \eqref{eq:2.1.24} through by the quantity on the left to obtain

    \begin{equation}\label{eq:2.1.26}
    \begin{array}{rcl}
    c_1&=&\displaystyle{y_2'(x_0)k_0-y_2(x_0)k_1\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}\\
    c_2&=&\displaystyle{y_1(x_0)k_1-y_1'(x_0)k_0\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)},
    \end{array}
    \end{equation}

    no matter how \(k_0\) and \(k_1\) are chosen. This motivates us to consider conditions on \(y_1\) and \(y_2\) that imply \eqref{eq:2.1.25}.
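    For a concrete instance of \eqref{eq:2.1.26}, take the data of Example \((2.1.1)\)(c): \(y_1=e^x\), \(y_2=e^{-x}\), \(x_0=0\), \(k_0=1\), \(k_1=3\). The following Python sketch (an editorial addition) evaluates the two formulas:

    ```python
    import math

    # Data from Example 1(c): y1 = e^x, y2 = e^{-x}, x0 = 0, k0 = 1, k1 = 3
    y1, y1p = math.exp(0.0),  math.exp(0.0)    # y1(x0), y1'(x0)
    y2, y2p = math.exp(-0.0), -math.exp(-0.0)  # y2(x0), y2'(x0)
    k0, k1 = 1.0, 3.0

    W = y1*y2p - y1p*y2          # the quantity in (2.1.25); here -2, nonzero
    c1 = (y2p*k0 - y2*k1) / W    # first formula in (2.1.26)
    c2 = (y1*k1 - y1p*k0) / W    # second formula in (2.1.26)
    print(c1, c2)                # 2.0 -1.0, agreeing with Example 1(c)
    ```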

    Theorem \(\PageIndex{4}\)

    Suppose \(p\) and \(q\) are continuous on \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

    \begin{equation}\label{eq:2.1.27}
    y''+p(x)y'+q(x)y=0
    \end{equation}

    on \((a,b)\), and define

    \begin{equation}\label{eq:2.1.28}
    W=y_1y_2'-y_1'y_2.
    \end{equation}

    Let \(x_0\) be any point in \((a,b).\) Then

    \begin{equation} \label{eq:2.1.29}
    W(x)=W(x_0) e^{-\int^x_{x_0}p(t)\, dt}, \quad a<x<b.
    \end{equation}

    Therefore either \(W\) has no zeros in \((a,b)\) or \(W\equiv0\) on \((a,b).\)

    Proof

    Differentiating \eqref{eq:2.1.28} yields

    \begin{equation}\label{eq:2.1.30}
    W'=y'_1y'_2+y_1y''_2-y'_1y'_2-y''_1y_2=y_1y''_2-y''_1y_2.
    \end{equation}

    Since \(y_1\) and \(y_2\) both satisfy \eqref{eq:2.1.27},

    \begin{eqnarray*}
    y''_1 =-py'_1-qy_1\mbox{ and } y''_2 =-py'_2-qy_2.
    \end{eqnarray*}

    Substituting these into \eqref{eq:2.1.30} yields

    \begin{eqnarray*}
    W'&=& \displaystyle -y_1\bigl(py'_2+qy_2\bigr) +y_2\bigl(py'_1+qy_1\bigr) \\
    &=& \displaystyle -p(y_1y'_2-y_2y'_1)-q(y_1y_2-y_2y_1)\\
    &=& -p(y_1y'_2-y_2y'_1)=-pW.
    \end{eqnarray*}

    Therefore \(W'+p(x)W=0\); that is, \(W\) is the solution of the initial value problem

    \begin{eqnarray*}
    y'+p(x)y=0,\quad y(x_0)=W(x_0).
    \end{eqnarray*}

    We leave it to you to verify by separation of variables that this implies \eqref{eq:2.1.29}. If \(W(x_0)\ne0\), \eqref{eq:2.1.29} implies that \(W\) has no zeros in \((a,b)\), since an exponential is never zero. On the other hand, if \(W(x_0)=0\), \eqref{eq:2.1.29} implies that \(W(x)=0\) for all \(x\) in \((a,b)\).

    The function \(W\) defined in \eqref{eq:2.1.28} is the \( \textcolor{blue}{\mbox{Wronskian}} \) of \(\{y_1,y_2\}\). Formula \eqref{eq:2.1.29} is \( \textcolor{blue}{\mbox{Abel's formula}} \).

    The Wronskian of \(\{y_1,y_2\}\) is usually written as the determinant

    \begin{eqnarray*}
    W=\left| \begin{array}{cc}
    y_1 & y_2 \\
    y'_1 & y'_2
    \end{array} \right|.
    \end{eqnarray*}

    The expressions in \eqref{eq:2.1.26} for \(c_1\) and \(c_2\) can be written in terms of determinants as

    \begin{eqnarray*}
    c_1={1\over W(x_0)}
    \left| \begin{array}{cc}
    k_0 & y_2(x_0) \\
    k_1 & y'_2(x_0)
    \end{array} \right|
    \mbox{ and }
    c_2={1\over W(x_0)}
    \left| \begin{array}{cc}
    y_1(x_0) & k_0 \\
    y'_1(x_0) &k_1
    \end{array} \right|.
    \end{eqnarray*}

    If you've taken linear algebra you may recognize this as Cramer's rule.

    Example \(\PageIndex{5}\)

    Verify Abel's formula for the following differential equations and the corresponding solutions, from Examples \((2.1.1)\), \((2.1.2)\), and \((2.1.3)\):

    (a) \(y''-y=0;\quad y_1=e^x,\; y_2=e^{-x}\)

    (b) \(y''+\omega^2y=0;\quad y_1=\cos\omega x,\; y_2=\sin\omega x\)

    (c) \(x^2y''+xy'-4y=0;\quad y_1=x^2,\; y_2=1/x^2\)

    Answer

    (a) Since \(p\equiv0\), we can verify Abel's formula by showing that \(W\) is constant, which is true, since

    \begin{eqnarray*}
    W(x)=\left| \begin{array}{rr}
    e^x & e^{-x} \\
    e^x & -e^{-x}
    \end{array} \right|=e^x(-e^{-x})-e^xe^{-x}=-2
    \end{eqnarray*}

    for all \(x\).

    (b) Again, since \(p\equiv0\), we can verify Abel's formula by showing that \(W\) is constant, which is true, since

    \begin{eqnarray*}
    W(x)&=&\displaystyle{\left| \begin{array}{cc}\cos\omega x & \sin\omega x \\
    -\omega\sin\omega x &\omega\cos\omega x
    \end{array} \right|}\\
    &=&\cos\omega x (\omega\cos\omega x)-(-\omega\sin\omega x)\sin\omega x\\ &=&\omega(\cos^2\omega x+\sin^2\omega x)=\omega
    \end{eqnarray*}

    for all \(x\).

    (c) Computing the Wronskian of \(y_1=x^2\) and \(y_2=1/x^2\) directly yields

    \begin{equation}\label{eq:2.1.31}
    W=\left| \begin{array}{cc}
    x^2 & 1/x^2 \\
    2x & -2/x^3
    \end{array} \right|=x^2\left(-{2\over
    x^3}\right)-2x\left(1\over x^2\right)=-{4\over x}.
    \end{equation}

    To verify Abel's formula we rewrite the differential equation as

    \begin{eqnarray*}
    y''+{1\over x}y'-{4\over x^2}y=0
    \end{eqnarray*}

    to see that \(p(x)=1/x\). If \(x_0\) and \(x\) are either both in \((-\infty,0)\) or both in \((0,\infty)\) then

    \begin{eqnarray*}
    \int_{x_0}^x p(t)\,dt=\int_{x_0}^x {dt\over t}=\ln\left(x\over x_0\right),
    \end{eqnarray*}

    so Abel's formula becomes

    \begin{eqnarray*}
    W(x)&=&W(x_0)e^{-\ln(x/x_0)}=W(x_0){x_0\over x}\\
    &=&-\left(4\over x_0\right)\left(x_0\over x\right)\mbox{ from \eqref{eq:2.1.31}}\\
    &=&-{4\over x},
    \end{eqnarray*}

    which is consistent with \eqref{eq:2.1.31}.
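    The calculation in (c) can also be confirmed numerically. The Python sketch below (an editorial addition; the base point \(x_0=1\) and the sample points are arbitrary) compares the directly computed Wronskian \(W(x)=-4/x\) with the right side of Abel's formula on \((0,\infty)\):

    ```python
    import math

    def W(x):
        # Wronskian of y1 = x^2, y2 = 1/x^2, computed directly as in (2.1.31)
        return x*x * (-2/x**3) - 2*x * (1/x**2)   # simplifies to -4/x

    x0 = 1.0                                      # arbitrary base point in (0, inf)
    for x in (0.5, 1.0, 2.0, 5.0):
        abel = W(x0) * math.exp(-math.log(x/x0))  # W(x0) e^{-int p}, with p(t) = 1/t
        assert abs(W(x) - abel) < 1e-12
    print("Abel's formula confirmed")
    ```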

    The next theorem will enable us to complete the proof of Theorem \((2.1.3)\).

    Theorem \(\PageIndex{5}\)

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

    \begin{equation}\label{eq:2.1.32}
    y''+p(x)y'+q(x)y=0
    \end{equation}

    on \((a,b),\) and let \(W=y_1y_2'-y_1'y_2.\) Then \(y_1\) and \(y_2\) are linearly independent on \((a,b)\) if and only if \(W\) has no zeros on \((a,b).\)

    Proof

    We first show that if \(W(x_0)=0\) for some \(x_0\) in \((a,b)\), then \(y_1\) and \(y_2\) are linearly dependent on \((a,b)\). Let \(I\) be a subinterval of \((a,b)\) on which \(y_1\) has no zeros. (If there's no such subinterval, \(y_1\equiv0\) on \((a,b)\), so \(y_1\) and \(y_2\) are linearly dependent, and we're finished with this part of the proof.) Then \(y_2/y_1\) is defined on \(I\), and

    \begin{equation}\label{eq:2.1.33}
    \left(y_2\over y_1\right)'={y_1y_2'-y_1'y_2\over y_1^2}={W\over y_1^2}.
    \end{equation}

    However, if \(W(x_0)=0\), Theorem \((2.1.4)\) implies that \(W\equiv0\) on \((a,b)\). Therefore \eqref{eq:2.1.33} implies that \((y_2/y_1)'\equiv0\), so \(y_2/y_1=c\) (constant) on \(I\). This shows that \(y_2(x)=cy_1(x)\) for all \(x\) in \(I\). However, we want to show that \(y_2(x)=cy_1(x)\) for all \(x\) in \((a,b)\). Let \(Y=y_2-cy_1\). Then \(Y\) is a solution of \eqref{eq:2.1.32} on \((a,b)\) such that \(Y\equiv0\) on \(I\), and therefore \(Y'\equiv0\) on \(I\). Consequently, if \(x_0\) is chosen arbitrarily in \(I\) then \(Y\) is a solution of the initial value problem

    \begin{eqnarray*}
    y''+p(x)y'+q(x)y=0,\quad y(x_0)=0,\quad y'(x_0)=0,
    \end{eqnarray*}

    which implies that \(Y\equiv0\) on \((a,b)\), by the paragraph following Theorem \((2.1.1)\) (see also Exercise \((2.1E.24)\)). Hence, \(y_2-cy_1\equiv0\) on \((a,b)\), which implies that \(y_1\) and \(y_2\) are linearly dependent on \((a,b)\).

    Now suppose \(W\) has no zeros on \((a,b)\). Then \(y_1\) can't be identically zero on \((a,b)\) (why not?), and therefore there is a subinterval \(I\) of \((a,b)\) on which \(y_1\) has no zeros. Since \eqref{eq:2.1.33} implies that \(y_2/y_1\) is nonconstant on \(I\), \(y_2\) isn't a constant multiple of \(y_1\) on \((a,b)\). A similar argument shows that \(y_1\) isn't a constant multiple of \(y_2\) on \((a,b)\), since

    \begin{eqnarray*}
    \left(y_1\over y_2\right)'={y_1'y_2-y_1y_2'\over y_2^2}=-{W\over y_2^2}
    \end{eqnarray*}

    on any subinterval of \((a,b)\) where \(y_2\) has no zeros.

    We can now complete the proof of Theorem \((2.1.3)\). From Theorem \((2.1.5)\), two solutions \(y_1\) and \(y_2\) of \eqref{eq:2.1.32} are linearly independent on \((a,b)\) if and only if \(W\) has no zeros on \((a,b)\). From Theorem \((2.1.4)\) and the motivating comments preceding it, \(\{y_1,y_2\}\) is a fundamental set of solutions of \eqref{eq:2.1.32} if and only if \(W\) has no zeros on \((a,b)\). Therefore \(\{y_1,y_2\}\) is a fundamental set for \eqref{eq:2.1.32} on \((a,b)\) if and only if \(\{y_1,y_2\}\) is linearly independent on \((a,b)\).

    The next theorem summarizes the relationships among the concepts discussed in this section.

    Theorem \(\PageIndex{6}\)

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b)\) and let \(y_1\) and \(y_2\) be solutions of

    \begin{equation}\label{eq:2.1.34}
    y''+p(x)y'+q(x)y=0
    \end{equation}

    on \((a,b).\) Then the following statements are equivalent\(; \) that is\(,\) they are either all true or all false\(.\)

    (a) The general solution of \(\eqref{eq:2.1.34}\) on \((a,b)\) is \(y=c_1y_1+c_2y_2\).

    (b) \(\{y_1,y_2\}\) is a fundamental set of solutions of \(\eqref{eq:2.1.34}\) on \((a,b).\)

    (c) \(\{y_1,y_2\}\) is linearly independent on \((a,b).\)

    (d) The Wronskian of \(\{y_1,y_2\}\) is nonzero at some point in \((a,b).\)

    (e) The Wronskian of \(\{y_1,y_2\}\) is nonzero at all points in \((a,b).\)


    We can apply this theorem to an equation written as

    \begin{eqnarray*}
    P_0(x)y''+P_1(x)y'+P_2(x)y=0
    \end{eqnarray*}

    on an interval \((a,b)\) where \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros.
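    Statement (d) of Theorem \((2.1.6)\) gives a practical test: evaluate the Wronskian at a single convenient point. The Python sketch below (an editorial addition; the sample point \(x=0.7\) is arbitrary) applies the test to two illustrative pairs:

    ```python
    import math

    def wronskian(y, yp, z, zp, x):
        # W = y1*y2' - y1'*y2 evaluated at x
        return y(x)*zp(x) - yp(x)*z(x)

    # {e^x, e^{-x}}: W = -2 everywhere, nonzero, so a fundamental set (Example 4(a))
    w_indep = wronskian(math.exp, math.exp,
                        lambda x: math.exp(-x), lambda x: -math.exp(-x), 0.7)

    # {e^x, 3e^x}: one is a constant multiple of the other, so W vanishes identically
    w_dep = wronskian(math.exp, math.exp,
                      lambda x: 3*math.exp(x), lambda x: 3*math.exp(x), 0.7)

    print(round(w_indep, 12), round(w_dep, 12))   # -2.0 0.0
    ```

    By Theorem \((2.1.4)\), checking one point suffices: for solutions of the same homogeneous equation, the Wronskian is either never zero or identically zero on the interval.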

    Theorem \(\PageIndex{7}\)

    Suppose \(c\) is in \((a,b)\) and \(\alpha\) and \(\beta\) are real numbers, not both zero. Under the assumptions of Theorem \((2.1.6)\), suppose \(y_{1}\) and \(y_{2}\) are solutions of \eqref{eq:2.1.34} such that

    \begin{equation} \label{eq:2.1.35}
    \alpha y_{1}(c)+\beta y_{1}'(c)=0\text{\; and\; \; }
    \alpha y_{2}(c)+\beta y_{2}'(c)=0.
    \end{equation}

    Then \(\{y_{1},y_{2}\}\) isn't linearly independent on \((a,b).\)

    Proof

    Since \(\alpha\) and \(\beta\) are not both zero, \eqref{eq:2.1.35} says that \((\alpha,\beta)\) is a nontrivial solution of a homogeneous \(2\times2\) linear system, which implies that

    \begin{eqnarray*}
    \left|\begin{array}{cc}
    y_{1}(c)&y_{1}'(c)\\y_{2}(c)& y_{2}'(c)
    \end{array}\right|=0, \text{\; so\; \; }
    \left|\begin{array}{cc}
    y_{1}(c)&y_{2}(c)\\ y_{1}'(c)&y_{2}'(c)
    \end{array}\right|=0
    \end{eqnarray*}

    and Theorem \((2.1.6)\) implies the stated conclusion.


    This page titled 2.1: Linear Second Order Homogeneous Equations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Pamini Thangarajah.
