Mathematics LibreTexts

5.1: Homogeneous Linear Equations


    A second order differential equation is said to be linear if it can be written as

    \[\label{eq:5.1.1} y''+p(x)y'+q(x)y=f(x).\]

    We call the function \(f\) on the right a forcing function, since in physical applications it is often related to a force acting on some system modeled by the differential equation. We say that Equation \ref{eq:5.1.1} is homogeneous if \(f\equiv0\) or nonhomogeneous if \(f\not\equiv0\). Since these definitions are like the corresponding definitions in Section 2.1 for the linear first order equation

    \[\label{eq:5.1.2} y'+p(x)y=f(x),\]

    it is natural to expect similarities between methods of solving Equation \ref{eq:5.1.1} and Equation \ref{eq:5.1.2}. However, solving Equation \ref{eq:5.1.1} is more difficult than solving Equation \ref{eq:5.1.2}. For example, while Theorem 2.1.1 gives a formula for the general solution of Equation \ref{eq:5.1.2} in the case where \(f\equiv0\) and Theorem 2.1.2 gives a formula for the case where \(f\not\equiv0\), there are no formulas for the general solution of Equation \ref{eq:5.1.1} in either case. Therefore we must be content to solve linear second order equations of special forms.

    In Section 2.1 we considered the homogeneous equation \(y'+p(x)y=0\) first, and then used a nontrivial solution of this equation to find the general solution of the nonhomogeneous equation \(y'+p(x)y=f(x)\). Although the progression from the homogeneous to the nonhomogeneous case isn’t that simple for the linear second order equation, it is still necessary to solve the homogeneous equation

    \[\label{eq:5.1.3} y''+p(x)y'+q(x)y=0\]

    in order to solve the nonhomogeneous Equation \ref{eq:5.1.1}. This section is devoted to Equation \ref{eq:5.1.3}.

    The next theorem gives sufficient conditions for existence and uniqueness of solutions of initial value problems for Equation \ref{eq:5.1.3} . We omit the proof.

    Theorem \(\PageIndex{1}\)

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(x_0\) be any point in \((a,b),\) and let \(k_0\) and \(k_1\) be arbitrary real numbers\(.\) Then the initial value problem

    \[y''+p(x)y'+q(x)y=0,\ y(x_0)=k_0,\ y'(x_0)=k_1 \nonumber \]

    has a unique solution on \((a,b).\)

    Since \(y\equiv0\) is obviously a solution of Equation \ref{eq:5.1.3} we call it the trivial solution. Any other solution is nontrivial. Under the assumptions of Theorem \(\PageIndex{1}\) , the only solution of the initial value problem

    \[y''+p(x)y'+q(x)y=0,\ y(x_0)=0,\ y'(x_0)=0 \nonumber \]

    on \((a,b)\) is the trivial solution (Exercise 5.1.24).

    The next three examples illustrate concepts that we’ll develop later in this section. You shouldn’t be concerned with how to find the given solutions of the equations in these examples. This will be explained in later sections.

    Example \(\PageIndex{1}\)

    The coefficients of \(y'\) and \(y\) in

    \[\label{eq:5.1.4} y''-y=0\]

    are the constant functions \(p\equiv0\) and \(q\equiv-1\), which are continuous on \((-\infty,\infty)\). Therefore Theorem \(\PageIndex{1}\) implies that every initial value problem for Equation \ref{eq:5.1.4} has a unique solution on \((-\infty,\infty)\).

    1. Verify that \(y_1=e^x\) and \(y_2=e^{-x}\) are solutions of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).

    2. Verify that if \(c_1\) and \(c_2\) are arbitrary constants, \(y=c_1e^x+c_2e^{-x}\) is a solution of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).

    3. Solve the initial value problem \[\label{eq:5.1.5} y''-y=0,\quad y(0)=1,\quad y'(0)=3.\]

    Solution:

    a. If \(y_1=e^x\) then \(y_1'=e^x\) and \(y_1''=e^x=y_1\), so \(y_1''-y_1=0\). If \(y_2=e^{-x}\), then \(y_2'=-e^{-x}\) and \(y_2''=e^{-x}=y_2\), so \(y_2''-y_2=0\).

    b. If \[\label{eq:5.1.6} y=c_1e^x+c_2e^{-x}\] then \[\label{eq:5.1.7} y'=c_1e^x-c_2e^{-x}\] and \[y''=c_1e^x+c_2e^{-x},\nonumber \]

    so \[\begin{aligned} y''-y&=(c_1e^x+c_2e^{-x})-(c_1e^x+c_2e^{-x})\\ &=c_1(e^x-e^x)+c_2(e^{-x}-e^{-x})=0\end{aligned}\nonumber \] for all \(x\). Therefore \(y=c_1e^x+c_2e^{-x}\) is a solution of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).

    c. We can solve Equation \ref{eq:5.1.5} by choosing \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.6} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in Equation \ref{eq:5.1.6} and Equation \ref{eq:5.1.7} shows that this is equivalent to

    \[\begin{aligned} c_1+c_2&=1\\ c_1-c_2&=3.\end{aligned}\nonumber \]

    Solving these equations yields \(c_1=2\) and \(c_2=-1\). Therefore \(y=2e^x-e^{-x}\) is the unique solution of Equation \ref{eq:5.1.5} on \((-\infty,\infty)\).
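    As a sanity check, the computation in part (c) can be reproduced symbolically. The following sketch assumes SymPy is available; it is an aid for checking hand work, not part of the text's method.

    ```python
    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Solve the IVP y'' - y = 0, y(0) = 1, y'(0) = 3 of Equation (5.1.5).
    sol = sp.dsolve(y(x).diff(x, 2) - y(x), y(x),
                    ics={y(0): 1, y(x).diff(x).subs(x, 0): 3})

    # The result agrees with the hand computation: y = 2e^x - e^{-x}.
    assert sp.simplify(sol.rhs - (2*sp.exp(x) - sp.exp(-x))) == 0
    ```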

    Example \(\PageIndex{2}\)

    Let \(\omega\) be a positive constant. The coefficients of \(y'\) and \(y\) in

    \[\label{eq:5.1.8} y''+\omega^2y=0\]

    are the constant functions \(p\equiv0\) and \(q\equiv\omega^2\), which are continuous on \((-\infty,\infty)\). Therefore Theorem \(\PageIndex{1}\) implies that every initial value problem for Equation \ref{eq:5.1.8} has a unique solution on \((-\infty,\infty)\).

    1. Verify that \(y_1=\cos\omega x\) and \(y_2=\sin\omega x\) are solutions of Equation \ref{eq:5.1.8} on \((-\infty,\infty)\).
    2. Verify that if \(c_1\) and \(c_2\) are arbitrary constants then \(y=c_1\cos\omega x+c_2\sin\omega x\) is a solution of Equation \ref{eq:5.1.8} on \((-\infty,\infty)\).
    3. Solve the initial value problem \[\label{eq:5.1.9} y''+\omega^2y=0,\quad y(0)=1,\quad y'(0)=3.\]

    Solution:

    a. If \(y_1=\cos\omega x\) then \(y_1'=-\omega\sin\omega x\) and \(y_1''=-\omega^2\cos\omega x=-\omega^2y_1\), so \(y_1''+\omega^2y_1=0\). If \(y_2=\sin\omega x\) then, \(y_2'=\omega\cos\omega x\) and \(y_2''=-\omega^2\sin\omega x=-\omega^2y_2\), so \(y_2''+\omega^2y_2=0\).

    b. If \[\label{eq:5.1.10} y=c_1\cos\omega x+c_2\sin\omega x\] then \[\label{eq:5.1.11} y'=\omega(-c_1\sin\omega x+c_2\cos\omega x)\] and \[y''=-\omega^2(c_1\cos\omega x+c_2\sin\omega x),\nonumber \] so \[\begin{aligned} y''+\omega^2y&= -\omega^2(c_1\cos\omega x+c_2\sin\omega x) +\omega^2(c_1\cos\omega x+c_2\sin\omega x)\\ &=c_1\omega^2(-\cos\omega x+\cos\omega x)+ c_2\omega^2(-\sin\omega x+\sin\omega x)=0\end{aligned}\nonumber \] for all \(x\). Therefore \(y=c_1\cos\omega x+c_2\sin\omega x\) is a solution of Equation \ref{eq:5.1.8} on \((-\infty,\infty)\).

    c. To solve Equation \ref{eq:5.1.9}, we must choose \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.10} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in Equation \ref{eq:5.1.10} and Equation \ref{eq:5.1.11} shows that \(c_1=1\) and \(c_2=3/\omega\). Therefore

    \[y=\cos\omega x+{3\over\omega}\sin\omega x\nonumber \]

    is the unique solution of Equation \ref{eq:5.1.9} on \((-\infty,\infty)\).
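    Parts (a)–(c) can also be checked by direct substitution with a computer algebra system; the following SymPy sketch (an assumption, not part of the text) confirms the solution of Equation \ref{eq:5.1.9}.

    ```python
    import sympy as sp

    x = sp.symbols('x')
    w = sp.symbols('omega', positive=True)

    # Candidate solution of y'' + omega^2 y = 0, y(0) = 1, y'(0) = 3.
    y = sp.cos(w*x) + 3/w*sp.sin(w*x)

    assert sp.simplify(y.diff(x, 2) + w**2*y) == 0   # satisfies the equation
    assert y.subs(x, 0) == 1                         # y(0) = 1
    assert y.diff(x).subs(x, 0) == 3                 # y'(0) = 3
    ```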

    Theorem \(\PageIndex{1}\) implies that if \(k_0\) and \(k_1\) are arbitrary real numbers then the initial value problem

    \[\label{eq:5.1.12} P_0(x)y''+P_1(x)y'+P_2(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1\]

    has a unique solution on an interval \((a,b)\) that contains \(x_0\), provided that \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros on \((a,b)\). To see this, we rewrite the differential equation in Equation \ref{eq:5.1.12} as

    \[y''+{P_1(x)\over P_0(x)}y'+{P_2(x)\over P_0(x)}y=0\nonumber \]

    and apply Theorem \(\PageIndex{1}\) with \(p=P_1/P_0\) and \(q=P_2/P_0\).

    Example \(\PageIndex{3}\)

    The equation

    \[\label{eq:5.1.13} x^2y''+xy'-4y=0\]

    has the form of the differential equation in Equation \ref{eq:5.1.12}, with \(P_0(x)=x^2\), \(P_1(x)=x\), and \(P_2(x)=-4\), which are all continuous on \((-\infty,\infty)\). However, since \(P_0(0)=0\) we must consider solutions of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\). Since \(P_0\) has no zeros on these intervals, Theorem \(\PageIndex{1}\) implies that the initial value problem

    \[x^2y''+xy'-4y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1\nonumber \]

    has a unique solution on \((0,\infty)\) if \(x_0>0\), or on \((-\infty,0)\) if \(x_0<0\).

    1. Verify that \(y_1=x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,\infty)\) and \(y_2=1/x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\).
    2. Verify that if \(c_1\) and \(c_2\) are any constants then \(y=c_1x^2+c_2/x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\).
    3. Solve the initial value problem \[\label{eq:5.1.14} x^2y''+xy'-4y=0,\quad y(1)=2,\quad y'(1)=0.\]
    4. Solve the initial value problem \[\label{eq:5.1.15} x^2y''+xy'-4y=0,\quad y(-1)=2,\quad y'(-1)=0.\]

    Solution:

    a. If \(y_1=x^2\) then \(y_1'=2x\) and \(y_1''=2\), so \[x^2y_1''+xy_1'-4y_1=x^2(2)+x(2x)-4x^2=0\nonumber \] for \(x\) in \((-\infty,\infty)\). If \(y_2=1/x^2\), then \(y_2'=-2/x^3\) and \(y_2''=6/x^4\), so \[x^2y_2''+xy_2'-4y_2=x^2\left(6\over x^4\right)-x\left(2\over x^3\right)-{4\over x^2}=0\nonumber \] for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    b. If \[\label{eq:5.1.16} y=c_1x^2+{c_2\over x^2}\] then \[\label{eq:5.1.17} y'=2c_1x-{2c_2\over x^3}\] and \[y''=2c_1+{6c_2\over x^4},\nonumber \] so \[\begin{aligned} x^{2}y''+xy'-4y&=x^{2}\left(2c_{1}+\frac{6c_{2}}{x^{4}} \right)+x\left(2c_{1}x-\frac{2c_{2}}{x^{3}} \right)-4\left(c_{1}x^{2}+\frac{c_{2}}{x^{2}} \right) \\ &=c_{1}(2x^{2}+2x^{2}-4x^{2})+c_{2}\left(\frac{6}{x^{2}}-\frac{2}{x^{2}}-\frac{4}{x^{2}} \right) \\ &=c_{1}\cdot 0+c_{2}\cdot 0 = 0 \end{aligned}\nonumber \] for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    c. To solve Equation \ref{eq:5.1.14} , we choose \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.16} so that \(y(1)=2\) and \(y'(1)=0\). Setting \(x=1\) in Equation \ref{eq:5.1.16} and Equation \ref{eq:5.1.17} shows that this is equivalent to

    \[\begin{aligned} \phantom{2}c_1+\phantom{2}c_2&=2\\ 2c_1-2c_2&=0.\end{aligned}\nonumber \]

    Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of Equation \ref{eq:5.1.14} on \((0,\infty)\).

    d. We can solve Equation \ref{eq:5.1.15} by choosing \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.16} so that \(y(-1)=2\) and \(y'(-1)=0\). Setting \(x=-1\) in Equation \ref{eq:5.1.16} and Equation \ref{eq:5.1.17} shows that this is equivalent to

    \[\begin{aligned} \phantom{-2}c_1+\phantom{2}c_2&=2\\ -2c_1+2c_2&=0.\end{aligned}\nonumber \]

    Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of Equation \ref{eq:5.1.15} on \((-\infty,0)\).

    Although the formulas for the solutions of Equation \ref{eq:5.1.14} and Equation \ref{eq:5.1.15} are both \(y=x^2+1/x^2\), you should not conclude that these two initial value problems have the same solution. Remember that a solution of an initial value problem is defined on an interval that contains the initial point; therefore, the solution of Equation \ref{eq:5.1.14} is \(y=x^2+1/x^2\) on the interval \((0,\infty)\), which contains the initial point \(x_0=1\), while the solution of Equation \ref{eq:5.1.15} is \(y=x^2+1/x^2\) on the interval \((-\infty,0)\), which contains the initial point \(x_0=-1\).
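    Both initial value problems can be checked symbolically as well. A brief SymPy sketch (assuming SymPy is available; the interval distinction discussed above is, of course, invisible to this purely algebraic check):

    ```python
    import sympy as sp

    x = sp.symbols('x')
    y = x**2 + 1/x**2   # the common solution formula for (5.1.14) and (5.1.15)

    # Satisfies x^2 y'' + x y' - 4y = 0 wherever x != 0 ...
    assert sp.simplify(x**2*y.diff(x, 2) + x*y.diff(x) - 4*y) == 0
    # ... and the initial conditions at x0 = 1 and at x0 = -1.
    assert y.subs(x, 1) == 2 and y.diff(x).subs(x, 1) == 0
    assert y.subs(x, -1) == 2 and y.diff(x).subs(x, -1) == 0
    ```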

    The General Solution of a Homogeneous Linear Second Order Equation

    If \(y_1\) and \(y_2\) are defined on an interval \((a,b)\) and \(c_1\) and \(c_2\) are constants, then

    \[y=c_1y_1+c_2y_2\nonumber \]

    is a linear combination of \(y_1\) and \(y_2\). For example, \(y=2\cos x+7 \sin x\) is a linear combination of \(y_1= \cos x\) and \(y_2=\sin x\), with \(c_1=2\) and \(c_2=7\).

    The next theorem states a fact that we’ve already verified in Examples \(\PageIndex{1}\), \(\PageIndex{2}\), and \(\PageIndex{3}\).

    Theorem \(\PageIndex{2}\)

    If \(y_1\) and \(y_2\) are solutions of the homogeneous equation

    \[\label{eq:5.1.18} y''+p(x)y'+q(x)y=0\]

    on \((a,b),\) then any linear combination

    \[\label{eq:5.1.19} y=c_1y_1+c_2y_2\]

    of \(y_1\) and \(y_2\) is also a solution of \(\eqref{eq:5.1.18}\) on \((a,b).\)

    Proof

    If \[y=c_1y_1+c_2y_2\nonumber \] then \[y'=c_1y_1'+c_2y_2'\quad\text{ and} \quad y''=c_1y_1''+c_2y_2''.\nonumber \]

    Therefore

    \[\begin{aligned} y''+p(x)y'+q(x)y&=(c_1y_1''+c_2y_2'')+p(x)(c_1y_1'+c_2y_2') +q(x)(c_1y_1+c_2y_2)\\ &=c_1\left(y_1''+p(x)y_1'+q(x)y_1\right) +c_2\left(y_2''+p(x)y_2'+q(x)y_2\right)\\ &=c_1\cdot0+c_2\cdot0=0,\end{aligned}\nonumber \]

    since \(y_1\) and \(y_2\) are solutions of Equation \ref{eq:5.1.18} .
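    The proof is just the linearity of the operator \(y\mapsto y''+p(x)y'+q(x)y\). That linearity can itself be verified symbolically for arbitrary \(p\), \(q\), \(y_1\), and \(y_2\); a SymPy sketch (an assumption, supplementing the hand proof above):

    ```python
    import sympy as sp

    x, c1, c2 = sp.symbols('x c1 c2')
    p, q, y1, y2 = (sp.Function(n) for n in ('p', 'q', 'y1', 'y2'))

    def L(u):
        # The differential operator u -> u'' + p(x) u' + q(x) u.
        return u.diff(x, 2) + p(x)*u.diff(x) + q(x)*u

    # Linearity: L[c1 y1 + c2 y2] = c1 L[y1] + c2 L[y2] for any y1, y2.
    assert sp.expand(L(c1*y1(x) + c2*y2(x)) - c1*L(y1(x)) - c2*L(y2(x))) == 0
    ```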

    We say that \(\{y_1,y_2\}\) is a fundamental set of solutions of \(\eqref{eq:5.1.18}\) on \((a,b)\) if every solution of Equation \ref{eq:5.1.18} on \((a,b)\) can be written as a linear combination of \(y_1\) and \(y_2\) as in Equation \ref{eq:5.1.19} . In this case we say that Equation \ref{eq:5.1.19} is the general solution of \(\eqref{eq:5.1.18}\) on \((a,b)\).

    Linear Independence

    We need a way to determine whether a given set \(\{y_1,y_2\}\) of solutions of Equation \ref{eq:5.1.18} is a fundamental set. The next definition will enable us to state necessary and sufficient conditions for this.

    We say that two functions \(y_1\) and \(y_2\) defined on an interval \((a,b)\) are linearly independent on \((a,b)\) if neither is a constant multiple of the other on \((a,b)\). (In particular, this means that neither can be the trivial solution of Equation \ref{eq:5.1.18} , since, for example, if \(y_1\equiv0\) we could write \(y_1=0y_2\).) We’ll also say that the set \(\{y_1,y_2\}\) is linearly independent on \((a,b)\).

    Theorem \(\PageIndex{3}\)

    Suppose \(p\) and \(q\) are continuous on \((a,b).\) Then a set \(\{y_1,y_2\}\) of solutions of

    \[\label{eq:5.1.20} y''+p(x)y'+q(x)y=0\]

    on \((a,b)\) is a fundamental set if and only if \(\{y_1,y_2\}\) is linearly independent on \((a,b).\)

    Proof

    We’ll present the proof of Theorem \(\PageIndex{3}\) in steps worth regarding as theorems in their own right. However, let’s first interpret Theorem \(\PageIndex{3}\) in terms of Examples \(\PageIndex{1}\), \(\PageIndex{2}\), and \(\PageIndex{3}\).

    Example \(\PageIndex{4}\)

    Since \(e^x/e^{-x}=e^{2x}\) is nonconstant, Theorem \(\PageIndex{3}\) implies that \(y=c_1e^x+c_2e^{-x}\) is the general solution of \(y''-y=0\) on \((-\infty,\infty)\).

    Since \(\cos\omega x/\sin\omega x=\cot\omega x\) is nonconstant, Theorem \(\PageIndex{3}\) implies that \(y=c_1\cos\omega x+c_2\sin\omega x\) is the general solution of \(y''+\omega^2y=0\) on \((-\infty,\infty)\).

    Since \(x^2/x^{-2}=x^4\) is nonconstant, Theorem \(\PageIndex{3}\) implies that \(y=c_1x^2+c_2/x^2\) is the general solution of \(x^2y''+xy'-4y=0\) on \((-\infty,0)\) and \((0,\infty)\).
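    The nonconstancy tests in this example amount to checking that the derivative of each ratio is not identically zero, which is easy to automate. A SymPy sketch (assuming SymPy; the pairs are the ones from Examples 1–3), restricted to \((0,\infty)\) so all three ratios are defined:

    ```python
    import sympy as sp

    x = sp.symbols('x', positive=True)       # work on (0, oo)
    w = sp.symbols('omega', positive=True)

    # Each pair is linearly independent because y1/y2 is nonconstant,
    # i.e. (y1/y2)' is not identically zero.
    pairs = [(sp.exp(x), sp.exp(-x)),        # ratio e^{2x}
             (sp.cos(w*x), sp.sin(w*x)),     # ratio cot(omega x)
             (x**2, 1/x**2)]                 # ratio x^4
    for y1, y2 in pairs:
        assert sp.simplify((y1/y2).diff(x)) != 0
    ```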

    The Wronskian and Abel's Formula

    To motivate a result that we need in order to prove Theorem \(\PageIndex{3}\), let’s see what is required to prove that \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.20} on \((a,b)\). Let \(x_0\) be an arbitrary point in \((a,b)\), and suppose \(y\) is an arbitrary solution of Equation \ref{eq:5.1.20} on \((a,b)\). Then \(y\) is the unique solution of the initial value problem

    \[\label{eq:5.1.21} y''+p(x)y'+q(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1;\]

    that is, \(k_0\) and \(k_1\) are the numbers obtained by evaluating \(y\) and \(y'\) at \(x_0\). Moreover, \(k_0\) and \(k_1\) can be any real numbers, since Theorem \(\PageIndex{1}\) implies that Equation \ref{eq:5.1.21} has a solution no matter how \(k_0\) and \(k_1\) are chosen. Therefore \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.20} on \((a,b)\) if and only if it is possible to write the solution of an arbitrary initial value problem Equation \ref{eq:5.1.21} as \(y=c_1y_1+c_2y_2\). This is equivalent to requiring that the system

    \[\label{eq:5.1.22} \begin{array}{rcl} c_1y_1(x_0)+c_2y_2(x_0)&=k_0\\ c_1y_1'(x_0)+c_2y_2'(x_0)&=k_1 \end{array}\]

    has a solution \((c_1,c_2)\) for every choice of \((k_0,k_1)\). Let’s try to solve Equation \ref{eq:5.1.22} .

    Multiplying the first equation in Equation \ref{eq:5.1.22} by \(y_2'(x_0)\) and the second by \(y_2(x_0)\) yields

    \[\begin{aligned} c_1y_1(x_0)y_2'(x_0)+c_2y_2(x_0)y_2'(x_0)&= y_2'(x_0)k_0\\ c_1y_1'(x_0)y_2(x_0)+c_2y_2'(x_0)y_2(x_0)&= y_2(x_0)k_1,\end{aligned}\]

    and subtracting the second equation here from the first yields

    \[\label{eq:5.1.23} \left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_1= y_2'(x_0)k_0-y_2(x_0)k_1.\]

    Multiplying the first equation in Equation \ref{eq:5.1.22} by \(y_1'(x_0)\) and the second by \(y_1(x_0)\) yields

    \[\begin{aligned} c_1y_1(x_0)y_1'(x_0)+c_2y_2(x_0)y_1'(x_0)&= y_1'(x_0)k_0\\ c_1y_1'(x_0)y_1(x_0)+c_2y_2'(x_0)y_1(x_0)&= y_1(x_0)k_1,\end{aligned}\]

    and subtracting the first equation here from the second yields

    \[\label{eq:5.1.24} \left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_2= y_1(x_0)k_1-y_1'(x_0)k_0.\]

    If

    \[y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)=0,\nonumber \]

    it is impossible to satisfy Equation \ref{eq:5.1.23} and Equation \ref{eq:5.1.24} (and therefore Equation \ref{eq:5.1.22} ) unless \(k_0\) and \(k_1\) happen to satisfy

    \[\begin{aligned} y_1(x_0)k_1-y_1'(x_0)k_0&=0\\ y_2'(x_0)k_0-y_2(x_0)k_1&=0.\end{aligned}\]

    On the other hand, if

    \[\label{eq:5.1.25} y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\ne0\]

    we can divide Equation \ref{eq:5.1.23} and Equation \ref{eq:5.1.24} through by the quantity on the left to obtain

    \[\label{eq:5.1.26} \begin{array}{rcl} c_1&={y_2'(x_0)k_0-y_2(x_0)k_1\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}\\  c_2&={y_1(x_0)k_1-y_1'(x_0)k_0\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}, \end{array}\]

    no matter how \(k_0\) and \(k_1\) are chosen. This motivates us to consider conditions on \(y_1\) and \(y_2\) that imply Equation \ref{eq:5.1.25} .

    Theorem \(\PageIndex{4}\)

    Suppose \(p\) and \(q\) are continuous on \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

    \[\label{eq:5.1.27} y''+p(x)y'+q(x)y=0\]

    on \((a,b)\), and define

    \[\label{eq:5.1.28} W=y_1y_2'-y_1'y_2.\]

    Let \(x_0\) be any point in \((a,b).\) Then

    \[\label{eq:5.1.29} W(x)=W(x_0) e^{-\int^x_{x_0}p(t)\:dt}, \quad a<x<b.\]

    Therefore either \(W\) has no zeros in \((a,b)\) or \(W\equiv0\) on \((a,b).\)

    Proof

    Differentiating Equation \ref{eq:5.1.28} yields

    \[\label{eq:5.1.30} W'=y'_1y'_2+y_1y''_2-y'_1y'_2-y''_1y_2= y_1y''_2-y''_1y_2.\]

    Since \(y_1\) and \(y_2\) both satisfy Equation \ref{eq:5.1.27} ,

    \[y''_1 =-py'_1-qy_1\quad \text{and} \quad y''_2 =-py'_2-qy_2.\nonumber \]

    Substituting these into Equation \ref{eq:5.1.30} yields

    \[\begin{aligned} W'&= -y_1\bigl(py'_2+qy_2\bigr) +y_2\bigl(py'_1+qy_1\bigr) \\ &= -p(y_1y'_2-y_2y'_1)-q(y_1y_2-y_2y_1)\\ &= -p(y_1y'_2-y_2y'_1)=-pW.\end{aligned}\nonumber \]

    Therefore \(W'+p(x)W=0\); that is, \(W\) is the solution of the initial value problem

    \[y'+p(x)y=0,\quad y(x_0)=W(x_0).\nonumber \]

    We leave it to you to verify by separation of variables that this implies Equation \ref{eq:5.1.29} . If \(W(x_0)\ne0\), Equation \ref{eq:5.1.29} implies that \(W\) has no zeros in \((a,b)\), since an exponential is never zero. On the other hand, if \(W(x_0)=0\), Equation \ref{eq:5.1.29} implies that \(W(x)=0\) for all \(x\) in \((a,b)\).
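    For reference, the separation-of-variables computation is short. On any interval where \(W\) has no zeros,

    \[{W'\over W}=-p(x), \quad\text{so}\quad \ln\left|{W(x)\over W(x_0)}\right|=-\int_{x_0}^x p(t)\,dt.\nonumber \]

    Exponentiating gives \(|W(x)/W(x_0)|=e^{-\int_{x_0}^x p(t)\,dt}\); since \(W\) is continuous and nonzero there, the ratio \(W(x)/W(x_0)\) cannot change sign and equals \(1\) at \(x_0\), so it is positive and Equation \ref{eq:5.1.29} follows. If \(W(x_0)=0\), then \(W\equiv0\) is the unique solution of the initial value problem and Equation \ref{eq:5.1.29} holds trivially.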

    The function \(W\) defined in Equation \ref{eq:5.1.28} is the Wronskian of \(\{y_1,y_2\}\). Formula Equation \ref{eq:5.1.29} is Abel’s formula.

    The Wronskian of \(\{y_1,y_2\}\) is usually written as the determinant

    \[W=\left| \begin{array}{cc} y_1 & y_2 \\ y'_1 & y'_2 \end{array} \right|.\nonumber \]

    The expressions in Equation \ref{eq:5.1.26} for \(c_1\) and \(c_2\) can be written in terms of determinants as

    \[c_1={1\over W(x_0)} \left| \begin{array}{cc} k_0 & y_2(x_0) \\ k_1 & y'_2(x_0) \end{array} \right| \quad \text{and} \quad c_2={1\over W(x_0)} \left| \begin{array}{cc} y_1(x_0) & k_0 \\ y'_1(x_0) &k_1 \end{array} \right|.\nonumber \]

    If you’ve taken linear algebra you may recognize this as Cramer’s rule.
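    Applied to Example 1 (with \(x_0=0\), \(k_0=1\), \(k_1=3\)), these determinant formulas reproduce the coefficients found earlier. A SymPy sketch (assuming SymPy is available):

    ```python
    import sympy as sp

    x = sp.symbols('x')
    y1, y2 = sp.exp(x), sp.exp(-x)          # Example 1
    x0, k0, k1 = 0, 1, 3

    W0 = (y1*y2.diff(x) - y1.diff(x)*y2).subs(x, x0)   # W(0) = -2
    c1 = sp.Matrix([[k0, y2.subs(x, x0)],
                    [k1, y2.diff(x).subs(x, x0)]]).det() / W0
    c2 = sp.Matrix([[y1.subs(x, x0), k0],
                    [y1.diff(x).subs(x, x0), k1]]).det() / W0

    assert (W0, c1, c2) == (-2, 2, -1)      # matches Example 1(c)
    ```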

    Example \(\PageIndex{5}\)

    Verify Abel’s formula for the following differential equations and the corresponding solutions, from Examples \(\PageIndex{1}\), \(\PageIndex{2}\), \(\PageIndex{3}\).

    1. \(y''-y=0;\quad y_1=e^x,\; y_2=e^{-x}\)

    2. \(y''+\omega^2y=0;\quad y_1=\cos\omega x,\; y_2=\sin\omega x\)

    3. \(x^2y''+xy'-4y=0;\quad y_1=x^2,\; y_2=1/x^2\)

    Solution:

    a. Since \(p\equiv0\), we can verify Abel’s formula by showing that \(W\) is constant, which is true, since

    \[W(x)=\left| \begin{array}{rr} e^x & e^{-x} \\ e^x & -e^{-x} \end{array} \right|=e^x(-e^{-x})-e^xe^{-x}=-2\nonumber \]

    for all \(x\).

    b. Again, since \(p\equiv0\), we can verify Abel’s formula by showing that \(W\) is constant, which is true, since

    \[\begin{aligned} W(x)&={\left| \begin{array}{cc} \cos\omega x & \sin\omega x \\ -\omega\sin\omega x &\omega\cos\omega x \end{array} \right|}\\ &=\cos\omega x (\omega\cos\omega x)-(-\omega\sin\omega x)\sin\omega x\\ &=\omega(\cos^2\omega x+\sin^2\omega x)=\omega\end{aligned}\nonumber \]

    for all \(x\).

    c. Computing the Wronskian of \(y_1=x^2\) and \(y_2=1/x^2\) directly yields

    \[\label{eq:5.1.31} W=\left| \begin{array}{cc} x^2 & 1/x^2 \\ 2x & -2/x^3 \end{array} \right|=x^2\left(-{2\over x^3}\right)-2x\left(1\over x^2\right)=-{4\over x}.\]

    To verify Abel’s formula we rewrite the differential equation as

    \[y''+{1\over x}y'-{4\over x^2}y=0\nonumber \]

    to see that \(p(x)=1/x\). If \(x_0\) and \(x\) are either both in \((-\infty,0)\) or both in \((0,\infty)\) then

    \[\int_{x_0}^x p(t)\,dt=\int_{x_0}^x {dt\over t}=\ln\left(x\over x_0\right),\nonumber \]

    so Abel’s formula becomes

    \[\begin{aligned} W(x)&=W(x_0)e^{-\ln(x/x_0)}=W(x_0){x_0\over x}\\ &=-\left(4\over x_0\right)\left(x_0\over x\right)\quad \text{from } \eqref{eq:5.1.31}\\ &=-{4\over x},\end{aligned}\nonumber \]

    which is consistent with Equation \ref{eq:5.1.31} .
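    This verification is also easy to confirm symbolically. A SymPy sketch (assuming SymPy), working on \((0,\infty)\):

    ```python
    import sympy as sp

    x, x0, t = sp.symbols('x x0 t', positive=True)   # restrict to (0, oo)

    y1, y2 = x**2, 1/x**2
    W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)
    assert sp.simplify(W + 4/x) == 0                 # W(x) = -4/x

    # Abel's formula with p(t) = 1/t reproduces the same Wronskian.
    abel = W.subs(x, x0) * sp.exp(-sp.integrate(1/t, (t, x0, x)))
    assert sp.simplify(abel - W) == 0
    ```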

    The next theorem will enable us to complete the proof of Theorem \(\PageIndex{3}\).

    Theorem \(\PageIndex{5}\)

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

    \[\label{eq:5.1.32} y''+p(x)y'+q(x)y=0\]

    on \((a,b),\) and let \(W=y_1y_2'-y_1'y_2.\) Then \(y_1\) and \(y_2\) are linearly independent on \((a,b)\) if and only if \(W\) has no zeros on \((a,b).\)

    Proof

    We first show that if \(W(x_0)=0\) for some \(x_0\) in \((a,b)\), then \(y_1\) and \(y_2\) are linearly dependent on \((a,b)\). Let \(I\) be a subinterval of \((a,b)\) on which \(y_1\) has no zeros. (If there’s no such subinterval, then \(y_1\equiv0\) on \((a,b)\), so \(y_1\) and \(y_2\) are linearly dependent, and we are finished with this part of the proof.) Then \(y_2/y_1\) is defined on \(I\), and

    \[\label{eq:5.1.33} \left(y_2\over y_1\right)'={y_1y_2'-y_1'y_2\over y_1^2}={W\over y_1^2}.\]

    However, if \(W(x_0)=0\), Theorem \(\PageIndex{4}\) implies that \(W\equiv0\) on \((a,b)\). Therefore Equation \ref{eq:5.1.33} implies that \((y_2/y_1)'\equiv0\), so \(y_2/y_1=c\) (constant) on \(I\). This shows that \(y_2(x)=cy_1(x)\) for all \(x\) in \(I\). However, we want to show that \(y_2(x)=cy_1(x)\) for all \(x\) in \((a,b)\). Let \(Y=y_2-cy_1\). Then \(Y\) is a solution of Equation \ref{eq:5.1.32} on \((a,b)\) such that \(Y\equiv0\) on \(I\), and therefore \(Y'\equiv0\) on \(I\). Consequently, if \(x_0\) is chosen arbitrarily in \(I\) then \(Y\) is a solution of the initial value problem

    \[y''+p(x)y'+q(x)y=0,\quad y(x_0)=0,\quad y'(x_0)=0,\nonumber \]

    which implies that \(Y\equiv0\) on \((a,b)\), by the paragraph following Theorem \(\PageIndex{1}\) (see also Exercise 5.1.24). Hence, \(y_2-cy_1\equiv0\) on \((a,b)\), which implies that \(y_1\) and \(y_2\) are linearly dependent on \((a,b)\).

    Now suppose \(W\) has no zeros on \((a,b)\). Then \(y_1\) can’t be identically zero on \((a,b)\) (why not?), and therefore there is a subinterval \(I\) of \((a,b)\) on which \(y_1\) has no zeros. Since Equation \ref{eq:5.1.33} implies that \(y_2/y_1\) is nonconstant on \(I\), \(y_2\) isn’t a constant multiple of \(y_1\) on \((a,b)\). A similar argument shows that \(y_1\) isn’t a constant multiple of \(y_2\) on \((a,b)\), since

    \[\left(y_1\over y_2\right)'={y_1'y_2-y_1y_2'\over y_2^2}=-{W\over y_2^2}\nonumber \]

    on any subinterval of \((a,b)\) where \(y_2\) has no zeros.

    We can now complete the proof of Theorem \(\PageIndex{3}\). From Theorem \(\PageIndex{5}\), two solutions \(y_1\) and \(y_2\) of Equation \ref{eq:5.1.32} are linearly independent on \((a,b)\) if and only if \(W\) has no zeros on \((a,b)\). From Theorem \(\PageIndex{4}\) and the motivating comments preceding it, \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.32} if and only if \(W\) has no zeros on \((a,b)\). Therefore \(\{y_1,y_2\}\) is a fundamental set for Equation \ref{eq:5.1.32} on \((a,b)\) if and only if \(\{y_1,y_2\}\) is linearly independent on \((a,b)\).

    The next theorem summarizes the relationships among the concepts discussed in this section.

    Theorem \(\PageIndex{6}\)

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b)\) and let \(y_1\) and \(y_2\) be solutions of

    \[\label{eq:5.1.34} y''+p(x)y'+q(x)y=0\]

    on \((a,b).\) Then the following statements are equivalent\(;\) that is\(,\) they are either all true or all false\(.\)
    1. The general solution of \(\eqref{eq:5.1.34}\) on \((a,b)\) is \(y=c_1y_1+c_2y_2\).
    2. \(\{y_1,y_2\}\) is a fundamental set of solutions of \(\eqref{eq:5.1.34}\) on \((a,b).\)
    3. \(\{y_1,y_2\}\) is linearly independent on \((a,b).\)
    4. The Wronskian of \(\{y_1,y_2\}\) is nonzero at some point in \((a,b).\)
    5. The Wronskian of \(\{y_1,y_2\}\) is nonzero at all points in \((a,b).\)

    We can apply this theorem to an equation written as

    \[P_0(x)y''+P_1(x)y'+P_2(x)y=0\nonumber \]

    on an interval \((a,b)\) where \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros.

    Theorem \(\PageIndex{7}\)

    Suppose \(c\) is in \((a,b)\) and \(\alpha\) and \(\beta\) are real numbers, not both zero. Under the assumptions of Theorem \(\PageIndex{6}\), suppose \(y_{1}\) and \(y_{2}\) are solutions of Equation \ref{eq:5.1.34} such that

    \[\label{eq:5.1.35} \alpha y_{1}(c)+\beta y_{1}'(c)=0\quad\text{and}\quad \alpha y_{2}(c)+\beta y_{2}'(c)=0.\]

    Then \(\{y_{1},y_{2}\}\) isn’t linearly independent on \((a,b).\)

    Proof

    Since \(\alpha\) and \(\beta\) are not both zero, Equation \ref{eq:5.1.35} implies that

    \[\left|\begin{array}{cc} y_{1}(c)&y_{1}'(c)\\y_{2}(c)& y_{2}'(c) \end{array}\right|=0, \quad\text{so}\quad \left|\begin{array}{cc} y_{1}(c)&y_{2}(c)\\ y_{1}'(c)&y_{2}'(c) \end{array}\right|=0\nonumber \]

    and, since the second determinant is \(W(c)\), Theorem \(\PageIndex{6}\) implies the stated conclusion.