5.1: Homogeneous Linear Equations


    Caution

    If you have not had Math 410 (linear algebra), then you will need to read Appendix 11.3 before starting Chapter 5.

    Definition 5.1.1

    A second order differential equation is said to be linear if it can be written as

    \[\label{eq:5.1.1} y''+p(x)y'+q(x)y=f(x).\]

    We call the function \(f\) on the right a forcing function, since in physical applications it is often related to a force acting on some system modeled by the differential equation. We say that Equation \ref{eq:5.1.1} is homogeneous if \(f\equiv0\) or nonhomogeneous if \(f\not\equiv0\).

    Note: This use of homogeneous is different from how we used it in Section 2.4, but it should not cause confusion.

    Since these definitions are like the corresponding definitions in Section 2.3 for the linear first order equation

    \[\label{eq:5.1.2} y'+p(x)y=f(x),\]

    it is natural to expect similarities between methods of solving Equation \ref{eq:5.1.1} and Equation \ref{eq:5.1.2}. However, solving Equation \ref{eq:5.1.1} is more difficult than solving Equation \ref{eq:5.1.2}. In this case it is necessary to solve the homogeneous equation

    \[\label{eq:5.1.3} y''+p(x)y'+q(x)y=0\]

    before we solve the nonhomogeneous Equation \ref{eq:5.1.1}, and this section is devoted to studying (not solving) Equation \ref{eq:5.1.3}.

    The next theorem gives sufficient conditions for existence and uniqueness of solutions of initial value problems for Equation \ref{eq:5.1.3}. We omit the proof.

    Theorem 5.1.1

    Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(x_0\) be any point in \((a,b),\) and let \(k_0\) and \(k_1\) be arbitrary real numbers\(.\) Then the initial value problem

    \[y''+p(x)y'+q(x)y=0,\ y(x_0)=k_0,\ y'(x_0)=k_1 \nonumber \]

    has a unique solution on \((a,b).\)

    Since \(y\equiv0\) is obviously a solution of Equation \ref{eq:5.1.3}, we call it the trivial solution. Any other solution is nontrivial. Under the assumptions of Theorem 5.1.1, the only solution of the initial value problem

    \[y''+p(x)y'+q(x)y=0,\ y(x_0)=0,\ y'(x_0)=0 \nonumber \]

    on \((a,b)\) is the trivial solution.

    The next three examples illustrate concepts that we’ll develop later in this section. You shouldn’t be concerned with how to find the given solutions of the equations in these examples. This will be explained in later sections.

    Example \(\PageIndex{1}\)

    The coefficients of \(y'\) and \(y\) in

    \[\label{eq:5.1.4} y''-y=0\]

    are the constant functions \(p\equiv0\) and \(q\equiv-1\), which are continuous on \((-\infty,\infty)\). Therefore Theorem 5.1.1 implies that every initial value problem for Equation \ref{eq:5.1.4} has a unique solution on \((-\infty,\infty)\).

    1. Verify that \(y_1=e^x\) and \(y_2=e^{-x}\) are solutions of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).
    2. Verify that if \(c_1\) and \(c_2\) are arbitrary constants, \(y=c_1e^x+c_2e^{-x}\) is a solution of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).
    3. Solve the initial value problem \[\label{eq:5.1.5} y''-y=0,\quad y(0)=1,\quad y'(0)=3.\]

    Solution a

    If \(y_1=e^x\) then \(y_1'=e^x\) and \(y_1''=e^x\), so \[y_1''-y_1=e^x-e^x=0.\nonumber\]

    If \(y_2=e^{-x}\), then \(y_2'=-e^{-x}\) and \(y_2''=e^{-x}\), so \[y_2''-y_2=e^{-x}-e^{-x}=0.\nonumber\]

    Solution b

    If \[\label{eq:5.1.6} y=c_1e^x+c_2e^{-x}\] then \[\label{eq:5.1.7} y'=c_1e^x-c_2e^{-x}\] and \[y''=c_1e^x+c_2e^{-x},\nonumber \]

    so \[\begin{aligned} y''-y&=(c_1e^x+c_2e^{-x})-(c_1e^x+c_2e^{-x})\\ &=c_1(e^x-e^x)+c_2(e^{-x}-e^{-x})=0\end{aligned}\nonumber \] for all \(x\). Therefore \(y=c_1e^x+c_2e^{-x}\) is a solution of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).

    Solution c

    We can solve Equation \ref{eq:5.1.5} by choosing \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.6} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in Equation \ref{eq:5.1.6} and Equation \ref{eq:5.1.7} shows that this is equivalent to

    \[\begin{aligned} c_1+c_2&=1\\ c_1-c_2&=3.\end{aligned}\nonumber \]

    Solving these equations yields \(c_1=2\) and \(c_2=-1\). Therefore \(y=2e^x-e^{-x}\) is the unique solution of Equation \ref{eq:5.1.5} on \((-\infty,\infty)\).
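    If you would like to check such computations with software, here is a minimal sketch (assuming the Python library SymPy is available) that solves the initial value problem Equation \ref{eq:5.1.5} directly and confirms the solution found above.

```python
# Minimal SymPy check of Example 5.1.1(c): solve y'' - y = 0, y(0) = 1, y'(0) = 3.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) - y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 3})

print(sol)  # y(x) = 2*exp(x) - exp(-x)
assert sp.simplify(sol.rhs - (2*sp.exp(x) - sp.exp(-x))) == 0
```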

    Theorem 5.1.1 implies that if \(k_0\) and \(k_1\) are arbitrary real numbers then the initial value problem

    \[\label{eq:5.1.12} P_0(x)y''+P_1(x)y'+P_2(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1\]

    has a unique solution on an interval \((a,b)\) that contains \(x_0\), provided that \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros on \((a,b)\). To see this, we rewrite the differential equation in Equation \ref{eq:5.1.12} as

    \[y''+{P_1(x)\over P_0(x)}y'+{P_2(x)\over P_0(x)}y=0\nonumber \]

    and apply Theorem 5.1.1 with \(p=P_1/P_0\) and \(q=P_2/P_0\).

    Example \(\PageIndex{2}\)

    The equation

    \[\label{eq:5.1.13} x^2y''+xy'-4y=0\]

    has the form of the differential equation in Equation \ref{eq:5.1.12}, with \(P_0(x)=x^2\), \(P_1(x)=x\), and \(P_2(x)=-4\), which are all continuous on \((-\infty,\infty)\). However, since \(P_0(0)=0\) we must consider solutions of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\). Since \(P_0\) has no zeros on these intervals, Theorem 5.1.1 implies that the initial value problem

    \[x^2y''+xy'-4y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1\nonumber \]

    has a unique solution on \((0,\infty)\) if \(x_0>0\), or on \((-\infty,0)\) if \(x_0<0\).

    1. Verify that \(y_1=x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,\infty)\) and \(y_2=1/x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\).
    2. Verify that if \(c_1\) and \(c_2\) are any constants then \(y=c_1x^2+c_2/x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\).
    3. Solve the initial value problem \[\label{eq:5.1.14} x^2y''+xy'-4y=0,\quad y(1)=2,\quad y'(1)=0.\]
    4. Solve the initial value problem \[\label{eq:5.1.15} x^2y''+xy'-4y=0,\quad y(-1)=2,\quad y'(-1)=0.\]

    Solution a

    If \(y_1=x^2\) then \(y_1'=2x\) and \(y_1''=2\), so \[x^2y_1''+xy_1'-4y_1=x^2(2)+x(2x)-4x^2=0\nonumber \] for \(x\) in \((-\infty,\infty)\). If \(y_2=1/x^2\), then \(y_2'=-2/x^3\) and \(y_2''=6/x^4\), so \[x^2y_2''+xy_2'-4y_2=x^2\left(6\over x^4\right)-x\left(2\over x^3\right)-{4\over x^2}=0\nonumber \] for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    Solution b

    If \[\label{eq:5.1.16} y=c_1x^2+{c_2\over x^2}\] then \[\label{eq:5.1.17} y'=2c_1x-{2c_2\over x^3}\] and \[y''=2c_1+{6c_2\over x^4},\nonumber \] so \[\begin{aligned} x^{2}y''+xy'-4y&=x^{2}\left(2c_{1}+\frac{6c_{2}}{x^{4}} \right)+x\left(2c_{1}x-\frac{2c_{2}}{x^{3}} \right)-4\left(c_{1}x^{2}+\frac{c_{2}}{x^{2}} \right) \\ &=c_{1}(2x^{2}+2x^{2}-4x^{2})+c_{2}\left(\frac{6}{x^{2}}-\frac{2}{x^{2}}-\frac{4}{x^{2}} \right) \\ &=c_{1}\cdot 0+c_{2}\cdot 0 = 0 \end{aligned}\nonumber \] for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    Solution c

    To solve Equation \ref{eq:5.1.14}, we choose \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.16} so that \(y(1)=2\) and \(y'(1)=0\). Setting \(x=1\) in Equation \ref{eq:5.1.16} and Equation \ref{eq:5.1.17} shows that this is equivalent to

    \[\begin{aligned} \phantom{2}c_1+\phantom{2}c_2&=2\\ 2c_1-2c_2&=0.\end{aligned}\nonumber \]

    Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of Equation \ref{eq:5.1.14} on \((0,\infty)\).

    Solution d

    We can solve Equation \ref{eq:5.1.15} by choosing \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.16} so that \(y(-1)=2\) and \(y'(-1)=0\). Setting \(x=-1\) in Equation \ref{eq:5.1.16} and Equation \ref{eq:5.1.17} shows that this is equivalent to

    \[\begin{aligned} \phantom{-2}c_1+\phantom{2}c_2&=2\\ -2c_1+2c_2&=0.\end{aligned}\nonumber \]

    Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of Equation \ref{eq:5.1.15} on \((-\infty,0)\).

    Caution

    Although the formulas for the solutions of Equation \ref{eq:5.1.14} and Equation \ref{eq:5.1.15} are both \(y=x^2+1/x^2\), you should not conclude that these two initial value problems have the same solution. Remember that a solution of an initial value problem is defined on an interval that contains the initial point; therefore, the solution of Equation \ref{eq:5.1.14} is \(y=x^2+1/x^2\) on the interval \((0,\infty)\), which contains the initial point \(x_0=1\), while the solution of Equation \ref{eq:5.1.15} is \(y=x^2+1/x^2\) on the interval \((-\infty,0)\), which contains the initial point \(x_0=-1\).
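    A similar sketch (again assuming SymPy) confirms that \(y=x^2+1/x^2\) satisfies the differential equation for \(x\ne0\) and meets the initial conditions of both Equation \ref{eq:5.1.14} and Equation \ref{eq:5.1.15}; the distinction between the two solutions is the interval on which each is considered, as noted in the caution above.

```python
# SymPy check for Examples 5.1.2(c)-(d): y = x^2 + 1/x^2 satisfies the equation
# for x != 0 and meets the initial conditions at x0 = 1 and at x0 = -1.
import sympy as sp

x = sp.symbols('x', nonzero=True)
y = x**2 + 1/x**2

residual = x**2*y.diff(x, 2) + x*y.diff(x) - 4*y
print(sp.simplify(residual))                     # 0

for x0 in (1, -1):
    print(y.subs(x, x0), y.diff(x).subs(x, x0))  # 2 0  at both initial points
```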

    Definition 5.1.2

    If \(y_1\) and \(y_2\) are defined on an interval \((a,b)\) and \(c_1\) and \(c_2\) are constants, then we call

    \[y=c_1y_1+c_2y_2\nonumber \]

    a linear combination of \(y_1\) and \(y_2\).

    Example \(\PageIndex{3}\)

    \(y=c_1e^x+c_2e^{-x}\) is a linear combination of \(y_1=e^x\) and \(y_2=e^{-x}\).

    \(y=c_1x^2+c_2x^{-2}\) is a linear combination of \(y_1=x^2\) and \(y_2=x^{-2}\).

    The next theorem states a fact that we’ve seen verified in Examples 5.1.1 and 5.1.2.

    Theorem 5.1.2

    If \(y_1\) and \(y_2\) are solutions of the homogeneous equation

    \[\label{eq:5.1.18} y''+p(x)y'+q(x)y=0\]

    on \((a,b),\) then any linear combination

    \[\label{eq:5.1.19} y=c_1y_1+c_2y_2\]

    of \(y_1\) and \(y_2\) is also a solution of \(\eqref{eq:5.1.18}\) on \((a,b).\)

    Proof

    If \[y=c_1y_1+c_2y_2\nonumber \] then \[y'=c_1y_1'+c_2y_2'\quad\text{ and} \quad y''=c_1y_1''+c_2y_2''.\nonumber \]

    Therefore

    \[\begin{aligned} y''+p(x)y'+q(x)y&=(c_1y_1''+c_2y_2'')+p(x)(c_1y_1'+c_2y_2') +q(x)(c_1y_1+c_2y_2)\\ &=c_1\left(y_1''+p(x)y_1'+q(x)y_1\right) +c_2\left(y_2''+p(x)y_2'+q(x)y_2\right)\\ &=c_1\cdot0+c_2\cdot0=0,\end{aligned}\nonumber \]

    since \(y_1\) and \(y_2\) are solutions of Equation \ref{eq:5.1.18}.
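    The computation in this proof is easy to mirror symbolically. As a minimal sketch (again assuming SymPy), the combination \(c_1e^x+c_2e^{-x}\) from Example 5.1.1, with \(c_1\) and \(c_2\) left as symbolic constants, satisfies \(y''-y=0\) identically.

```python
# SymPy sketch of Theorem 5.1.2 for y'' - y = 0: any linear combination of the
# solutions e^x and e^(-x), with symbolic constants c1 and c2, is again a solution.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = c1*sp.exp(x) + c2*sp.exp(-x)

print(sp.simplify(y.diff(x, 2) - y))  # 0, for every choice of c1 and c2
```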

    The General Solution of a Homogeneous Linear Second Order Equation

    You should note that the equations in Examples 5.1.1 and 5.1.2 are both second order homogeneous equations, and each had two solutions; this is not a coincidence, and we will see why later in this chapter. For now, though, we need to discuss the sense in which these solutions are "independent" of one another.

    Definition 5.1.3

    We say that two functions \(y_1\) and \(y_2\) defined on an interval \((a,b)\) are linearly independent on \((a,b)\) if neither is a constant multiple of the other on \((a,b)\). In particular, this means that neither can be the trivial solution of Equation \ref{eq:5.1.18}, since, for example, if \(y_1\equiv0\) we could write \(y_1=0y_2\). We’ll also say that the set \(\{y_1,y_2\}\) is linearly independent on \((a,b)\).

    Example \(\PageIndex{4}\)

    Since \(e^x/e^{-x}=e^{2x}\) is nonconstant, \(y_1=e^x\) and \(y_2=e^{-x}\) from Example 5.1.1 are linearly independent on \((-\infty,\infty)\).

    Since \(x^2/x^{-2}=x^4\) is nonconstant, \(y_1=x^2\) and \(y_2=x^{-2}\) from Example 5.1.2 are linearly independent on \((-\infty,0)\) and \((0,\infty)\).

    Definition 5.1.4

    We say that any set of two linearly independent solutions \(\{y_1,y_2\}\) of \(\eqref{eq:5.1.18}\) on \((a,b)\) is a fundamental set of solutions of Equation \ref{eq:5.1.18} on \((a,b)\).

    Example \(\PageIndex{5}\)

    \(\{y_1=e^x,\ y_2=e^{-x}\}\) is a fundamental set of solutions of the differential equation in Example 5.1.1.

    \(\{y_1=x^2,\ y_2=x^{-2}\}\) is a fundamental set of solutions of the differential equation in Example 5.1.2.

    Definition 5.1.5

    The set of all linear combinations \(y=c_1y_1+c_2y_2\) of a fundamental set of solutions \(\{y_1,y_2\}\) of Equation \ref{eq:5.1.18} is called the general solution of \(\eqref{eq:5.1.18}\) on \((a,b)\).

    Example \(\PageIndex{6}\)

    \(y=c_1e^x+c_2e^{-x}\) is the general solution of the differential equation in Example 5.1.1.

    \(y=c_1x^2+c_2x^{-2}\) is the general solution of the differential equation in Example 5.1.2.

    Let's take a moment to discuss why a fundamental set of solutions is so important; along the way, we'll also develop an important idea that we will use later in the course.

    Suppose \(p\) and \(q\) are continuous on \((a,b)\) and \(\{y_1,y_2\}\) is a fundamental set of solutions of the homogeneous equation

    \[\label{eq:5.1.20} y''+p(x)y'+q(x)y=0\]

    on \((a,b)\).

    Let \(x_0\) be an arbitrary point in \((a,b)\), and let \(y\) be an arbitrary solution of Equation \ref{eq:5.1.20} on \((a,b)\). We would like to express \(y\) as a linear combination \(y=c_1y_1+c_2y_2\); that is, we would like to choose \(c_1\) and \(c_2\) so that \(c_1y_1+c_2y_2\) is the unique solution of the initial value problem

    \[\label{eq:5.1.21} y''+p(x)y'+q(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1;\]

    where \(k_0\) and \(k_1\) are the numbers obtained by evaluating \(y\) and \(y'\) at \(x_0\). Moreover, \(k_0\) and \(k_1\) can be any real numbers, since Theorem 5.1.1 implies that Equation \ref{eq:5.1.21} has a solution no matter how \(k_0\) and \(k_1\) are chosen. Therefore \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.20} on \((a,b)\) if and only if the solution of every initial value problem Equation \ref{eq:5.1.21} can be written as \(y=c_1y_1+c_2y_2\). This is equivalent to requiring that the system

    \[\label{eq:5.1.22} \begin{array}{rcl} c_1y_1(x_0)+c_2y_2(x_0)&=&k_0\\ c_1y_1'(x_0)+c_2y_2'(x_0)&=&k_1 \end{array}\]

    has a solution \((c_1,c_2)\) for every choice of \((k_0,k_1)\). Let’s try to solve Equation \ref{eq:5.1.22}.

    Multiplying the first equation in Equation \ref{eq:5.1.22} by \(y_2'(x_0)\) and the second by \(y_2(x_0)\) yields

    \[\begin{aligned} c_1y_1(x_0)y_2'(x_0)+c_2y_2(x_0)y_2'(x_0)&= y_2'(x_0)k_0\\ c_1y_1'(x_0)y_2(x_0)+c_2y_2'(x_0)y_2(x_0)&= y_2(x_0)k_1,\end{aligned}\]

    and subtracting the second equation here from the first yields

    \[\label{eq:5.1.23} \left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_1= y_2'(x_0)k_0-y_2(x_0)k_1.\]

    Multiplying the first equation in Equation \ref{eq:5.1.22} by \(y_1'(x_0)\) and the second by \(y_1(x_0)\) yields

    \[\begin{aligned} c_1y_1(x_0)y_1'(x_0)+c_2y_2(x_0)y_1'(x_0)&= y_1'(x_0)k_0\\ c_1y_1'(x_0)y_1(x_0)+c_2y_2'(x_0)y_1(x_0)&= y_1(x_0)k_1,\end{aligned}\]

    and subtracting the first equation here from the second yields

    \[\label{eq:5.1.24} \left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_2= y_1(x_0)k_1-y_1'(x_0)k_0.\]

    If

    \[\label{eq:5.1.25} y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\ne0\]

    we can divide Equation \ref{eq:5.1.23} and Equation \ref{eq:5.1.24} through by the quantity on the left to obtain

    \[\label{eq:5.1.26} \begin{array}{rcl} c_1&=&{y_2'(x_0)k_0-y_2(x_0)k_1\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}\\[2ex] c_2&=&{y_1(x_0)k_1-y_1'(x_0)k_0\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}, \end{array}\]

    no matter how \(k_0\) and \(k_1\) are chosen. This motivates us to consider conditions on \(y_1\) and \(y_2\) that imply Equation \ref{eq:5.1.25}.

    Thus we can solve the system Equation \ref{eq:5.1.22} for every choice of \(k_0\) and \(k_1\) only when \(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\ne0\), and we will see below that this happens exactly when \(y_1\) and \(y_2\) are linearly independent.
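    The formulas in Equation \ref{eq:5.1.26} can also be recovered by letting a computer algebra system solve the system Equation \ref{eq:5.1.22}. In the sketch below (assuming SymPy), the symbols y1, y1p, y2, y2p stand for the numbers \(y_1(x_0)\), \(y_1'(x_0)\), \(y_2(x_0)\), \(y_2'(x_0)\).

```python
# SymPy sketch: solving the linear system (5.1.22) for c1 and c2 reproduces the
# formulas in (5.1.26); the common denominator is the quantity in (5.1.25).
import sympy as sp

c1, c2, k0, k1 = sp.symbols('c1 c2 k0 k1')
y1, y1p, y2, y2p = sp.symbols('y1 y1p y2 y2p')  # y1(x0), y1'(x0), y2(x0), y2'(x0)

sol = sp.solve([sp.Eq(c1*y1 + c2*y2, k0),
                sp.Eq(c1*y1p + c2*y2p, k1)], [c1, c2])

W = y1*y2p - y1p*y2                               # the quantity in (5.1.25)
print(sp.simplify(sol[c1] - (y2p*k0 - y2*k1)/W))  # 0, so c1 matches (5.1.26)
print(sp.simplify(sol[c2] - (y1*k1 - y1p*k0)/W))  # 0, so c2 matches (5.1.26)
```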

    The Wronskian

    The expression \(y_1y_2'-y_1'y_2\) is very important in this class and we define it formally here.

    Definition 5.1.6

    The function \(W=y_1y_2'-y_1'y_2\) is called the Wronskian of \(\{y_1,y_2\}\).

    The Wronskian is a determinant and is written

    \[W=\left| \begin{array}{cc} y_1 & y_2 \\ y'_1 & y'_2 \end{array} \right|.\nonumber \]

    Theorem 5.1.3

    Suppose \(p\) and \(q\) are continuous on \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

    \[\label{eq:5.1.27} y''+p(x)y'+q(x)y=0\]

    on \((a,b)\).

    Then

    a. \(y_1\) and \(y_2\) are a set of linearly independent solutions of Equation \ref{eq:5.1.27} if and only if \(W\ne0\) on \((a,b).\)

    b. \(y_1\) and \(y_2\) are a set of linearly dependent solutions of Equation \ref{eq:5.1.27} if and only if \(W\equiv0\) on \((a,b).\)

    So, \(y_1\) and \(y_2\) are a fundamental set of solutions of Equation \ref{eq:5.1.27} if and only if \(W\ne0\) on \((a,b)\), and in that case \(y=c_1y_1+c_2y_2\) is the general solution of Equation \ref{eq:5.1.27}.

    Example \(\PageIndex{7}\)

    Use Theorem 5.1.3 to verify that the solutions from Examples 5.1.1 and 5.1.2 form a fundamental set of solutions to the given differential equations.

    1. \(y''-y=0;\quad y_1=e^x,\; y_2=e^{-x}\)
    2. \(x^2y''+xy'-4y=0;\quad y_1=x^2,\; y_2=1/x^2\)

    Solution a

    \[W(x)=\left| \begin{array}{rr} e^x & e^{-x} \\ e^x & -e^{-x} \end{array} \right|=e^x(-e^{-x})-e^xe^{-x}=-2\ne0\nonumber \]

    for all \(x\).

    Solution b

    \[W(x)=\left| \begin{array}{cc} x^2 & 1/x^2 \\ 2x & -2/x^3 \end{array} \right|=x^2\left(-{2\over x^3}\right)-2x\left(1\over x^2\right)=-{4\over x}\ne0\nonumber \]

    for all \(x\) in \((-\infty,0)\) or \((0,\infty)\).

    Example \(\PageIndex{8}\)

    Use the Wronskian to show \(y_1=x^3\) and \(y_2=5x^3\) are linearly dependent.

    Solution

    \[W(x)=\left| \begin{array}{cc} x^3 & 5x^3 \\ 3x^2 & 15x^2 \end{array} \right|=x^3\left(15x^2\right)-3x^2\left(5x^3\right)=15x^5-15x^5\equiv0\nonumber \]

    for all \(x\), so \(y_1\) and \(y_2\) are linearly dependent by Theorem 5.1.3.
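    The determinants in Examples 5.1.7 and 5.1.8 can be reproduced with SymPy's built-in wronskian function; a minimal sketch:

```python
# SymPy's wronskian() reproduces the determinants of Examples 5.1.7 and 5.1.8.
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')

print(sp.simplify(wronskian([sp.exp(x), sp.exp(-x)], x)))  # -2    (never zero)
print(sp.simplify(wronskian([x**2, 1/x**2], x)))           # -4/x  (nonzero for x != 0)
print(sp.simplify(wronskian([x**3, 5*x**3], x)))           # 0     (identically zero)
```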

    The natural question here is: why do we need a Wronskian to determine linear independence if we can just check to see if one solution is a multiple of another solution? While that is true, it won't be so easy to see linear independence if we move to higher order equations, and then it will become indispensable. Additionally, this idea plays a significant role in solving nonhomogeneous equations later in the chapter.
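    As a small preview (again a SymPy sketch), the functions \(1\), \(x\), and \(x^2\) all satisfy the third order equation \(y'''=0\), and their \(3\times3\) Wronskian plays exactly the same role.

```python
# For the third order equation y''' = 0, the solutions 1, x, x^2 have Wronskian
# det([[1, x, x^2], [0, 1, 2*x], [0, 0, 2]]) = 2, which is never zero, so the
# three solutions are linearly independent.
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')
print(wronskian([sp.S.One, x, x**2], x))  # 2
```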


    This page titled 5.1: Homogeneous Linear Equations is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.