
2.1: Introduction


    IN THE LAST SECTION WE SAW how second order differential equations naturally appear in the derivations for simple oscillating systems. In this section we will look at more general second order linear differential equations.

Second order differential equations are typically harder to solve than first order equations. In most cases students are exposed only to second order linear differential equations. A general form for a second order linear differential equation is given by

\[a(x) y^{\prime \prime}(x)+b(x) y^{\prime}(x)+c(x) y(x)=f(x) \label{2.1} \]

One can rewrite this equation using operator terminology. Namely, one first defines the differential operator \(L=a(x) D^{2}+b(x) D+c(x)\), where \(D=\dfrac{d}{d x}\). Then, Equation \ref{2.1} becomes

\[L y=f \label{2.2} \]
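To make the operator notation concrete, here is a minimal sympy sketch (an illustration, not part of the original text) that applies \(L=a(x) D^{2}+b(x) D+c(x)\) to a function; the coefficient choices \(a=1\), \(b=0\), \(c=1\) are purely illustrative.

```python
import sympy as sp

x = sp.symbols('x')

# Illustrative coefficients: a(x) = 1, b(x) = 0, c(x) = 1, so L = D^2 + 1.
def L(u):
    """Apply the operator L = D^2 + 1 to a sympy expression u."""
    return sp.diff(u, x, 2) + u

print(L(sp.sin(x)))        # 0, so sin(x) solves the homogeneous problem Ly = 0
print(sp.simplify(L(x)))   # x, so y(x) = x solves the nonhomogeneous Ly = x
```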

The solutions of linear differential equations are found by making use of the linearity of \(L\). Namely, we consider the vector space\(^{1}\) consisting of real-valued functions over some domain. Let \(f\) and \(g\) be vectors in this function space. \(L\) is a linear operator if for two vectors \(f\) and \(g\) and scalar \(a\), we have that

a. \(L(f+g)=L f+L g\)

b. \(L(a f)=a L f\).

\(^{1}\) We assume that the reader has been introduced to concepts in linear algebra. Later in the text we will recall the definition of a vector space and see that linear algebra is in the background of the study of many concepts in the solution of differential equations.
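Properties (a) and (b) are easy to verify symbolically. The following sketch (again illustrative, not from the text) checks them for the sample operator \(L=D^{2}+1\) with arbitrary functions \(f\) and \(g\):

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

def L(u):
    # Sample linear operator L = D^2 + 1; any operator built from
    # derivatives with coefficient functions satisfies the same identities.
    return sp.diff(u, x, 2) + u

print(sp.simplify(L(f + g) - (L(f) + L(g))))   # 0: property (a)
print(sp.simplify(L(a * f) - a * L(f)))        # 0: property (b)
```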

One typically solves Equation \ref{2.1} by finding the general solution of the homogeneous problem,

    \(L y_{h}=0\)

    and a particular solution of the nonhomogeneous problem,

    \(L y_{p}=f .\)

Then, the general solution of Equation \ref{2.1} is simply given as \(y=y_{h}+y_{p}\). This is true because of the linearity of \(L\). Namely,

    \[ \begin{aligned} L y &=L\left(y_{h}+y_{p}\right) \\ &=L y_{h}+L y_{p} \\ &=0+f=f . \end{aligned} \label{2.3} \]
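As a concrete (hypothetical) illustration of \(y=y_{h}+y_{p}\), take \(y^{\prime \prime}+y=x\): the homogeneous solutions are built from \(\cos x\) and \(\sin x\), and \(y_{p}=x\) is a particular solution. A quick symbolic check:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

def L(u):
    return sp.diff(u, x, 2) + u   # Ly = y'' + y

yh = c1 * sp.cos(x) + c2 * sp.sin(x)   # general homogeneous solution
yp = x                                 # a particular solution of y'' + y = x

print(sp.simplify(L(yh)))        # 0
print(sp.simplify(L(yp)))        # x
print(sp.simplify(L(yh + yp)))   # x: y = yh + yp solves Ly = f
```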

There are methods for finding a particular solution of a nonhomogeneous differential equation. These methods include pure guessing, the Method of Undetermined Coefficients, the Method of Variation of Parameters, and Green’s functions. We will review these methods later in the chapter.

Determining solutions to the homogeneous problem, \(L y_{h}=0\), is not always easy. However, many now-famous mathematicians and physicists have studied a variety of second order linear equations and have saved us the trouble of finding solutions to the differential equations that often appear in applications. We will encounter many of these in the following chapters. We will first begin with some simple homogeneous linear differential equations.

    Linearity is also useful in producing the general solution of a homogeneous linear differential equation. If \(y_{1}(x)\) and \(y_{2}(x)\) are solutions of the homogeneous equation, then the linear combination \(y(x)=c_{1} y_{1}(x)+c_{2} y_{2}(x)\) is also a solution of the homogeneous equation. This is easily proven.

Let \(L y_{1}=0\) and \(L y_{2}=0\). We consider \(y=c_{1} y_{1}+c_{2} y_{2}\). Then, since \(L\) is a linear operator,

    \[ \begin{aligned} L y &=L\left(c_{1} y_{1}+c_{2} y_{2}\right) \\ &=c_{1} L y_{1}+c_{2} L y_{2} \\ &=0 \end{aligned} \label{2.4} \]

    Therefore, \(y\) is a solution.
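The same superposition computation can be mirrored in sympy; here with the illustrative homogeneous equation \(y^{\prime \prime}-y=0\), which has solutions \(e^{x}\) and \(e^{-x}\):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

y1 = sp.exp(x)    # solves y'' - y = 0
y2 = sp.exp(-x)   # solves y'' - y = 0
y = c1 * y1 + c2 * y2

# Applying L = D^2 - 1 to the linear combination gives zero identically.
print(sp.simplify(sp.diff(y, x, 2) - y))   # 0
```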

    In fact, if \(y_{1}(x)\) and \(y_{2}(x)\) are linearly independent, then \(y=c_{1} y_{1}+c_{2} y_{2}\) is the general solution of the homogeneous problem. A set of functions \(\left\{y_{i}(x)\right\}_{i=1}^{n}\) is a linearly independent set if and only if

    \(c_{1} y_{1}(x)+\ldots+c_{n} y_{n}(x)=0\)

implies \(c_{i}=0\), for \(i=1, \ldots, n\). Otherwise, they are said to be linearly dependent. Note that for \(n=2\) the condition reads \(c_{1} y_{1}(x)+c_{2} y_{2}(x)=0\). If \(y_{1}\) and \(y_{2}\) are linearly dependent, then the coefficients are not both zero and, for \(c_{2} \neq 0\), \(y_{2}(x)=-\dfrac{c_{1}}{c_{2}} y_{1}(x)\) is a constant multiple of \(y_{1}(x)\). We see this in the next example.

    Example \(\PageIndex{1}\)

    Show that \(y_{1}(x)=x\) and \(y_{2}(x)=4 x\) are linearly dependent.

We set \(c_{1} y_{1}(x)+c_{2} y_{2}(x)=0\) and show that there are nonzero constants \(c_{1}\) and \(c_{2}\) satisfying this equation. Namely, let

    \(c_{1} x+c_{2}(4 x)=0\)

Then, this is true for \(c_{1}=-4 c_{2}\) with any nonzero \(c_{2}\). For example, let \(c_{2}=1\); then \(c_{1}=-4\).
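The same conclusion follows from a short symbolic computation (a sketch of the example above): requiring \(c_{1} x+c_{2}(4 x)=0\) for all \(x\) forces the coefficient of \(x\) to vanish.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

expr = c1 * x + c2 * (4 * x)
coeff_of_x = expr.coeff(x)        # c1 + 4*c2 must vanish for all x

# A nontrivial solution exists, so {x, 4x} is linearly dependent.
print(sp.solve(coeff_of_x, c1))   # [-4*c2]
```

Next we consider two functions that are not constant multiples of each other.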

    Example \(\PageIndex{2}\)

    Show that \(y_{1}(x)=x\) and \(y_{2}(x)=x^{2}\) are linearly independent.

    We set \(c_{1} y_{1}(x)+c_{2} y_{2}(x)=0\) and show that it can only be true if \(c_{1}=0\) and \(c_{2}=0 .\) Let

    \(c_{1} x+c_{2} x^{2}=0\)

for all \(x\). Differentiating the first equation once, we obtain two equations that must hold for all \(x\):

    \[ \begin{array}{ccc} c_{1} x+c_{2} x^{2} & = & 0 \\ c_{1}+2 c_{2} x & = & 0 \end{array} \label{2.5} \]

Setting \(x=0\) in the second equation gives \(c_{1}=0\). Setting \(x=1\) in the first equation then gives \(c_{1}+c_{2}=0\). Thus, \(c_{2}=0\).

Another approach is to solve for the constants directly. Multiplying the second equation by \(x\) and subtracting the first yields \(c_{2} x^{2}=0\), so \(c_{2}=0\). Substituting this result into the second equation, we find \(c_{1}=0\).
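For comparison, solving the coefficient-matching system symbolically (a sketch of the computation above) returns only the trivial solution:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

expr = c1 * x + c2 * x**2

# c1*x + c2*x^2 = 0 for all x forces both polynomial coefficients to vanish.
system = [expr.coeff(x, 1), expr.coeff(x, 2)]   # [c1, c2]
print(sp.solve(system, [c1, c2]))               # {c1: 0, c2: 0}
```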

    For second order differential equations we seek two linearly independent functions, \(y_{1}(x)\) and \(y_{2}(x) .\) As in the last example, we set \(c_{1} y_{1}(x)+\) \(c_{2} y_{2}(x)=0\) and show that it can only be true if \(c_{1}=0\) and \(c_{2}=0 .\) Differentiating, we have

    \[ \begin{aligned} &c_{1} y_{1}(x)+c_{2} y_{2}(x)=0 \\ &c_{1} y_{1}^{\prime}(x)+c_{2} y_{2}^{\prime}(x)=0 \end{aligned} \label{2.6} \]

    These must hold for all \(x\) in the domain of the solutions.

Now we solve for the constants. Multiplying the first equation by \(y_{2}^{\prime}(x)\) and the second equation by \(y_{2}(x)\), we have

    \[ \begin{aligned} &c_{1} y_{1}(x) y_{2}^{\prime}(x)+c_{2} y_{2}(x) y_{2}^{\prime}(x)=0 \\ &c_{1} y_{1}^{\prime}(x) y_{2}(x)+c_{2} y_{2}^{\prime}(x) y_{2}(x)=0 \end{aligned} \label{2.7} \]

    Subtracting gives

    \(\left[y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x)\right] c_{1}=0\)

Therefore, either \(c_{1}=0\) or \(y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x)=0\). If the latter expression does not vanish, then \(c_{1}=0\) and, from the first equation, \(c_{2}=0\). This gives a condition under which \(y_{1}(x)\) and \(y_{2}(x)\) are linearly independent:

\[y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x) \neq 0 \label{2.8} \]

    We define this quantity as the Wronskian of \(y_{1}(x)\) and \(y_{2}(x)\).

    Linear independence of the solutions of a differential equation can be established by looking at the Wronskian of the solutions. For a second order differential equation the Wronskian is defined as

    \(W\left(y_{1}, y_{2}\right)=y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x) .\)

    The Wronskian can be written as a determinant:

    \(W\left(y_{1}, y_{2}\right)=\left|\begin{array}{ll} y_{1}(x) & y_{2}(x) \\ y_{1}^{\prime}(x) & y_{2}^{\prime}(x) \end{array}\right|=y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x)\)

    Thus, the definition of a Wronskian can be generalized to a set of \(n\) functions \(\left\{y_{i}(x)\right\}_{i=1}^{n}\) using an \(n \times n\) determinant.
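The determinant form is straightforward to evaluate with a computer algebra system. The sketch below uses a helper, wronskian2, defined here purely for illustration:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian2(y1, y2):
    """W(y1, y2) = y1*y2' - y1'*y2, computed as a 2x2 determinant."""
    M = sp.Matrix([[y1, y2],
                   [sp.diff(y1, x), sp.diff(y2, x)]])
    return sp.simplify(M.det())

print(wronskian2(sp.sin(x), sp.cos(x)))   # -1: nonzero, so independent
print(wronskian2(x, 4 * x))               # 0: dependent, as in Example 1
```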

    Example \(\PageIndex{3}\)

Determine if the set of functions \(\left\{1, x, x^{2}\right\}\) is linearly independent.

    We compute the Wronskian.

\[ \begin{aligned} W\left(y_{1}, y_{2}, y_{3}\right) &=\left|\begin{array}{ccc} y_{1}(x) & y_{2}(x) & y_{3}(x) \\ y_{1}^{\prime}(x) & y_{2}^{\prime}(x) & y_{3}^{\prime}(x) \\ y_{1}^{\prime \prime}(x) & y_{2}^{\prime \prime}(x) & y_{3}^{\prime \prime}(x) \end{array}\right| \\ &=\left|\begin{array}{ccc} 1 & x & x^{2} \\ 0 & 1 & 2 x \\ 0 & 0 & 2 \end{array}\right| \\ &=2 \end{aligned} \label{2.9} \]

Since \(W\left(1, x, x^{2}\right)=2 \neq 0\), the set \(\left\{1, x, x^{2}\right\}\) is linearly independent.
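The same determinant computation extends to the \(3 \times 3\) case and reproduces this result (again a sketch):

```python
import sympy as sp

x = sp.symbols('x')
fs = [sp.S(1), x, x**2]

# Row k holds the k-th derivatives of the three functions.
M = sp.Matrix([[sp.diff(f, x, k) for f in fs] for k in range(3)])
print(M.det())   # 2: nonzero, so {1, x, x^2} is linearly independent
```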

