
4.3: Basic Theory of Homogeneous Linear Systems


    In this section we consider homogeneous linear systems \({\bf y}' = A(t){\bf y}\), where \(A=A(t)\) is a continuous \(n\times n\) matrix function on an interval \((a,b)\). The theory of linear homogeneous systems has much in common with the theory of linear homogeneous scalar equations, which we considered in Sections 2.1 and 3.1.

    Whenever we refer to solutions of \({\bf y}'=A(t){\bf y}\) we'll mean solutions on \((a,b)\). Since \({\bf y}\equiv{\bf 0}\) is obviously a solution of \({\bf y}'=A(t){\bf y}\), we call it the \( \textcolor{blue}{\mbox{trivial}} \) solution. Any other solution is \( \textcolor{blue}{\mbox{nontrivial}} \).

    If \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\) are vector functions defined on an interval \((a,b)\) and \(c_1\), \(c_2\), \(\dots\), \(c_n\) are constants, then

    \begin{equation} \label{eq:4.3.1}
    {\bf y}=c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n
    \end{equation}

is a \( \textcolor{blue}{\mbox{linear combination of }} \) \({\bf y}_1\), \({\bf y}_2\), \(\ldots\), \({\bf y}_n\). It's easy to show that if \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\) are solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\), then so is any linear combination of \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\) (Exercise \((4.3E.1)\)); a symbolic check of this superposition property is sketched below. We say that \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is a \( \textcolor{blue}{\mbox{fundamental set of solutions of}} \) \({\bf y}'=A(t){\bf y}\) on \((a,b)\) if every solution of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) can be written as a linear combination of \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\), as in \eqref{eq:4.3.1}. In this case we say that \eqref{eq:4.3.1} is the \( \textcolor{blue}{\mbox{general solution of}} \) \({\bf y}'=A(t){\bf y}\) on \((a,b)\).
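
The following minimal sketch (Python with sympy; the \(2\times2\) system here is purely illustrative and not taken from this section) verifies that two solutions \({\bf y}_1\), \({\bf y}_2\) and an arbitrary linear combination \(c_1{\bf y}_1+c_2{\bf y}_2\) all satisfy \({\bf y}'=A{\bf y}\):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# Illustrative 2x2 system (not from the text): y' = A y with
# A = [[0, 1], [0, 0]]; y1 = (1, 0) and y2 = (t, 1) are solutions.
A = sp.Matrix([[0, 1], [0, 0]])
y1 = sp.Matrix([1, 0])
y2 = sp.Matrix([t, 1])

# Check y1, y2, and an arbitrary combination c1*y1 + c2*y2 against y' = A y.
for y in (y1, y2, c1*y1 + c2*y2):
    print(sp.simplify(y.diff(t) - A*y).T)  # Matrix([[0, 0]]) each time
```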

    It can be shown that if \(A\) is continuous on \((a,b)\) then \({\bf y}'=A(t){\bf y}\) has infinitely many fundamental sets of solutions on \((a,b)\) (Exercises \((4.3E.15)\) and \((4.3E.16)\)). The next definition will help to characterize fundamental sets of solutions of \({\bf y}'=A(t){\bf y}\).

    We say that a set \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) of \(n\)-vector functions is \( \textcolor{blue}{\mbox{linearly independent}} \) on \((a,b)\) if the only constants \(c_1\), \(c_2\), \(\dots\), \(c_n\) such that

    \begin{equation} \label{eq:4.3.2}
    c_1{\bf y}_1(t)+c_2{\bf y}_2(t)+\cdots+c_n{\bf y}_n(t)=0,\quad a<t<b,
    \end{equation}

    are \(c_1=c_2=\cdots=c_n=0\). If \eqref{eq:4.3.2} holds for some set of constants \(c_1\), \(c_2\), \(\dots\), \(c_n\) that are not all zero, then \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is \( \textcolor{blue}{\mbox{linearly dependent}} \) on \((a,b)\).

    The next theorem is analogous to Theorems \((2.1.3)\) and \((3.1.2)\).

    Theorem \(\PageIndex{1}\)

    Suppose the \(n\times n\) matrix \(A=A(t)\) is continuous on \((a,b)\). Then a set \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) of \(n\) solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) is a fundamental set if and only if it's linearly independent on \((a,b)\).


    Example \(\PageIndex{1}\)

    Show that the vector functions

    \begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{c} e^t \\ 0 \\ e^{-t} \end{array} \right], \quad {\bf y}_2 = \left[ \begin{array}{c} 0 \\ e^{3t} \\ 1 \end{array} \right], \quad \mbox{and} \quad {\bf y}_3 = \left[ \begin{array}{c} e^{2t} \\ e^{3t} \\ 0 \end{array} \right]
    \end{eqnarray*}

    are linearly independent on every interval \((a,b)\).

    Answer

    Suppose

    \begin{eqnarray*}
c_1 \left[ \begin{array}{c} e^t \\ 0 \\ e^{-t} \end{array} \right] + c_2 \left[ \begin{array}{c} 0 \\ e^{3t} \\ 1 \end{array} \right] + c_3 \left[ \begin{array}{c} e^{2t} \\ e^{3t} \\ 0 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right], \quad a<t<b.
    \end{eqnarray*}

    We must show that \(c_1=c_2=c_3=0\). Rewriting this equation in matrix form yields

    \begin{eqnarray*}
\left[ \begin{array}{ccc} e^t & 0 & e^{2t} \\ 0 & e^{3t} & e^{3t} \\ e^{-t} & 1 & 0 \end{array} \right] \left[ \begin{array}{c} c_1 \\ c_2 \\ c_3 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right], \quad a<t<b.
    \end{eqnarray*}

    Expanding the determinant of this system in cofactors of the entries of the first row yields

    \begin{eqnarray*}
    \left|\begin{array}{ccc}e^t&0&e^{2t}\\0&e^{3t}&e^{3t}\\e^{-t}&1&0
    \end{array}\right|&=&e^t
    \left|\begin{array}{cc}e^{3t}&e^{3t}\\1&0\end{array}\right|-0
    \left|\begin{array}{cc}0&e^{3t}\\e^{-t}&0\end{array}\right|
    +e^{2t}\left|\begin{array}{cc}0&e^{3t}\\e^{-t}&1\end{array}\right| \\
    &=&e^t(-e^{3t})+e^{2t}(-e^{2t})=-2e^{4t}.
    \end{eqnarray*}

    Since this determinant is never zero, \(c_1=c_2=c_3=0\).
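
The cofactor expansion above can be cross-checked with a short symbolic computation; this is only a verification sketch using sympy:

```python
import sympy as sp

t = sp.symbols('t')

# Columns are y1, y2, y3 from Example 4.3.1.
Y = sp.Matrix([[sp.exp(t),  0,           sp.exp(2*t)],
               [0,          sp.exp(3*t), sp.exp(3*t)],
               [sp.exp(-t), 1,           0]])

print(sp.simplify(Y.det()))  # -2*exp(4*t): never zero, so c1 = c2 = c3 = 0
```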

    We can use the method in Example \((4.3.1)\) to test \(n\) solutions \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) of any \(n\times n\) system \({\bf y}'=A(t){\bf y}\) for linear independence on an interval \((a,b)\) on which \(A\) is continuous. To explain this (and for other purposes later), it's useful to write a linear combination of \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\) in a different way. We first write the vector functions in terms of their components as

    \begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{c} y_{11} \\ y_{21} \\ \vdots \\ y_{n1} \end{array} \right], \quad {\bf y}_2 = \left[ \begin{array}{c} y_{12} \\ y_{22} \\ \vdots \\ y_{n2} \end{array} \right], \; \dots, \quad {\bf y}_n = \left[ \begin{array}{c} y_{1n} \\ y_{2n} \\ \vdots \\ y_{nn} \end{array} \right].
    \end{eqnarray*}

    If

    \begin{eqnarray*}
    {\bf y} = c_1 {\bf y}_1 + c_2 {\bf y}_2 + \cdots + c_n {\bf y}_n
    \end{eqnarray*}

    then

    \begin{eqnarray*}
{\bf y} = c_1 \left[ \begin{array}{c} y_{11} \\ y_{21} \\ \vdots \\ y_{n1} \end{array} \right] + c_2 \left[ \begin{array}{c} y_{12} \\ y_{22} \\ \vdots \\ y_{n2} \end{array} \right] + \cdots + c_n \left[ \begin{array}{c} y_{1n} \\ y_{2n} \\ \vdots \\ y_{nn} \end{array} \right] = \left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nn} \end{array} \right] \left[ \begin{array}{c} c_1 \\ c_2 \\ \vdots \\ c_n \end{array} \right].
    \end{eqnarray*}

    This shows that

    \begin{equation} \label{eq:4.3.3}
    c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n=Y{\bf c},
    \end{equation}

    where

    \begin{eqnarray*}
{\bf c} = \left[ \begin{array}{c} c_1 \\ c_2 \\ \vdots \\ c_n \end{array} \right]
    \end{eqnarray*}

    and

    \begin{equation} \label{eq:4.3.4}
    Y=[{\bf y}_1\; {\bf y}_2\; \cdots\; {\bf y}_n]=
    \left[\begin{array}{cccc}
    y_{11}&y_{12}&\cdots&y_{1n} \\
    y_{21}&y_{22}&\cdots&y_{2n}\\
    \vdots&\vdots&\ddots&\vdots \\
    y_{n1}&y_{n2}&\cdots&y_{nn} \\
    \end{array}\right];
    \end{equation}

    that is, the columns of \(Y\) are the vector functions \({\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\).
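
For a concrete illustration of \eqref{eq:4.3.3}, the following numerical sketch (numpy, with the vector functions of Example \(\PageIndex{1}\) evaluated at one sample point and illustrative constants) confirms that \(Y{\bf c}\) equals the corresponding linear combination:

```python
import numpy as np

t = 0.5  # sample point in (a, b)

# The vector functions of Example 4.3.1 evaluated at t.
y1 = np.array([np.exp(t), 0.0, np.exp(-t)])
y2 = np.array([0.0, np.exp(3*t), 1.0])
y3 = np.array([np.exp(2*t), np.exp(3*t), 0.0])

Y = np.column_stack([y1, y2, y3])   # columns of Y are y1, y2, y3
c = np.array([2.0, -1.0, 3.0])      # illustrative constants c1, c2, c3

# Y c agrees with the linear combination c1*y1 + c2*y2 + c3*y3, as in (4.3.3).
print(np.allclose(Y @ c, 2.0*y1 - 1.0*y2 + 3.0*y3))  # True
```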

    For reference below, note that

    \begin{eqnarray*}
    Y'&=&[{\bf y}_1'\; {\bf y}_2'\; \cdots\; {\bf y}_n']\\
    &=&[A{\bf y}_1\; A{\bf y}_2\; \cdots\; A{\bf y}_n]\\
    &=&A[{\bf y}_1\; {\bf y}_2\; \cdots\; {\bf y}_n]=AY;
    \end{eqnarray*}

    that is, \(Y\) satisfies the matrix differential equation

    \begin{eqnarray*}
    Y' = AY.
    \end{eqnarray*}
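
This matrix differential equation is easy to verify symbolically. The sketch below uses an illustrative constant-coefficient system (not taken from the text) whose solutions assemble into a fundamental matrix of sines and cosines:

```python
import sympy as sp

t = sp.symbols('t')

# Illustrative constant matrix: y' = A y with A = [[0, 1], [-1, 0]]
# has solutions (cos t, -sin t) and (sin t, cos t).
A = sp.Matrix([[0, 1], [-1, 0]])
Y = sp.Matrix([[sp.cos(t),  sp.sin(t)],
               [-sp.sin(t), sp.cos(t)]])

print(sp.simplify(Y.diff(t) - A*Y))  # zero matrix: Y satisfies Y' = A Y
```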

    The determinant of \(Y\),

    \begin{equation} \label{eq:4.3.5}
    W=\left|\begin{array}{cccc}
    y_{11}&y_{12}&\cdots&y_{1n} \\
    y_{21}&y_{22}&\cdots&y_{2n}\\
    \vdots&\vdots&\ddots&\vdots \\
    y_{n1}&y_{n2}&\cdots&y_{nn} \\
    \end{array}\right|
    \end{equation}

    is called the Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\). It can be shown (Exercises \((4.3E.2)\) and \((4.3E.3)\)) that this definition is analogous to definitions of the Wronskian of scalar functions given in Sections 2.1 and 3.1. The next theorem is analogous to Theorems \((2.1.4)\) and \((3.1.3)\). The proof is sketched in Exercise \((4.3E.4)\) for \(n=2\) and in Exercise \((4.3E.5)\) for general \(n\).

Theorem \(\PageIndex{2}\): Abel's Formula

    Suppose the \(n\times n\) matrix \(A=A(t)\) is continuous on \((a,b)\), let \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\) be solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\), and let \(t_0\) be in \((a,b)\). Then the Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is given by

    \begin{equation} \label{eq:4.3.6}
    W(t)=W(t_0)\exp\left(
\int^t_{t_0}\big[a_{11}(s)+a_{22}(s)+\cdots+a_{nn}(s)\big]\,
    ds\right), \quad a < t < b.
    \end{equation}

    Therefore, either \(W\) has no zeros in \((a,b)\) or \(W\equiv0\) on \((a,b)\).


    The sum of the diagonal entries of a square matrix \(A\) is called the \( \textcolor{blue}{\mbox{trace}} \) of \(A\), denoted by tr\((A)\). Thus, for an \(n\times n\) matrix \(A\),

    \begin{eqnarray*}
    \mbox{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn},
    \end{eqnarray*}

    and \eqref{eq:4.3.6} can be written as

    \begin{eqnarray*}
W(t) = W(t_0) \exp \left( \int^t_{t_0} \mbox{tr}(A(s))\,ds \right), \quad a < t < b.
    \end{eqnarray*}
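
Continuing the illustrative system used above, where \(\mbox{tr}(A)=0\), Abel's formula predicts a constant Wronskian; a short symbolic check confirms this:

```python
import sympy as sp

t, t0 = sp.symbols('t t0')

# Same illustrative system as above: tr(A) = 0, so Abel's formula
# predicts W(t) = W(t0) * exp(0) = W(t0), a constant Wronskian.
A = sp.Matrix([[0, 1], [-1, 0]])
Y = sp.Matrix([[sp.cos(t),  sp.sin(t)],
               [-sp.sin(t), sp.cos(t)]])

W = sp.simplify(Y.det())   # direct computation: cos^2 t + sin^2 t = 1
abel = W.subs(t, t0) * sp.exp(sp.integrate(A.trace(), (t, t0, t)))
print(W, sp.simplify(abel))  # both equal 1, as Abel's formula predicts
```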

    The next theorem is analogous to Theorems \((2.1.6)\) and \((3.1.4)\).

    Theorem \(\PageIndex{3}\)

    Suppose the \(n\times n\) matrix \(A=A(t)\) is continuous on \((a,b)\) and let \({\bf y}_1\), \({\bf y}_2\), \(\dots\), \({\bf y}_n\) be solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\). Then the following statements are equivalent; that is, they are either all true or all false:

    (a) The general solution of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) is \({\bf y}=c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n\), where \(c_1\), \(c_2\), \(\dots\), \(c_n\) are arbitrary constants.

    (b) \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is a fundamental set of solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\).

    (c) \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is linearly independent on \((a,b)\).

    (d) The Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is nonzero at some point in \((a,b)\).

    (e) The Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is nonzero at all points in \((a,b)\).


We say that \(Y\) in \eqref{eq:4.3.4} is a \( \textcolor{blue}{\mbox{fundamental matrix}} \) for \({\bf y}'=A(t){\bf y}\) if any (and therefore all) of the statements \(({\bf a})-({\bf e})\) of Theorem \((4.3.3)\) are true for the columns of \(Y\). In this case, \eqref{eq:4.3.3} implies that the general solution of \({\bf y}'=A(t){\bf y}\) can be written as \({\bf y}=Y{\bf c}\), where \({\bf c}\) is an arbitrary constant \(n\)-vector.
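
In practice, the constant vector for an initial value problem is found by solving the linear system \(Y(t_0){\bf c}={\bf y}(t_0)\). A minimal numerical sketch follows; the numbers are illustrative and happen to anticipate Example \(\PageIndex{2}\) below (with \(t_0=0\)):

```python
import numpy as np

# For an initial value problem y(t0) = y0, the constants satisfy
# Y(t0) c = y0. Illustrative numbers (they anticipate Example 4.3.2):
Y0 = np.array([[-1.0, -1.0],
               [ 2.0,  1.0]])       # Y(0): columns y1(0), y2(0)
y0 = np.array([4.0, -5.0])          # prescribed initial vector

c = np.linalg.solve(Y0, y0)         # solvable because det Y(0) = W(0) != 0
print(c)                            # [-1. -3.]
```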

    Example \(\PageIndex{2}\)

    The vector functions

    \begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{c} -e^{2t} \\ 2e^{2t} \end{array} \right] \quad \mbox{and} \quad {\bf y}_2 = \left[ \begin{array}{c} -e^{-t} \\ \phantom{-}e^{-t} \end{array} \right]
    \end{eqnarray*}

    are solutions of the constant coefficient system

    \begin{equation} \label{eq:4.3.7}
{\bf y}' = \left[ \begin{array}{cc} -4 & -3 \\ 6 & 5 \end{array} \right] {\bf y}
    \end{equation}

    on \((-\infty,\infty)\). (Verify.)

(a) Compute the Wronskian of \(\{{\bf y}_1,{\bf y}_2\}\) directly from the definition \eqref{eq:4.3.5}.

    (b) Verify Abel's formula \eqref{eq:4.3.6} for the Wronskian of \(\{{\bf y}_1,{\bf y}_2\}\).

    (c) Find the general solution of \eqref{eq:4.3.7}.

    (d) Solve the initial value problem

    \begin{equation} \label{eq:4.3.8}
{\bf y}' = \left[ \begin{array}{cc} -4 & -3 \\ 6 & 5 \end{array} \right] {\bf y}, \quad {\bf y}(0) = \left[ \begin{array}{c} 4 \\ -5 \end{array} \right].
    \end{equation}

    Answer

    (a) From \eqref{eq:4.3.5}

    \begin{equation} \label{eq:4.3.9}
W(t)=\left|\begin{array}{cc}-e^{2t}&-e^{-t}\\2e^{2t}&\hfill e^{-t}\end{array}\right|= e^{2t}e^{-t} \left| \begin{array}{cc} -1 & -1 \\ 2 & 1 \end{array} \right| = e^t.
    \end{equation}

    (b) Here

    \begin{eqnarray*}
A = \left[ \begin{array}{cc} -4 & -3 \\ 6 & 5 \end{array} \right]
    \end{eqnarray*}

    so tr\((A)=-4+5=1\). If \(t_0\) is an arbitrary real number then \eqref{eq:4.3.6} implies that

    \begin{eqnarray*}
W(t) = W(t_0) \exp{\left( \int_{t_0}^t 1\,ds \right)} = \left| \begin{array}{cc} -e^{2t_0} & -e^{-t_0} \\ 2e^{2t_0} & \hfill e^{-t_0} \end{array} \right| e^{t-t_0} = e^{t_0}e^{t-t_0} = e^t,
    \end{eqnarray*}

    which is consistent with \eqref{eq:4.3.9}.

    (c) Since \(W(t)\ne0\), Theorem \((4.3.3)\) implies that \(\{{\bf y}_1,{\bf y}_2\}\) is a fundamental set of solutions of \eqref{eq:4.3.7} and

    \begin{eqnarray*}
Y = \left[ \begin{array}{cc} -e^{2t} & -e^{-t} \\ 2e^{2t} & \hfill e^{-t} \end{array} \right]
    \end{eqnarray*}

    is a fundamental matrix for \eqref{eq:4.3.7}. Therefore the general solution of \eqref{eq:4.3.7} is

    \begin{equation} \label{eq:4.3.10}
{\bf y} = c_1{\bf y}_1 + c_2{\bf y}_2 = c_1 \left[ \begin{array}{c} -e^{2t} \\ 2e^{2t} \end{array} \right] + c_2 \left[ \begin{array}{c} -e^{-t} \\ e^{-t} \end{array} \right] = \left[ \begin{array}{cc} -e^{2t} & -e^{-t} \\ 2e^{2t} & \hfill e^{-t} \end{array} \right] \left[ \begin{array}{c} c_1 \\ c_2 \end{array} \right].
    \end{equation}

    (d) Setting \(t=0\) in \eqref{eq:4.3.10} and imposing the initial condition in \eqref{eq:4.3.8} yields

    \begin{eqnarray*}
c_1 \left[ \begin{array}{c} -1 \\ 2 \end{array} \right] + c_2 \left[ \begin{array}{c} -1 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 4 \\ -5 \end{array} \right].
    \end{eqnarray*}

    Thus,

    \begin{eqnarray*}
    -c_1-c_2&=&\phantom{-}4 \\
    2c_1+c_2&=&-5.
    \end{eqnarray*}

    The solution of this system is \(c_1=-1\), \(c_2=-3\). Substituting these values into \eqref{eq:4.3.10} yields

    \begin{eqnarray*}
{\bf y} = -\left[ \begin{array}{c} -e^{2t} \\ 2e^{2t} \end{array} \right] -3 \left[ \begin{array}{c} -e^{-t} \\ e^{-t} \end{array} \right] = \left[ \begin{array}{c} e^{2t} + 3e^{-t} \\ -2e^{2t} - 3e^{-t} \end{array} \right]
    \end{eqnarray*}

    as the solution of \eqref{eq:4.3.8}.
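
As a numerical sanity check of part (d), one can integrate \eqref{eq:4.3.8} with a standard ODE solver and compare against the closed form just found; a sketch using scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate y' = A y, y(0) = (4, -5), and compare with the closed form.
A = np.array([[-4.0, -3.0],
              [ 6.0,  5.0]])

sol = solve_ivp(lambda t, y: A @ y, (0.0, 1.0), [4.0, -5.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

ts = np.linspace(0.0, 1.0, 5)
exact = np.vstack([np.exp(2*ts) + 3*np.exp(-ts),
                   -2*np.exp(2*ts) - 3*np.exp(-ts)])
print(np.max(np.abs(sol.sol(ts) - exact)))  # close to machine precision
```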


This page titled 4.3: Basic Theory of Homogeneous Linear Systems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Pamini Thangarajah.
