10.3: Basic Theory of Homogeneous Linear Systems


    In this section we consider homogeneous linear systems \({\bf y}'= A(t){\bf y}\), where \(A=A(t)\) is a continuous \(n\times n\) matrix function on an interval \((a,b)\). The theory of linear homogeneous systems has much in common with the theory of linear homogeneous scalar equations, which we considered in Sections 2.1, 5.1, and 9.1.

    Whenever we refer to solutions of \({\bf y}'=A(t){\bf y}\) we’ll mean solutions on \((a,b)\). Since \({\bf y}\equiv{\bf 0}\) is obviously a solution of \({\bf y}'=A(t){\bf y}\), we call it the trivial solution. Any other solution is nontrivial.

    If \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\) are vector functions defined on an interval \((a,b)\) and \(c_1\), \(c_2\), …, \(c_n\) are constants, then

    \[\label{eq:10.3.1} {\bf y}=c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n\]

    is a linear combination of \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\). It’s easy to show that if \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\) are solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\), then so is any linear combination of \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\) (Exercise 10.3.1). We say that \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is a fundamental set of solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) if every solution of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) can be written as a linear combination of \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\), as in Equation \ref{eq:10.3.1}. In this case we say that Equation \ref{eq:10.3.1} is the general solution of \({\bf y}'=A(t){\bf y}\) on \((a,b)\).

    It can be shown that if \(A\) is continuous on \((a,b)\) then \({\bf y}'=A(t){\bf y}\) has infinitely many fundamental sets of solutions on \((a,b)\) (Exercises 10.3.15 and 10.3.16). The next definition will help to characterize fundamental sets of solutions of \({\bf y}'=A(t){\bf y}\).

    We say that a set \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) of \(n\)-vector functions is linearly independent on \((a,b)\) if the only constants \(c_1\), \(c_2\), …, \(c_n\) such that

    \[\label{eq:10.3.2} c_1{\bf y}_1(t)+c_2{\bf y}_2(t)+\cdots+c_n{\bf y}_n(t)={\bf 0},\quad a<t<b,\]

    are \(c_1=c_2=\cdots=c_n=0\). If Equation \ref{eq:10.3.2} holds for some set of constants \(c_1\), \(c_2\), …, \(c_n\) that are not all zero, then \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is linearly dependent on \((a,b)\).
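A minimal numerical sketch of this definition, using hypothetical vector functions chosen only for illustration (they are not drawn from the text): the pair \(y_1(t)=(t,1)\), \(y_2(t)=(2t,2)\) is linearly dependent on every interval, because the nonzero constants \(c_1=2\), \(c_2=-1\) annihilate the combination for all \(t\).

```python
# Hypothetical illustration (not from the text): y1(t) = (t, 1) and
# y2(t) = (2t, 2) are linearly dependent, since 2*y1(t) - y2(t) = 0 for all t.

def y1(t):
    return (t, 1.0)

def y2(t):
    return (2.0 * t, 2.0)

def combo(c1, c2, t):
    """Componentwise linear combination c1*y1(t) + c2*y2(t)."""
    a, b = y1(t)
    u, v = y2(t)
    return (c1 * a + c2 * u, c1 * b + c2 * v)

# The combination vanishes identically with nonzero constants, so the
# "only trivial constants" test for linear independence fails.
dependent = all(combo(2.0, -1.0, t) == (0.0, 0.0) for t in (-3.0, 0.0, 1.5, 10.0))
```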

    The next theorem is analogous to Theorems 5.1.3 and 9.1.2.

    Theorem 10.3.1

    Suppose the \(n\times n\) matrix \(A=A(t)\) is continuous on \((a,b)\). Then a set \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) of \(n\) solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) is a fundamental set if and only if it is linearly independent on \((a,b)\).

    Example 10.3.1

    Show that the vector functions

    \[{\bf y}_1=\left[\begin{array}{c}e^t\\0\\e^{-t}\end{array}\right],\quad {\bf y}_2=\left[\begin{array}{c}0\\e^{3t}\\1\end{array}\right], \quad \text{and} \quad {\bf y}_3=\left[\begin{array}{c}e^{2t}\\e^{3t}\\0\end{array}\right]\nonumber \]

    are linearly independent on every interval \((a,b)\).

    Solution

    Suppose

    \[c_{1}\left[\begin{array} {c}{e^{t}}\\{0}\\{e^{-t}} \end{array} \right] +c_{2}\left[\begin{array}{c}{0}\\{e^{3t}}\\{1}\end{array} \right] +c_{3}\left[\begin{array}{c}{e^{2t}}\\{e^{3t}}\\{0}\end{array} \right] = \left[\begin{array}{c}{0}\\{0}\\{0}\end{array} \right],\quad a<t<b.\nonumber \]

    We must show that \(c_1=c_2=c_3=0\). Rewriting this equation in matrix form yields

    \[\left[\begin{array}{ccc}{e^{t}}&{0}&{e^{2t}}\\{0}&{e^{3t}}&{e^{3t}}\\{e^{-t}}&{1}&{0} \end{array} \right] \: \left[\begin{array}{c}{c_{1}}\\{c_{2}}\\{c_{3}}\end{array} \right] = \left[\begin{array}{c}{0}\\{0}\\{0}\end{array} \right], \quad a<t<b.\nonumber \]

    Expanding the determinant of this system in cofactors of the entries of the first row yields

    \[\begin{align*} \left|\begin{array}{ccc}e^t&0&e^{2t}\\0&e^{3t}&e^{3t}\\e^{-t}&1&0 \end{array}\right|&=e^t \left|\begin{array}{cc}e^{3t}&e^{3t}\\1&0\end{array}\right|-0 \left|\begin{array}{cc}0&e^{3t}\\e^{-t}&0\end{array}\right| +e^{2t}\left|\begin{array}{cc}0&e^{3t}\\e^{-t}&1\end{array}\right| \\ &=e^t(-e^{3t})+e^{2t}(-e^{2t})=-2e^{4t}.\end{align*}\]

    Since this determinant is never zero, the coefficient matrix is invertible for every \(t\), so \(c_1=c_2=c_3=0\).
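The cofactor expansion above can be spot-checked numerically (a sketch using only the standard library): evaluate the \(3\times3\) determinant at several values of \(t\) and compare with the closed form \(-2e^{4t}\).

```python
import math

def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def wronskian(t):
    """Determinant of [y1 y2 y3] for the vectors in Example 10.3.1."""
    m = [[math.exp(t),  0.0,             math.exp(2 * t)],
         [0.0,          math.exp(3 * t), math.exp(3 * t)],
         [math.exp(-t), 1.0,             0.0]]
    return det3(m)

# Agrees with -2e^{4t}, which is never zero, at each sampled t.
checks = [math.isclose(wronskian(t), -2.0 * math.exp(4 * t), rel_tol=1e-9)
          for t in (-1.0, 0.0, 0.5, 2.0)]
```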

    We can use the method in Example 10.3.1 to test \(n\) solutions \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) of any \(n\times n\) system \({\bf y}'=A(t){\bf y}\) for linear independence on an interval \((a,b)\) on which \(A\) is continuous. To explain this (and for other purposes later), it is useful to write a linear combination of \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\) in a different way. We first write the vector functions in terms of their components as

    \[{\bf y}_1=\left[\begin{array}{c} y_{11}\\y_{21}\\ \vdots\\ y_{n1}\end{array}\right],\quad {\bf y}_2=\left[\begin{array}{c} y_{12}\\y_{22}\\ \vdots\\ y_{n2}\end{array}\right],\dots,\quad {\bf y}_n=\left[\begin{array}{c} y_{1n}\\y_{2n}\\ \vdots\\ y_{nn}\end{array}\right].\nonumber \]

    If

    \[{\bf y}=c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n\nonumber \]

    then

    \[\begin{align*} {\bf y}&= c_1\left[\begin{array}{c} y_{11}\\y_{21}\\ \vdots\\ y_{n1}\end{array}\right]+ c_2\left[\begin{array}{c} y_{12}\\y_{22}\\ \vdots\\ y_{n2}\end{array}\right]+\cdots +c_n\left[\begin{array}{c} y_{1n}\\y_{2n}\\ \vdots\\ y_{nn}\end{array}\right]\\[4pt] &=\left[\begin{array}{cccc} y_{11}&y_{12}&\cdots&y_{1n} \\ y_{21}&y_{22}&\cdots&y_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ y_{n1}&y_{n2}&\cdots&y_{nn} \\ \end{array}\right]\left[\begin{array}{c} c_1\\c_2\\ \vdots\\ c_n\end{array}\right].\end{align*}\]

    This shows that

    \[\label{eq:10.3.3} c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n=Y{\bf c},\]

    where

    \[{\bf c}=\left[\begin{array}{c} c_1\\c_2\\ \vdots\\ c_n\end{array}\right]\nonumber \]

    and

    \[\label{eq:10.3.4} Y=[{\bf y}_1\; {\bf y}_2\; \cdots\; {\bf y}_n]= \left[\begin{array}{cccc} y_{11}&y_{12}&\cdots&y_{1n} \\ y_{21}&y_{22}&\cdots&y_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ y_{n1}&y_{n2}&\cdots&y_{nn} \\ \end{array}\right];\]

    that is, the columns of \(Y\) are the vector functions \({\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\).

    For reference below, note that

    \[\begin{aligned} Y'&=[{\bf y}_1'\; {\bf y}_2'\; \cdots\; {\bf y}_n']\\ &=[A{\bf y}_1\; A{\bf y}_2\; \cdots\; A{\bf y}_n]\\ &=A[{\bf y}_1\; {\bf y}_2\; \cdots\; {\bf y}_n]=AY;\end{aligned}\]

    that is, \(Y\) satisfies the matrix differential equation

    \[Y'=AY.\nonumber \]
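The matrix equation \(Y'=AY\) can be verified by hand for a simple hypothetical system (chosen for illustration, not from the text): \(A=\left[\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right]\) has solutions \({\bf y}_1=(1,0)\) and \({\bf y}_2=(t,1)\), so \(Y(t)=\left[\begin{smallmatrix}1&t\\0&1\end{smallmatrix}\right]\) and \(Y'(t)=\left[\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right]=AY(t)\). A quick sketch of that check:

```python
# Hypothetical system y' = [[0,1],[0,0]] y with solutions y1 = (1, 0),
# y2 = (t, 1); its fundamental matrix satisfies Y' = A Y exactly.

A = [[0.0, 1.0], [0.0, 0.0]]

def Y(t):
    """Fundamental matrix whose columns are y1 and y2."""
    return [[1.0, t], [0.0, 1.0]]

def Y_prime(t):
    """Exact derivative of Y(t): only the entry t varies with t."""
    return [[0.0, 1.0], [0.0, 0.0]]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

# Y'(t) equals A Y(t) at an arbitrary t, as the matrix equation asserts.
t = 3.7
agrees = Y_prime(t) == matmul2(A, Y(t))
```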

    The determinant of \(Y\),

    \[\label{eq:10.3.5} W=\left|\begin{array}{cccc} y_{11}&y_{12}&\cdots&y_{1n} \\ y_{21}&y_{22}&\cdots&y_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ y_{n1}&y_{n2}&\cdots&y_{nn} \\ \end{array}\right|\]

    is called the Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\). It can be shown (Exercises 10.3.2 and 10.3.3) that this definition is analogous to definitions of the Wronskian of scalar functions given in Sections 5.1 and 9.1. The next theorem is analogous to Theorems 5.1.4 and 9.1.3. The proof is sketched in Exercise 10.3.4 for \(n=2\) and in Exercise 10.3.5 for general \(n\).

    Theorem 10.3.2 : Abel’s Formula

    Suppose the \(n\times n\) matrix \(A=A(t)\) is continuous on \((a,b),\) let \({\bf y}_1\), \({\bf y}_2\), …, \({\bf y}_n\) be solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b),\) and let \(t_0\) be in \((a,b)\). Then the Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is given by

    \[\label{eq:10.3.6} W(t)=W(t_0)\exp\left( \int^t_{t_0}\big[a_{11}(s)+a_{22}(s)+\cdots+a_{nn}(s)\big]\, ds\right), \quad a < t < b.\]

    Therefore, either \(W\) has no zeros in \((a,b)\) or \(W\equiv0\) on \((a,b).\)

    Note

    The sum of the diagonal entries of a square matrix \(A\) is called the trace of \(A\), denoted by \(\text{tr}(A)\). Thus, for an \(n\times n\) matrix \(A\),

    \[\text{tr}(A)=a_{11}+a_{22}+\cdots+a_{nn},\nonumber \]

    and Equation \ref{eq:10.3.6} can be written as

    \[W(t)=W(t_{0})\exp\left(\int_{t_{0}}^{t}\text{tr}(A(s))\,ds \right),\quad a<t<b.\nonumber \]

    The next theorem is analogous to Theorems 5.1.6 and 9.1.4.

    Theorem 10.3.3

    Suppose the \(n\times n\) matrix \(A=A(t)\) is continuous on \((a,b)\) and let \({\bf y}_1\), \({\bf y}_2\), …,\({\bf y}_n\) be solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\). Then the following statements are equivalent; that is, they are either all true or all false:

    1. The general solution of \({\bf y}'=A(t){\bf y}\) on \((a,b)\) is \({\bf y}=c_1{\bf y}_1+c_2{\bf y}_2+\cdots+c_n{\bf y}_n\), where \(c_1\), \(c_2\), …, \(c_n\) are arbitrary constants.
    2. \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is a fundamental set of solutions of \({\bf y}'=A(t){\bf y}\) on \((a,b)\).
    3. \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is linearly independent on \((a,b)\).
    4. The Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is nonzero at some point in \((a,b)\).
    5. The Wronskian of \(\{{\bf y}_1,{\bf y}_2,\dots,{\bf y}_n\}\) is nonzero at all points in \((a,b)\).

    We say that \(Y\) in Equation \ref{eq:10.3.4} is a fundamental matrix for \({\bf y}'=A(t){\bf y}\) if any (and therefore all) of statements 1-5 of Theorem 10.3.3 are true for the columns of \(Y\). In this case, Equation \ref{eq:10.3.3} implies that the general solution of \({\bf y}'=A(t){\bf y}\) can be written as \({\bf y}=Y{\bf c}\), where \({\bf c}\) is an arbitrary constant \(n\)-vector.

    Example 10.3.2

    The vector functions

    \[{\bf y}_1=\left[\begin{array}{c}-e^{2t}\\2e^{2t}\end{array}\right]\quad \text{and} \quad {\bf y}_2=\left[\begin{array}{c}-e^{-t}\\\phantom{-}e^{-t}\end{array}\right]\nonumber \]

    are solutions of the constant coefficient system

    \[\label{eq:10.3.7} {\bf y}' = \left[\begin{array}{cc}{-4}&{-3}\\{6}&{5}\end{array} \right] {\bf y}\]

    on \((-\infty,\infty)\). (Verify.)

    1. Compute the Wronskian of \(\{{\bf y}_1,{\bf y}_2\}\) directly from the definition Equation \ref{eq:10.3.5}.
    2. Verify Abel’s formula Equation \ref{eq:10.3.6} for the Wronskian of \(\{{\bf y}_1,{\bf y}_2\}\).
    3. Find the general solution of Equation \ref{eq:10.3.7}.
    4. Solve the initial value problem \[\label{eq:10.3.8} {\bf y}'=\left[\begin{array}{cc}{-4}&{-3}\\{6}&{5}\end{array} \right] {\bf y}, \quad {\bf y}(0)= \left[\begin{array}{r} 4 \\-5\end{array}\right].\]

    Solution a

    From Equation \ref{eq:10.3.5}

    \[\label{eq:10.3.9} W(t)=\left|\begin{array}{cc}-e^{2t}&-e^{-t}\\2e^{2t}&\hfill e^{-t}\end{array}\right|= e^{2t}e^{-t} \left|\begin{array}{cc}-1&-1\\\phantom{-}2&\phantom{-}1\end{array}\right|=e^t.\]

    Solution b

    Here

    \[A=\left[\begin{array}{cc}{-4}&{-3}\\{6}&{5}\end{array} \right],\nonumber \]

    so \(\text{tr}(A)=-4+5=1\). If \(t_0\) is an arbitrary real number then Equation \ref{eq:10.3.6} implies that

    \[W(t)=W(t_0)\exp{\left(\int_{t_0}^t1\,ds\right)}= \left|\begin{array}{cc} -e^{2t_0}&-e^{-t_0}\\2e^{2t_0}&e^{-t_0}\end{array}\right|e^{(t-t_0)} =e^{t_0}e^{t-t_0}=e^t,\nonumber \]

    which is consistent with Equation \ref{eq:10.3.9}.
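Both computations can be confirmed numerically (a sketch using only the standard library): the Wronskian of \({\bf y}_1\) and \({\bf y}_2\) computed directly from the definition equals \(e^t\), and Abel's formula with \(\text{tr}(A)=1\) and \(t_0=0\) reproduces the same value.

```python
import math

def wronskian(t):
    """2x2 Wronskian of y1 = (-e^{2t}, 2e^{2t}) and y2 = (-e^{-t}, e^{-t})."""
    y11, y21 = -math.exp(2 * t), 2 * math.exp(2 * t)
    y12, y22 = -math.exp(-t), math.exp(-t)
    return y11 * y22 - y12 * y21

TR_A = -4 + 5  # trace of the coefficient matrix [[-4, -3], [6, 5]]

def abel(t, t0=0.0):
    """Abel's formula for constant A: W(t) = W(t0) exp(tr(A) * (t - t0))."""
    return wronskian(t0) * math.exp(TR_A * (t - t0))

# Direct computation and Abel's formula both give e^t at every sampled t.
checks = [math.isclose(wronskian(t), math.exp(t)) and math.isclose(abel(t), math.exp(t))
          for t in (-2.0, 0.0, 1.0, 3.5)]
```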

    Solution c

    Since \(W(t)\ne0\), Theorem 10.3.3 implies that \(\{{\bf y}_1,{\bf y}_2\}\) is a fundamental set of solutions of Equation \ref{eq:10.3.7} and

    \[Y=\left[\begin{array}{cc}-e^{2t}&-e^{-t}\\2e^{2t}&\hfill e^{-t}\end{array}\right]\nonumber \]

    is a fundamental matrix for Equation \ref{eq:10.3.7}. Therefore the general solution of Equation \ref{eq:10.3.7} is

    \[\label{eq:10.3.10} {\bf y}=c_1{\bf y}_1+c_2{\bf y}_2= c_1\left[\begin{array}{c}-e^{2t}\\2e^{2t}\end{array}\right]+c_2\left[\begin{array}{c}-e^{-t}\\e^{-t}\end{array}\right] =\left[\begin{array}{cc}-e^{2t}&-e^{-t}\\2e^{2t}&\hfill e^{-t}\end{array}\right] \left[\begin{array}{c}c_1\\c_2\end{array}\right].\]

    Solution d

    Setting \(t=0\) in Equation \ref{eq:10.3.10} and imposing the initial condition in Equation \ref{eq:10.3.8} yields

    \[c_1\left[\begin{array}{r}-1 \\2\end{array}\right]+c_2 \left[\begin{array}{r}-1 \\1\end{array}\right]= \left[\begin{array}{r} 4 \\-5\end{array}\right].\nonumber \]

    Thus,

    \[\begin{aligned} -c_1-c_2&=\phantom{-}4 \\ 2c_1+c_2&=-5.\end{aligned}\nonumber \]

    The solution of this system is \(c_1=-1\), \(c_2=-3\). Substituting these values into Equation \ref{eq:10.3.10} yields

    \[{\bf y}=-\left[\begin{array}{c}-e^{2t} \\ 2e^{2t}\end{array} \right]-3 \left[\begin{array}{c}-e^{-t} \\ e^{-t}\end{array}\right]= \left[ \begin{array}{c} e^{2t}+3e^{-t} \\-2e^{2t}-3e^{-t} \end{array}\right]\nonumber \]

    as the solution of Equation \ref{eq:10.3.8}.
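As a final check (a sketch, not part of the text), the solution just obtained can be verified to satisfy both the initial condition and the differential equation, using its exact derivative computed term by term.

```python
import math

def y(t):
    """Candidate solution of the initial value problem."""
    return (math.exp(2 * t) + 3 * math.exp(-t),
            -2 * math.exp(2 * t) - 3 * math.exp(-t))

def y_prime(t):
    """Exact derivative of y, differentiated term by term."""
    return (2 * math.exp(2 * t) - 3 * math.exp(-t),
            -4 * math.exp(2 * t) + 3 * math.exp(-t))

def Ay(t):
    """Right-hand side A y with A = [[-4, -3], [6, 5]]."""
    u, v = y(t)
    return (-4 * u - 3 * v, 6 * u + 5 * v)

# y(0) = (4, -5) and y' = A y at each sampled t.
initial_ok = y(0.0) == (4.0, -5.0)
ode_ok = all(math.isclose(y_prime(t)[i], Ay(t)[i])
             for t in (-1.0, 0.0, 0.8, 2.0) for i in (0, 1))
```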


    This page titled 10.3: Basic Theory of Homogeneous Linear Systems is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench.