$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}{\| #1 \|}$$ $$\newcommand{\inner}{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$

# 10.5: Constant Coefficient Homogeneous Systems II

• Contributed by William F. Trench
• Andrew G. Cowles Distinguished Professor Emeritus (Mathematics) at Trinity University

$$\newcommand{\threecol}[3]{\left[\begin{array}{r}#1\\#2\\#3\end{array}\right]} \newcommand{\twocol}[2]{\left[\begin{array}{l}#1\\#2\end{array}\right]} \newcommand{\twochar}[4]{\left|\begin{array}{cc} #1-\lambda&#2\\#3&#4-\lambda\end{array}\right|}$$ We saw in Section 10.4 that if an $$n\times n$$ constant matrix $$A$$ has $$n$$ real eigenvalues $$\lambda_1$$, 
$$\lambda_2$$, …, $$\lambda_n$$ (which need not be distinct) with associated linearly independent eigenvectors $${\bf x}_1$$, $${\bf x}_2$$, …, $${\bf x}_n$$, then the general solution of $${\bf y}'=A{\bf y}$$ is ${\bf y}=c_1{\bf x}_1e^{\lambda_1t}+c_2{\bf x}_2e^{\lambda_2 t} +\cdots+c_n{\bf x}_ne^{\lambda_n t}. \nonumber$

In this section we consider the case where $$A$$ has $$n$$ real eigenvalues, but does not have $$n$$ linearly independent eigenvectors. It is shown in linear algebra that this occurs if and only if $$A$$ has at least one eigenvalue of multiplicity $$r>1$$ such that the associated eigenspace has dimension less than $$r$$. In this case $$A$$ is said to be defective. Since it is beyond the scope of this book to give a complete analysis of systems with defective coefficient matrices, we will restrict our attention to some commonly occurring special cases.
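Defectiveness is easy to test numerically. The following sketch (assuming numpy is available; the matrix is the one from Example $$\PageIndex{1}$$ below) confirms a repeated eigenvalue whose eigenspace is only one-dimensional.

```python
import numpy as np

# Matrix of Example 10.5.1 below.
A = np.array([[11.0, -25.0],
              [4.0, -9.0]])

# Characteristic polynomial: lambda^2 - (tr A) lambda + det A.
tr = np.trace(A)            # 2
det = np.linalg.det(A)      # 1
disc = tr**2 - 4.0 * det    # zero discriminant => one repeated eigenvalue

lam = tr / 2.0              # the repeated eigenvalue, lambda = 1

# Geometric multiplicity = 2 - rank(A - lambda I); here it is 1 < 2,
# so A does not have two linearly independent eigenvectors: A is defective.
geo_mult = 2 - np.linalg.matrix_rank(A - lam * np.eye(2))
```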

Example $$\PageIndex{1}$$

Show that the system

$\label{eq:10.5.1} {\bf y}'=\left[\begin{array}{cc}{11}&{-25}\\{4}&{-9}\end{array} \right] {\bf y}$

does not have a fundamental set of solutions of the form $$\{{\bf x}_1e^{\lambda_1t},{\bf x}_2e^{\lambda_2t}\}$$, where $$\lambda_1$$ and $$\lambda_2$$ are eigenvalues of the coefficient matrix $$A$$ of Equation \ref{eq:10.5.1} and $${\bf x}_1$$ and $${\bf x}_2$$ are associated linearly independent eigenvectors.

Solution

The characteristic polynomial of $$A$$ is

\begin{align*} \left|\begin{array}{cc} 11-\lambda & -25 \\ 4 & -9-\lambda \end{array}\right| &=(\lambda-11)(\lambda+9)+100\\ &=\lambda^2-2\lambda+1 \\ &=(\lambda-1)^2.\end{align*}\nonumber

Hence, $$\lambda=1$$ is the only eigenvalue of $$A$$. The augmented matrix of the system $$(A-I){\bf x}={\bf 0}$$ is

$\left[\begin{array}{rrcr}10&-25&\vdots&0\\4& -10&\vdots&0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{cccc}{1}&{-\frac{5}{2}}&{\vdots }&{0}\\{0}&{0}&{\vdots }&{0} \end{array} \right]. \nonumber$

Hence, $$x_1=5x_2/2$$ where $$x_2$$ is arbitrary. Therefore all eigenvectors of $$A$$ are scalar multiples of $$\bf {x}_1=\left[\begin{array}{c}{5}\\{2}\end{array} \right]$$, so $$A$$ does not have a set of two linearly independent eigenvectors.

From Example $$\PageIndex{1}$$, we know that all scalar multiples of $$\bf {y}_1= \twocol52 e^t$$ are solutions of Equation \ref{eq:10.5.1}; however, to find the general solution we must find a second solution $$\bf {y}_2$$ such that $$\{ \bf {y}_1, \bf {y}_2\}$$ is linearly independent. Based on your recollection of the procedure for solving a constant coefficient scalar equation

$ay''+by'+cy=0\nonumber$

in the case where the characteristic polynomial has a repeated root, you might expect to obtain a second solution of Equation \ref{eq:10.5.1} by multiplying the first solution by $$t$$. However, this yields $${\bf y}_2=\twocol52 te^t$$, which does not work, since

${\bf y}_2'=\twocol52(te^t+e^t),\quad \text{while} \quad \left[\begin{array}{cc}{11}&{-25}\\{4}&{-9} \end{array} \right] {\bf y}_2=\twocol52te^t.\nonumber$
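This failure is easy to confirm numerically: a short sketch (assuming numpy is available) shows that the residual $${\bf y}_2'-A{\bf y}_2$$ for the candidate $${\bf y}_2=\twocol52 te^t$$ is exactly $$\twocol52 e^t$$, which is not zero.

```python
import numpy as np

A = np.array([[11.0, -25.0],
              [4.0, -9.0]])
x = np.array([5.0, 2.0])            # eigenvector for lambda = 1

t = 1.0
# Candidate y2(t) = x t e^t and its derivative y2'(t) = x (t + 1) e^t.
y2 = x * t * np.exp(t)
y2_prime = x * (t + 1.0) * np.exp(t)

# Since A x = x, we get A y2 = x t e^t, so the residual is x e^t != 0.
residual = y2_prime - A @ y2
```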

The next theorem shows what to do in this situation.

Theorem $$\PageIndex{1}$$

Suppose the $$n\times n$$ matrix $$A$$ has an eigenvalue $$\lambda_1$$ of multiplicity $$\ge2$$ and the associated eigenspace has dimension $$1;$$ that is$$,$$ all $$\lambda_1$$-eigenvectors of $$A$$ are scalar multiples of an eigenvector $${\bf x}.$$ Then there are infinitely many vectors $${\bf u}$$ such that

$\label{eq:10.5.2} (A-\lambda_1I){\bf u}={\bf x}.$

Moreover$$,$$ if $${\bf u}$$ is any such vector then

$\label{eq:10.5.3} {\bf y}_1={\bf x}e^{\lambda_1t}\quad\mbox{and }\quad {\bf y}_2={\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}$

are linearly independent solutions of $${\bf y}'=A{\bf y}.$$

A complete proof of this theorem is beyond the scope of this book. The difficulty is in proving that there’s a vector $${\bf u}$$ satisfying Equation \ref{eq:10.5.2}, since $$\det(A-\lambda_1I)=0$$. We’ll take this without proof and verify the other assertions of the theorem. We already know that $${\bf y}_1$$ in Equation \ref{eq:10.5.3} is a solution of $${\bf y}'=A{\bf y}$$. To see that $${\bf y}_2$$ is also a solution, we compute

\begin{align*} {\bf y}_2'-A{\bf y}_2&=\lambda_1{\bf u}e^{\lambda_1t}+{\bf x} e^{\lambda_1t} +\lambda_1{\bf x} te^{\lambda_1t}-A{\bf u}e^{\lambda_1t}-A{\bf x} te^{\lambda_1t}\\ &=(\lambda_1{\bf u}+{\bf x} -A{\bf u})e^{\lambda_1t}+(\lambda_1{\bf x} -A{\bf x} )te^{\lambda_1t}.\end{align*}\nonumber

Since $$A{\bf x}=\lambda_1{\bf x}$$, this can be written as

${\bf y}_2'-A{\bf y}_2=- \left((A-\lambda_1I){\bf u}-{\bf x}\right)e^{\lambda_1t},\nonumber$

and now Equation \ref{eq:10.5.2} implies that $${\bf y}_2'=A{\bf y}_2$$. To see that $${\bf y}_1$$ and $${\bf y}_2$$ are linearly independent, suppose $$c_1$$ and $$c_2$$ are constants such that

$\label{eq:10.5.4} c_1{\bf y}_1+c_2{\bf y}_2=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t} +{\bf x}te^{\lambda_1t})={\bf 0}.$

We must show that $$c_1=c_2=0$$. Multiplying Equation \ref{eq:10.5.4} by $$e^{-\lambda_1t}$$ shows that

$\label{eq:10.5.5} c_1{\bf x}+c_2({\bf u} +{\bf x}t)={\bf 0}.$

By differentiating this with respect to $$t$$, we see that $$c_2{\bf x}={\bf 0}$$, which implies $$c_2=0$$, because $${\bf x}\ne{\bf 0}$$. Substituting $$c_2=0$$ into Equation \ref{eq:10.5.5} yields $$c_1{\bf x}={\bf 0}$$, which implies that $$c_1=0$$, again because $${\bf x}\ne{\bf 0}$$.
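The verification above can also be carried out numerically for the matrix of Example $$\PageIndex{1}$$. In this sketch (assuming numpy), a vector $${\bf u}$$ satisfying $$(A-\lambda_1 I){\bf u}={\bf x}$$ is obtained by least squares — the system is singular but consistent, so `lstsq` returns one of the infinitely many solutions — and $${\bf y}_2$$ is checked against the differential equation at a sample $$t$$.

```python
import numpy as np

A = np.array([[11.0, -25.0],
              [4.0, -9.0]])
lam = 1.0
x = np.array([5.0, 2.0])

# (A - I)u = x is singular but consistent; lstsq picks one solution u.
u, *_ = np.linalg.lstsq(A - lam * np.eye(2), x, rcond=None)

# Check y2 = u e^{lam t} + x t e^{lam t} against y2' = A y2 at a sample t.
t = 0.7
y2 = (u + x * t) * np.exp(lam * t)
y2_prime = (lam * u + x + lam * x * t) * np.exp(lam * t)
residual = y2_prime - A @ y2        # should vanish
```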

Example $$\PageIndex{2}$$

Use Theorem $$\PageIndex{1}$$ to find the general solution of the system

$\label{eq:10.5.6} {\bf y}'=\left[\begin{array}{cc}{11}&{-25}\\{4}&{-9}\end{array} \right]{\bf y}$

considered in Example $$\PageIndex{1}$$.

Solution

In Example $$\PageIndex{1}$$ we saw that $$\lambda_1=1$$ is an eigenvalue of multiplicity $$2$$ of the coefficient matrix $$A$$ in Equation \ref{eq:10.5.6}, and that all of the eigenvectors of $$A$$ are multiples of

${\bf x}=\twocol52.\nonumber$

Therefore

${\bf y}_1=\twocol52e^t\nonumber$

is a solution of Equation \ref{eq:10.5.6}. From Theorem $$\PageIndex{1}$$, a second solution is given by $${\bf y}_2={\bf u}e^t+{\bf x}te^t$$, where $$(A-I){\bf u}={\bf x}$$. The augmented matrix of this system is

$\left[\begin{array}{rrcr}10&-25&\vdots&5\\4&-10&\vdots&2\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrcr}1&-\frac{5}{2}&\vdots&\frac{1}{2}\\0&0&\vdots&0\end{array}\right].\nonumber$

Therefore the components of $${\bf u}$$ must satisfy

$u_1-{5\over2}u_2={1\over2},\nonumber$

where $$u_2$$ is arbitrary. We choose $$u_2=0$$, so that $$u_1=1/2$$ and

${\bf u}=\twocol{1\over2}0.\nonumber$

Thus,

${\bf y}_2=\twocol10{e^t\over2}+\twocol52te^t.\nonumber$

Since $${\bf y}_1$$ and $${\bf y}_2$$ are linearly independent by Theorem $$\PageIndex{1}$$, they form a fundamental set of solutions of Equation \ref{eq:10.5.6}. Therefore the general solution of Equation \ref{eq:10.5.6} is

${\bf y}=c_1\twocol52e^t+c_2\left(\twocol10{e^t\over2}+\twocol52te^t\right).\nonumber$

Note that choosing the arbitrary constant $$u_2$$ to be nonzero is equivalent to adding a scalar multiple of $${\bf y}_1$$ to the second solution $${\bf y}_2$$ (Exercise 10.5.33).
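As a check on Example $$\PageIndex{2}$$, the following sketch (assuming numpy) verifies that the chosen $${\bf u}$$ satisfies $$(A-I){\bf u}={\bf x}$$ and that the general solution satisfies $${\bf y}'=A{\bf y}$$ at a sample $$t$$, for arbitrary sample constants $$c_1,c_2$$.

```python
import numpy as np

A = np.array([[11.0, -25.0],
              [4.0, -9.0]])
x = np.array([5.0, 2.0])
u = np.array([0.5, 0.0])            # the particular u chosen in Example 2

# General solution y = c1 x e^t + c2 (u e^t + x t e^t) and its derivative.
c1, c2, t = 2.0, -3.0, 0.4
E = np.exp(t)
y = c1 * x * E + c2 * (u + x * t) * E
y_prime = c1 * x * E + c2 * (u + x + x * t) * E

residual = y_prime - A @ y          # should vanish
```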

Example $$\PageIndex{3}$$

Find the general solution of

$\label{eq:10.5.7} {\bf y}'=\left[\begin{array}{ccc}{3}&{4}&{-10}\\{2}&{1}&{-2}\\{2}&{2}&{-5}\end{array} \right] {\bf y}.$

Solution

The characteristic polynomial of the coefficient matrix $$A$$ in Equation \ref{eq:10.5.7} is

$\left|\begin{array}{ccc} 3-\lambda & 4 & -10\\ 2 & 1-\lambda & -2\\ 2 & 2 &-5-\lambda\end{array}\right| =- (\lambda-1)(\lambda+1)^2.\nonumber$

Hence, the eigenvalues are $$\lambda_1=1$$ with multiplicity $$1$$ and $$\lambda_2=-1$$ with multiplicity $$2$$. Eigenvectors associated with $$\lambda_1=1$$ must satisfy $$(A-I){\bf x}={\bf 0}$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} 2 & 4 & -10 &\vdots & 0\\ 2& 0 & -2 &\vdots & 0\\ 2 & 2 & -6 & \vdots & 0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 0 & -1 &\vdots& 0\\ 0 & 1 & -2 &\vdots& 0\\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Hence, $$x_1 =x_3$$ and $$x_2 =2 x_3$$, where $$x_3$$ is arbitrary. Choosing $$x_3=1$$ yields the eigenvector

${\bf x}_1=\threecol121.\nonumber$

Therefore

${\bf y}_1 =\threecol121e^t\nonumber$

is a solution of Equation \ref{eq:10.5.7}. Eigenvectors associated with $$\lambda_2 =-1$$ satisfy $$(A+I){\bf x}={\bf 0}$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} 4 & 4 & -10 &\vdots & 0\\ 2 & 2 & -2 & \vdots & 0\\2 & 2 & -4 &\vdots & 0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 1 & 0 &\vdots& 0\\ 0 & 0 & 1 &\vdots& 0 \\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Hence, $$x_3=0$$ and $$x_1 =-x_2$$, where $$x_2$$ is arbitrary. Choosing $$x_2=1$$ yields the eigenvector

${\bf x}_2=\threecol{-1}10,\nonumber$

so

${\bf y}_2 =\threecol{-1}10e^{-t}\nonumber$

is a solution of Equation \ref{eq:10.5.7}. Since all the eigenvectors of $$A$$ associated with $$\lambda_2=-1$$ are multiples of $${\bf x}_2$$, we must now use Theorem $$\PageIndex{1}$$ to find a third solution of Equation \ref{eq:10.5.7} in the form

$\label{eq:10.5.8} {\bf y}_3={\bf u}e^{-t}+\threecol{-1}10te^{-t},$

where $${\bf u}$$ is a solution of $$(A+I){\bf u=x}_2$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} 4 & 4 & -10 &\vdots & -1\\ 2 & 2 & -2 & \vdots & 1\\ 2 & 2 & -4 &\vdots & 0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 1 & 0 &\vdots& 1\\ 0 & 0 & 1 &\vdots& {1\over2} \\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Hence, $$u_3=1/2$$ and $$u_1 =1-u_2$$, where $$u_2$$ is arbitrary. Choosing $$u_2=0$$ yields

${\bf u} =\threecol10{1\over2},\nonumber$

and substituting this into Equation \ref{eq:10.5.8} yields the solution

${\bf y}_3=\threecol201{e^{-t}\over2}+\threecol{-1}10te^{-t}\nonumber$

of Equation \ref{eq:10.5.7}. Since the Wronskian of $$\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}$$ at $$t=0$$ is

$\left|\begin{array}{rrr} 1&-1&1\\2&1&0\\1&0&{1\over2}\end{array}\right|={1\over2},\nonumber$

$$\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}$$ is a fundamental set of solutions of Equation \ref{eq:10.5.7}. Therefore the general solution of Equation \ref{eq:10.5.7} is

${\bf y}=c_1\threecol121e^t+c_2\threecol{-1}10e^{-t}+c_3\left (\threecol201{e^{-t}\over2}+\threecol{-1}10te^{-t}\right).\nonumber$
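The computations of Example $$\PageIndex{3}$$ can be verified numerically. This sketch (assuming numpy) checks the two eigenvector equations, the generalized-eigenvector equation $$(A+I){\bf u}={\bf x}_2$$, and the Wronskian value $$1/2$$ at $$t=0$$.

```python
import numpy as np

A = np.array([[3.0, 4.0, -10.0],
              [2.0, 1.0, -2.0],
              [2.0, 2.0, -5.0]])

x1 = np.array([1.0, 2.0, 1.0])      # eigenvector for lambda = 1
x2 = np.array([-1.0, 1.0, 0.0])     # eigenvector for lambda = -1
u = np.array([1.0, 0.0, 0.5])       # solves (A + I)u = x2

check1 = A @ x1 - x1                # should vanish
check2 = A @ x2 + x2                # should vanish
check3 = (A + np.eye(3)) @ u - x2   # should vanish

# Wronskian at t = 0: columns are y1(0) = x1, y2(0) = x2, y3(0) = u.
W0 = np.column_stack([x1, x2, u])
wronskian_at_0 = np.linalg.det(W0)  # 1/2
```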

Theorem $$\PageIndex{2}$$

Suppose the $$n\times n$$ matrix $$A$$ has an eigenvalue $$\lambda_1$$ of multiplicity $$\ge 3$$ and the associated eigenspace is one–dimensional$$;$$ that is$$,$$ all eigenvectors associated with $$\lambda_1$$ are scalar multiples of the eigenvector $${\bf x}.$$ Then there are infinitely many vectors $${\bf u}$$ such that

$\label{eq:10.5.9} (A-\lambda_1I){\bf u}={\bf x},$

and, if $${\bf u}$$ is any such vector$$,$$ there are infinitely many vectors $${\bf v}$$ such that

$\label{eq:10.5.10} (A-\lambda_1I){\bf v}={\bf u}.$

If $${\bf u}$$ satisfies Equation \ref{eq:10.5.9} and $${\bf v}$$ satisfies Equation \ref{eq:10.5.10}, then

\begin{aligned} {\bf y}_1 &={\bf x} e^{\lambda_1t},\\ {\bf y}_2&={\bf u}e^{\lambda_1t}+{\bf x} te^{\lambda_1t},\mbox{ and }\\ {\bf y}_3&={\bf v}e^{\lambda_1t}+{\bf u}te^{\lambda_1t}+{\bf x} {t^2e^{\lambda_1t}\over2}\end{aligned}\nonumber

are linearly independent solutions of $${\bf y}'=A{\bf y}$$.

Again, it is beyond the scope of this book to prove that there are vectors $${\bf u}$$ and $${\bf v}$$ that satisfy Equation \ref{eq:10.5.9} and Equation \ref{eq:10.5.10}. Theorem $$\PageIndex{1}$$ implies that $${\bf y}_1$$ and $${\bf y}_2$$ are solutions of $${\bf y}'=A{\bf y}$$. We leave the rest of the proof to you (Exercise 10.5.34).

Example $$\PageIndex{4}$$

Use Theorem $$\PageIndex{2}$$ to find the general solution of

$\label{eq:10.5.11} {\bf y}'=\left[\begin{array}{ccc}1&1&1 \\ 1&3&-1 \\ 0&2&2 \end{array} \right] {\bf y}.$

Solution

The characteristic polynomial of the coefficient matrix $$A$$ in Equation \ref{eq:10.5.11} is

$\left|\begin{array}{ccc} 1-\lambda & 1 & \phantom{-}1\\ 1 & 3-\lambda & -1\\ 0 & 2 & 2-\lambda\end{array}\right| =-(\lambda-2)^3.\nonumber$

Hence, $$\lambda_1=2$$ is an eigenvalue of multiplicity $$3$$. The associated eigenvectors satisfy $$(A-2I){\bf x=0}$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} -1 & 1 & 1 &\vdots & 0\\ 1& 1 & -1 &\vdots & 0\\ 0 & 2 & 0 & \vdots & 0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& 0\\ 0 & 1 & 0 &\vdots& 0 \\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Hence, $$x_1 =x_3$$ and $$x_2 = 0$$, so the eigenvectors are all scalar multiples of

${\bf x}_1=\threecol101.\nonumber$

Therefore

${\bf y}_1=\threecol101e^{2t}\nonumber$

is a solution of Equation \ref{eq:10.5.11}. We now find a second solution of Equation \ref{eq:10.5.11} in the form

${\bf y}_2={\bf u}e^{2t}+\threecol101te^{2t},\nonumber$

where $${\bf u}$$ satisfies $$(A-2I){\bf u=x}_1$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} -1 & 1 & 1 &\vdots & 1\\ 1& 1 & -1 &\vdots & 0\\ 0 & 2 & 0 & \vdots & 1\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& -{1\over2}\\ 0 & 1 & 0 &\vdots& {1\over2}\\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Letting $$u_3=0$$ yields $$u_1=-1/2$$ and $$u_2=1/2$$; hence,

${\bf u}={1\over2}\threecol{-1}10\nonumber$

and

${\bf y}_2=\threecol{-1}10{e^{2t}\over2}+\threecol101te^{2t}\nonumber$

is a solution of Equation \ref{eq:10.5.11}. We now find a third solution of Equation \ref{eq:10.5.11} in the form

${\bf y}_3={\bf v}e^{2t}+\threecol{-1}10{te^{2t}\over2}+\threecol101{t^2e^{2t}\over2}\nonumber$

where $${\bf v}$$ satisfies $$(A-2I){\bf v}={\bf u}$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} -1 & 1 & 1 &\vdots &-{1\over2}\\ 1& 1 & -1 &\vdots & {1\over2}\\ 0 & 2 & 0 & \vdots & 0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& {1\over2}\\ 0 & 1 & 0 &\vdots& 0\\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Letting $$v_3=0$$ yields $$v_1=1/2$$ and $$v_2=0$$; hence,

${\bf v}={1\over2}\threecol100.\nonumber$

Therefore

${\bf y}_3=\threecol100{e^{2t}\over2}+ \threecol{-1}10{te^{2t}\over2}+\threecol101{t^2e^{2t}\over2}\nonumber$

is a solution of Equation \ref{eq:10.5.11}. Since $${\bf y}_1$$, $${\bf y}_2$$, and $${\bf y}_3$$ are linearly independent by Theorem $$\PageIndex{2}$$, they form a fundamental set of solutions of Equation \ref{eq:10.5.11}. Therefore the general solution of Equation \ref{eq:10.5.11} is

\begin{aligned} {\bf y} = c_{1}\left[\begin{array}{c}{1}\\{0}\\{1}\end{array} \right]e^{2t}+c_{2}\left(\left[ \begin{array}{c}{-1}\\{1}\\{0}\end{array} \right]\frac{e^{2t}}{2}+\left[\begin{array}{c}{1}\\{0}\\{1}\end{array} \right] te^{2t} \right) + c_{3}\left(\left[\begin{array}{c}{1}\\{0}\\{0}\end{array} \right]\frac{e^{2t}}{2}+\left[\begin{array}{c}{-1}\\{1}\\{0}\end{array} \right]\frac{te^{2t}}{2}+\left[\begin{array}{c}{1}\\{0}\\{1}\end{array} \right]\frac{t^{2}e^{2t}}{2} \right) \end{aligned}\nonumber
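A numerical check of Example $$\PageIndex{4}$$ (a sketch, assuming numpy) verifies the chain of equations $$(A-2I){\bf x}={\bf 0}$$, $$(A-2I){\bf u}={\bf x}$$, $$(A-2I){\bf v}={\bf u}$$. Since $$\lambda_1=2$$ is the only eigenvalue and has multiplicity $$3$$, the Cayley-Hamilton theorem also forces $$(A-2I)^3={\bf 0}$$, which the sketch confirms.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 3.0, -1.0],
              [0.0, 2.0, 2.0]])

x = np.array([1.0, 0.0, 1.0])       # eigenvector for lambda = 2
u = np.array([-0.5, 0.5, 0.0])      # (A - 2I)u = x
v = np.array([0.5, 0.0, 0.0])       # (A - 2I)v = u

N = A - 2.0 * np.eye(3)

chain1 = N @ x                      # should vanish
chain2 = N @ u - x                  # should vanish
chain3 = N @ v - u                  # should vanish

# Characteristic polynomial is -(lambda - 2)^3, so N is nilpotent:
N3 = np.linalg.matrix_power(N, 3)   # should be the zero matrix
```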

Theorem $$\PageIndex{3}$$

Suppose the $$n\times n$$ matrix $$A$$ has an eigenvalue $$\lambda_1$$ of multiplicity $$\ge 3$$ and the associated eigenspace is two–dimensional; that is, all eigenvectors of $$A$$ associated with $$\lambda_1$$ are linear combinations of two linearly independent eigenvectors $${\bf x}_1$$ and $${\bf x}_2$$$$.$$ Then there are constants $$\alpha$$ and $$\beta$$ $$($$not both zero$$)$$ such that if

$\label{eq:10.5.12} {\bf x}_3=\alpha{\bf x}_1+\beta{\bf x}_2,$

then there are infinitely many vectors $${\bf u}$$ such that

$\label{eq:10.5.13} (A-\lambda_1I){\bf u}={\bf x}_3.$

If $${\bf u}$$ satisfies Equation \ref{eq:10.5.13}, then

$\label{eq:10.5.14} \begin{aligned} {\bf y}_1&={\bf x}_1e^{\lambda_1t},\\ {\bf y}_2&={\bf x}_2e^{\lambda_1t},\mbox{ and }\\ {\bf y}_3&={\bf u}e^{\lambda_1t}+{\bf x}_3te^{\lambda_1t}\end{aligned}$

are linearly independent solutions of $${\bf y}'=A{\bf y}.$$

We omit the proof of this theorem.

Example $$\PageIndex{5}$$

Use Theorem $$\PageIndex{3}$$ to find the general solution of

$\label{eq:10.5.15} {\bf y}'=\left[\begin{array}{ccc}{0}&{0}&{1}\\{-1}&{1}&{1}\\{-1}&{0}&{2}\end{array} \right]{\bf y}.$

Solution

The characteristic polynomial of the coefficient matrix $$A$$ in Equation \ref{eq:10.5.15} is

$\left|\begin{array}{ccc} -\lambda & 0 & 1\\ -1 & 1-\lambda & 1\\ -1 & 0 & 2-\lambda\end{array}\right| =-(\lambda-1)^3.\nonumber$

Hence, $$\lambda_1=1$$ is an eigenvalue of multiplicity $$3$$. The associated eigenvectors satisfy $$(A-I){\bf x=0}$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} -1 & 0 & 1 &\vdots & 0\\ -1& 0 & 1 &\vdots & 0\\ -1 & 0 & 1 & \vdots & 0\end{array}\right],\nonumber$

which is row equivalent to

$\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& 0\\ 0 & 0 & 0 &\vdots& 0 \\ 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber$

Hence, $$x_1 =x_3$$ and $$x_2$$ is arbitrary, so the eigenvectors are of the form

${\bf x}=\threecol{x_3}{x_2}{x_3}=x_3\threecol101+x_2\threecol010.\nonumber$

Therefore the vectors

$\label{eq:10.5.16} {\bf x}_1 =\threecol101\quad\mbox{and }\quad {\bf x}_2=\threecol010$

form a basis for the eigenspace, and

${\bf y}_1 =\threecol101e^t \quad \text{and} \quad {\bf y}_2=\threecol010e^t\nonumber$

are linearly independent solutions of Equation \ref{eq:10.5.15}. To find a third linearly independent solution of Equation \ref{eq:10.5.15}, we must find constants $$\alpha$$ and $$\beta$$ (not both zero) such that the system

$\label{eq:10.5.17} (A-I){\bf u}=\alpha{\bf x}_1+\beta{\bf x}_2$

has a solution $${\bf u}$$. The augmented matrix of this system is

$\left[\begin{array}{rrrcr} -1 & 0 & 1 &\vdots &\alpha\\ -1& 0 & 1 &\vdots &\beta\\ -1 & 0 & 1 & \vdots &\alpha\end{array}\right],\nonumber$

which is row equivalent to

$\label{eq:10.5.18} \left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& -\alpha\\ 0 & 0 & 0 &\vdots&\beta-\alpha\\ 0 & 0 & 0 &\vdots&0\end{array} \right].$

Therefore Equation \ref{eq:10.5.17} has a solution if and only if $$\beta=\alpha$$, where $$\alpha$$ is arbitrary. If $$\alpha=\beta=1$$ then Equation \ref{eq:10.5.12} and Equation \ref{eq:10.5.16} yield

${\bf x}_3={\bf x}_1+{\bf x}_2= \threecol101+\threecol010=\threecol111,\nonumber$

and the augmented matrix Equation \ref{eq:10.5.18} becomes

$\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& -1\\ 0 & 0 & 0 &\vdots& 0\\ 0 & 0 & 0 &\vdots&0\end{array} \right].\nonumber$

This implies that $$u_1=-1+u_3$$, while $$u_2$$ and $$u_3$$ are arbitrary. Choosing $$u_2=u_3=0$$ yields

${\bf u}=\threecol{-1}00.\nonumber$

Therefore Equation \ref{eq:10.5.14} implies that

${\bf y}_3={\bf u}e^t+{\bf x}_3te^t=\threecol{-1}00e^t+\threecol111te^t\nonumber$

is a solution of Equation \ref{eq:10.5.15}. Since $${\bf y}_1$$, $${\bf y}_2$$, and $${\bf y}_3$$ are linearly independent by Theorem $$\PageIndex{3}$$, they form a fundamental set of solutions for Equation \ref{eq:10.5.15}. Therefore the general solution of Equation \ref{eq:10.5.15} is

${\bf y}=c_1\threecol101e^t+c_2\threecol010e^t +c_3\left(\threecol{-1}00e^t+\threecol111te^t\right).\nonumber$
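The solvability condition $$\beta=\alpha$$ found in Example $$\PageIndex{5}$$ can be verified with rank computations (a sketch, assuming numpy): $$(A-I){\bf u}={\bf x}_1$$ and $$(A-I){\bf u}={\bf x}_2$$ are each inconsistent, but $$(A-I){\bf u}={\bf x}_1+{\bf x}_2$$ has the solution found above.

```python
import numpy as np

A = np.array([[0.0, 0.0, 1.0],
              [-1.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0]])

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 0.0])
B = A - np.eye(3)

# A linear system is inconsistent exactly when appending the right side
# to the coefficient matrix increases the rank.
r = np.linalg.matrix_rank
inconsistent_x1 = r(np.column_stack([B, x1])) > r(B)
inconsistent_x2 = r(np.column_stack([B, x2])) > r(B)

# But (A - I)u = x1 + x2 (alpha = beta = 1) is consistent, with u = (-1,0,0).
u = np.array([-1.0, 0.0, 0.0])
check = B @ u - (x1 + x2)           # should vanish
```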

## Geometric Properties of Solutions when $$n=2$$

We’ll now consider the geometric properties of solutions of a $$2\times2$$ constant coefficient system

$\label{eq:10.5.19} \twocol{y_1'}{y_2'}=\left[\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22} \end{array}\right]\twocol{y_1}{y_2}$

under the assumptions of this section; that is, when the matrix

$A=\left[\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22} \end{array}\right]\nonumber$

has a repeated eigenvalue $$\lambda_1$$ and the associated eigenspace is one-dimensional. In this case we know from Theorem $$\PageIndex{1}$$ that the general solution of Equation \ref{eq:10.5.19} is

$\label{eq:10.5.20} {\bf y}=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}),$

where $${\bf x}$$ is an eigenvector of $$A$$ and $${\bf u}$$ is any one of the infinitely many solutions of

$\label{eq:10.5.21} (A-\lambda_1I){\bf u}={\bf x}.$

We assume that $$\lambda_1\ne0$$.

Figure $$\PageIndex{1}$$: Positive and negative half-planes.

Let $$L$$ denote the line through the origin parallel to $${\bf x}$$. By a half-line of $$L$$ we mean either of the rays obtained by removing the origin from $$L$$. Equation \ref{eq:10.5.20} is a parametric equation of the half-line of $$L$$ in the direction of $${\bf x}$$ if $$c_1>0$$, or of the half-line of $$L$$ in the direction of $$-{\bf x}$$ if $$c_1<0$$. The origin is the trajectory of the trivial solution $${\bf y}\equiv{\bf 0}$$.

Henceforth, we assume that $$c_2\ne0$$. In this case, the trajectory of Equation \ref{eq:10.5.20} can't intersect $$L$$, since every point of $$L$$ is on a trajectory obtained by setting $$c_2=0$$. Therefore the trajectory of Equation \ref{eq:10.5.20} must lie entirely in one of the open half-planes bounded by $$L$$. Since the initial point $$(y_1(0),y_2(0))$$ defined by $${\bf y}(0)=c_1{\bf x}+c_2{\bf u}$$ is on the trajectory, we can determine which half-plane contains the trajectory from the sign of $$c_2$$, as shown in Figure $$\PageIndex{1}$$. For convenience we'll call the half-plane where $$c_2>0$$ the positive half-plane; similarly, the half-plane where $$c_2<0$$ is the negative half-plane. You should convince yourself (Exercise 10.5.35) that even though there are infinitely many vectors $${\bf u}$$ that satisfy Equation \ref{eq:10.5.21}, they all define the same positive and negative half-planes. In the figures, simply regard $${\bf u}$$ as an arrow pointing into the positive half-plane; we haven't attempted to give $${\bf u}$$ its proper length or direction in comparison with $${\bf x}$$. For our purposes here, only the relative orientation of $${\bf x}$$ and $${\bf u}$$ is important; that is, whether the positive half-plane is to the right of an observer facing the direction of $${\bf x}$$ (as in Figures $$\PageIndex{2}$$ and $$\PageIndex{5}$$), or to the left of the observer (as in Figures $$\PageIndex{3}$$ and $$\PageIndex{4}$$).

Multiplying Equation \ref{eq:10.5.20} by $$e^{-\lambda_1t}$$ yields

$e^{-\lambda_1t}{\bf y}(t)=c_1{\bf x}+c_2{\bf u}+c_2t {\bf x}.\nonumber$

Since the last term on the right is dominant when $$|t|$$ is large, this provides the following information on the direction of $${\bf y}(t)$$:

1. Along trajectories in the positive half-plane ($$c_2>0$$), the direction of $${\bf y}(t)$$ approaches the direction of $${\bf x}$$ as $$t\to\infty$$ and the direction of $$-{\bf x}$$ as $$t\to-\infty$$.
2. Along trajectories in the negative half-plane ($$c_2<0$$), the direction of $${\bf y}(t)$$ approaches the direction of $$-{\bf x}$$ as $$t\to\infty$$ and the direction of $${\bf x}$$ as $$t\to-\infty$$.
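These limiting directions can be confirmed numerically for the system of Example $$\PageIndex{1}$$. In this sketch (assuming numpy), the positive scalar $$e^{\lambda_1t}$$ is factored out before normalizing, so the direction can be evaluated at very large $$|t|$$ without overflow.

```python
import numpy as np

# Example 1's system: eigenvector x, with u satisfying (A - I)u = x.
x = np.array([5.0, 2.0])
u = np.array([0.5, 0.0])
c1, c2 = 1.0, 1.0                   # c2 > 0: positive half-plane

def direction(t):
    # e^{lambda_1 t} > 0 never affects the direction, so factor it out.
    w = c1 * x + c2 * (u + x * t)
    return w / np.linalg.norm(w)

x_hat = x / np.linalg.norm(x)

toward_x = direction(1e8)           # t -> +infinity: direction of x
toward_minus_x = direction(-1e8)    # t -> -infinity: direction of -x
```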

Since $\lim_{t\to\infty}\|{\bf y}(t)\|=\infty\quad \text{and} \quad \lim_{t\to-\infty}{\bf y}(t)={\bf 0}\quad \text{if} \quad \lambda_1>0,\nonumber$

or

$\lim_{t\to-\infty}\|{\bf y}(t)\|=\infty \quad \text{and} \quad \lim_{t\to\infty}{\bf y}(t)={\bf 0} \quad \text{if} \quad \lambda_1<0,\nonumber$ there are four possible patterns for the trajectories of Equation \ref{eq:10.5.19}, depending upon the signs of $$c_2$$ and $$\lambda_1$$. Figures $$\PageIndex{2}$$ - $$\PageIndex{5}$$ illustrate these patterns, and reveal the following principle:

If $$\lambda_1$$ and $$c_2$$ have the same sign then the direction of the trajectory approaches the direction of $$-{\bf x}$$ as $$\|{\bf y} \|\to0$$ and the direction of $${\bf x}$$ as $$\|{\bf y}\|\to\infty$$. If $$\lambda_1$$ and $$c_2$$ have opposite signs then the direction of the trajectory approaches the direction of $${\bf x}$$ as $$\|{\bf y} \|\to0$$ and the direction of $$-{\bf x}$$ as $$\|{\bf y}\|\to\infty$$.

Figure $$\PageIndex{2}$$: Positive eigenvalue; motion away from the origin.

Figure $$\PageIndex{3}$$: Positive eigenvalue; motion away from the origin.

Figure $$\PageIndex{4}$$: Negative eigenvalue; motion toward the origin.

Figure $$\PageIndex{5}$$: Negative eigenvalue; motion toward the origin.