10.5: Constant Coefficient Homogeneous Systems II
We saw in Section 10.4 that if an n\times n constant matrix A has n real eigenvalues \lambda_1, \lambda_2, \dots, \lambda_n (which need not be distinct) with associated linearly independent eigenvectors {\bf x}_1, {\bf x}_2, \dots, {\bf x}_n, then the general solution of {\bf y}'=A{\bf y} is
{\bf y}=c_1{\bf x}_1e^{\lambda_1t}+c_2{\bf x}_2e^{\lambda_2t}+\cdots+c_n{\bf x}_ne^{\lambda_nt}.\nonumber
In this section we consider the case where A has n real eigenvalues, but does not have n linearly independent eigenvectors. It is shown in linear algebra that this occurs if and only if A has at least one eigenvalue of multiplicity r>1 such that the associated eigenspace has dimension less than r. In this case A is said to be defective. Since it is beyond the scope of this book to give a complete analysis of systems with defective coefficient matrices, we will restrict our attention to some commonly occurring special cases.
Example 10.5.1
Show that the system
\label{eq:10.5.1} {\bf y}'=\left[\begin{array}{cc}{11}&{-25}\\[4pt]{4}&{-9}\end{array} \right]{\bf y}
does not have a fundamental set of solutions of the form \{{\bf x}_1e^{\lambda_1t},{\bf x}_2e^{\lambda_2t}\}, where \lambda_1 and \lambda_2 are eigenvalues of the coefficient matrix A of Equation \ref{eq:10.5.1} and {\bf x}_1 and {\bf x}_2 are associated linearly independent eigenvectors.
Solution
The characteristic polynomial of A is
\left|\begin{array}{cc} 11-\lambda & -25\\[4pt] 4 & -9-\lambda \end{array}\right| =(\lambda-11)(\lambda+9)+100=\lambda^2-2\lambda+1=(\lambda-1)^2.\nonumber
Hence, \lambda=1 is the only eigenvalue of A. The augmented matrix of the system (A-I){\bf x}={\bf 0} is
\left[\begin{array}{rrcr}10&-25&\vdots&0\\[4pt]4&-10&\vdots&0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrcr}1&-{5\over2}&\vdots&0\\[4pt]0&0&\vdots&0\end{array}\right].\nonumber
Hence, x_1=5x_2/2 where x_2 is arbitrary. Therefore all eigenvectors of A are scalar multiples of {\bf x}_1=\twocol52, so A does not have a set of two linearly independent eigenvectors.
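These computations are easy to confirm with a computer algebra system. The following is a minimal sketch (assuming SymPy is available; SymPy is not part of the text) showing that \lambda=1 is a double eigenvalue whose eigenspace is one-dimensional:

```python
import sympy as sp

# Coefficient matrix from Example 10.5.1.
A = sp.Matrix([[11, -25], [4, -9]])

# eigenvects() returns triples (eigenvalue, algebraic multiplicity,
# basis of the associated eigenspace).
(lam, mult, basis), = A.eigenvects()

print(lam, mult, len(basis))  # 1 2 1: double eigenvalue, 1-dim eigenspace
print(basis[0].T)             # basis vector proportional to (5, 2)
```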
From Example 10.5.1 , we know that all scalar multiples of {\bf y}_1=\twocol52e^t are solutions of Equation \ref{eq:10.5.1}; however, to find the general solution we must find a second solution {\bf y}_2 such that \{{\bf y}_1,{\bf y}_2\} is linearly independent. Based on your recollection of the procedure for solving a constant coefficient scalar equation
ay''+by'+cy=0\nonumber
in the case where the characteristic polynomial has a repeated root, you might expect to obtain a second solution of Equation \ref{eq:10.5.1} by multiplying the first solution by t. However, this yields {\bf y}_2=\twocol52te^t, which does not work, since
{\bf y}_2'=\twocol52(te^t+e^t),\quad\mbox{while}\quad \left[\begin{array}{cc}11&-25\\[4pt]4&-9\end{array}\right]{\bf y}_2=\twocol52te^t.\nonumber
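The failure of this guess can be checked symbolically. A sketch (assuming SymPy is available) computes the residual {\bf y}_2'-A{\bf y}_2 for the naive candidate and shows it is not the zero vector:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[11, -25], [4, -9]])
x = sp.Matrix([5, 2])  # eigenvector for lambda = 1

# Naive candidate: multiply the first solution by t.
y2 = x * t * sp.exp(t)
residual = sp.simplify(y2.diff(t) - A * y2)

# Since A x = x, the residual is x e^t, not the zero vector.
print(residual)
```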
The next theorem shows what to do in this situation.
Theorem 10.5.1
Suppose the n\times n matrix A has an eigenvalue \lambda_1 of multiplicity \ge2 and the associated eigenspace has dimension 1; that is, all \lambda_1-eigenvectors of A are scalar multiples of an eigenvector {\bf x}. Then there are infinitely many vectors {\bf u} such that
\label{eq:10.5.2} (A-\lambda_1I){\bf u}={\bf x}.
Moreover, if {\bf u} is any such vector then
\label{eq:10.5.3} {\bf y}_1={\bf x}e^{\lambda_1t}\quad\mbox{and}\quad {\bf y}_2={\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}
are linearly independent solutions of {\bf y}'=A{\bf y}.
A complete proof of this theorem is beyond the scope of this book. The difficulty is in proving that there’s a vector {\bf u} satisfying Equation \ref{eq:10.5.2}, since \det(A-\lambda_1I)=0. We’ll take this without proof and verify the other assertions of the theorem. We already know that {\bf y}_1 in Equation \ref{eq:10.5.3} is a solution of {\bf y}'=A{\bf y}. To see that {\bf y}_2 is also a solution, we compute
\begin{aligned} {\bf y}_2'-A{\bf y}_2&=\lambda_1{\bf u}e^{\lambda_1t}+{\bf x}e^{\lambda_1t} +\lambda_1{\bf x}te^{\lambda_1t}-A{\bf u}e^{\lambda_1t}-A{\bf x}te^{\lambda_1t}\\[4pt] &=(\lambda_1{\bf u}+{\bf x}-A{\bf u})e^{\lambda_1t}+(\lambda_1{\bf x}-A{\bf x})te^{\lambda_1t}.\end{aligned}\nonumber
Since A{\bf x}=\lambda_1{\bf x}, this can be written as
{\bf y}_2'-A{\bf y}_2=-\left((A-\lambda_1I){\bf u}-{\bf x}\right)e^{\lambda_1t},\nonumber
and now Equation \ref{eq:10.5.2} implies that {\bf y}_2'=A{\bf y}_2. To see that {\bf y}_1 and {\bf y}_2 are linearly independent, suppose c_1 and c_2 are constants such that
\label{eq:10.5.4} c_1{\bf y}_1+c_2{\bf y}_2=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t})={\bf 0}.
We must show that c_1=c_2=0. Multiplying Equation \ref{eq:10.5.4} by e^{-\lambda_1t} shows that
\label{eq:10.5.5} c_1{\bf x}+c_2({\bf u}+{\bf x}t)={\bf 0}.
By differentiating this with respect to t, we see that c_2{\bf x}={\bf 0}, which implies c_2=0, because {\bf x}\ne{\bf 0}. Substituting c_2=0 into Equation \ref{eq:10.5.5} yields c_1{\bf x}={\bf 0}, which implies that c_1=0, again because {\bf x}\ne{\bf 0}.
Example 10.5.2
Use Theorem 10.5.1 to find the general solution of the system
\label{eq:10.5.6} {\bf y}'=\left[\begin{array}{cc}{11}&{-25}\\[4pt]{4}&{-9}\end{array} \right]{\bf y}
considered in Example 10.5.1 .
Solution
In Example 10.5.1 we saw that \lambda_1=1 is an eigenvalue of multiplicity 2 of the coefficient matrix A in Equation \ref{eq:10.5.6}, and that all of the eigenvectors of A are multiples of
{\bf x}=\twocol52.\nonumber
Therefore
{\bf y}_1=\twocol52e^t\nonumber
is a solution of Equation \ref{eq:10.5.6}. From Theorem 10.5.1 , a second solution is given by {\bf y}_2={\bf u}e^t+{\bf x}te^t, where (A-I){\bf u}={\bf x}. The augmented matrix of this system is
\left[\begin{array}{rrcr}10&-25&\vdots&5\\[4pt]4&-10&\vdots&2\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrcr}1&-\frac{5}{2}&\vdots&\frac{1}{2}\\[4pt]0&0&\vdots&0\end{array}\right].\nonumber
Therefore the components of {\bf u} must satisfy
u_1-{5\over2}u_2={1\over2},\nonumber
where u_2 is arbitrary. We choose u_2=0, so that u_1=1/2 and
{\bf u}=\twocol{1\over2}0.\nonumber
Thus,
{\bf y}_2=\twocol10{e^t\over2}+\twocol52te^t.\nonumber
Since {\bf y}_1 and {\bf y}_2 are linearly independent by Theorem 10.5.1 , they form a fundamental set of solutions of Equation \ref{eq:10.5.6}. Therefore the general solution of Equation \ref{eq:10.5.6} is
{\bf y}=c_1\twocol52e^t+c_2\left(\twocol10{e^t\over2}+\twocol52te^t\right).\nonumber
Note that choosing the arbitrary constant u_2 to be nonzero is equivalent to adding a scalar multiple of {\bf y}_1 to the second solution {\bf y}_2 (Exercise 10.5.33).
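The computation in this example can also be checked symbolically. A sketch (assuming SymPy is available; the choice u_2=0 follows the text):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[11, -25], [4, -9]])
x = sp.Matrix([5, 2])                  # eigenvector for lambda = 1
u = sp.Matrix([sp.Rational(1, 2), 0])  # generalized eigenvector with u2 = 0

# u satisfies (A - I)u = x, as required by Theorem 10.5.1.
assert (A - sp.eye(2)) * u == x

# Verify that y2 = u e^t + x t e^t solves y' = A y.
y2 = u * sp.exp(t) + x * t * sp.exp(t)
print(sp.simplify(y2.diff(t) - A * y2))  # zero vector
```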
Example 10.5.3
Find the general solution of
\label{eq:10.5.7} {\bf y}'=\left[\begin{array}{ccc}{3}&{4}&{-10}\\[4pt]{2}&{1}&{-2}\\[4pt]{2}&{2}&{-5}\end{array} \right] {\bf y}.
Solution
The characteristic polynomial of the coefficient matrix A in Equation \ref{eq:10.5.7} is
\left|\begin{array}{ccc} 3-\lambda & 4 & -10\\[4pt] 2 & 1-\lambda & -2\\[4pt] 2 & 2 &-5-\lambda\end{array}\right| =- (\lambda-1)(\lambda+1)^2.\nonumber
Hence, the eigenvalues are \lambda_1=1 with multiplicity 1 and \lambda_2=-1 with multiplicity 2. Eigenvectors associated with \lambda_1=1 must satisfy (A-I){\bf x}={\bf 0}. The augmented matrix of this system is
\left[\begin{array}{rrrcr} 2 & 4 & -10 &\vdots & 0\\[4pt] 2& 0 & -2 &\vdots & 0\\[4pt] 2 & 2 & -6 & \vdots & 0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 0 & -1 &\vdots& 0\\[4pt] 0 & 1 & -2 &\vdots& 0\\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Hence, x_1 =x_3 and x_2 =2 x_3, where x_3 is arbitrary. Choosing x_3=1 yields the eigenvector
{\bf x}_1=\threecol121.\nonumber
Therefore
{\bf y}_1 =\threecol121e^t\nonumber
is a solution of Equation \ref{eq:10.5.7}. Eigenvectors associated with \lambda_2 =-1 satisfy (A+I){\bf x}={\bf 0}. The augmented matrix of this system is
\left[\begin{array}{rrrcr} 4 & 4 & -10 &\vdots & 0\\[4pt] 2 & 2 & -2 & \vdots & 0\\[4pt]2 & 2 & -4 &\vdots & 0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 1 & 0 &\vdots& 0\\[4pt] 0 & 0 & 1 &\vdots& 0 \\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Hence, x_3=0 and x_1 =-x_2, where x_2 is arbitrary. Choosing x_2=1 yields the eigenvector
{\bf x}_2=\threecol{-1}10,\nonumber
so
{\bf y}_2 =\threecol{-1}10e^{-t}\nonumber
is a solution of Equation \ref{eq:10.5.7}. Since all the eigenvectors of A associated with \lambda_2=-1 are multiples of {\bf x}_2, we must now use Theorem 10.5.1 to find a third solution of Equation \ref{eq:10.5.7} in the form
\label{eq:10.5.8} {\bf y}_3={\bf u}e^{-t}+\threecol{-1}10te^{-t},
where {\bf u} is a solution of (A+I){\bf u=x}_2. The augmented matrix of this system is
\left[\begin{array}{rrrcr} 4 & 4 & -10 &\vdots & -1\\[4pt] 2 & 2 & -2 & \vdots & 1\\[4pt] 2 & 2 & -4 &\vdots & 0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 1 & 0 &\vdots& 1\\[4pt] 0 & 0 & 1 &\vdots& {1\over2} \\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Hence, u_3=1/2 and u_1 =1-u_2, where u_2 is arbitrary. Choosing u_2=0 yields
{\bf u} =\threecol10{1\over2},\nonumber
and substituting this into Equation \ref{eq:10.5.8} yields the solution
{\bf y}_3=\threecol201{e^{-t}\over2}+\threecol{-1}10te^{-t}\nonumber
of Equation \ref{eq:10.5.7}. Since the Wronskian of \{{\bf y}_1,{\bf y}_2,{\bf y}_3\} at t=0 is
\left|\begin{array}{rrr} 1&-1&1\\[4pt]2&1&0\\[4pt]1&0&{1\over2}\end{array}\right|={1\over2},\nonumber
\{{\bf y}_1,{\bf y}_2,{\bf y}_3\} is a fundamental set of solutions of Equation \ref{eq:10.5.7}. Therefore the general solution of Equation \ref{eq:10.5.7} is
{\bf y}=c_1\threecol121e^t+c_2\threecol{-1}10e^{-t}+c_3\left (\threecol201{e^{-t}\over2}+\threecol{-1}10te^{-t}\right).\nonumber
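As a check on this example, the following sketch (assuming SymPy is available) confirms that \lambda_2=-1 is defective and that {\bf y}_3 solves the system:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, 4, -10], [2, 1, -2], [2, 2, -5]])

# Map each eigenvalue to (algebraic multiplicity, eigenspace dimension):
# lambda = 1 is simple; lambda = -1 has multiplicity 2 but a 1-dim eigenspace.
evs = {lam: (mult, len(basis)) for lam, mult, basis in A.eigenvects()}
print(evs)

# Generalized eigenvector u from the text: (A + I)u = x2.
x2 = sp.Matrix([-1, 1, 0])
u = sp.Matrix([1, 0, sp.Rational(1, 2)])
assert (A + sp.eye(3)) * u == x2

# Third solution y3 = u e^{-t} + x2 t e^{-t} solves y' = A y.
y3 = u * sp.exp(-t) + x2 * t * sp.exp(-t)
print(sp.simplify(y3.diff(t) - A * y3))  # zero vector
```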
Theorem 10.5.2
Suppose the n\times n matrix A has an eigenvalue \lambda_1 of multiplicity \ge 3 and the associated eigenspace is one–dimensional; that is, all eigenvectors associated with \lambda_1 are scalar multiples of the eigenvector {\bf x}. Then there are infinitely many vectors {\bf u} such that
\label{eq:10.5.9} (A-\lambda_1I){\bf u}={\bf x},
and, if {\bf u} is any such vector, there are infinitely many vectors {\bf v} such that
\label{eq:10.5.10} (A-\lambda_1I){\bf v}={\bf u}.
If {\bf u} satisfies Equation \ref{eq:10.5.9} and {\bf v} satisfies Equation \ref{eq:10.5.10}, then
\begin{aligned} {\bf y}_1 &={\bf x} e^{\lambda_1t},\\[4pt] {\bf y}_2&={\bf u}e^{\lambda_1t}+{\bf x} te^{\lambda_1t},\mbox{ and }\\[4pt] {\bf y}_3&={\bf v}e^{\lambda_1t}+{\bf u}te^{\lambda_1t}+{\bf x} {t^2e^{\lambda_1t}\over2}\end{aligned}\nonumber
are linearly independent solutions of {\bf y}'=A{\bf y}.
Again, it is beyond the scope of this book to prove that there are vectors {\bf u} and {\bf v} that satisfy Equation \ref{eq:10.5.9} and Equation \ref{eq:10.5.10}. Theorem 10.5.1 implies that {\bf y}_1 and {\bf y}_2 are solutions of {\bf y}'=A{\bf y}. We leave the rest of the proof to you (Exercise 10.5.34).
Example 10.5.4
Use Theorem 10.5.2 to find the general solution of
\label{eq:10.5.11} {\bf y}'=\left[\begin{array}{ccc}1&1&1 \\[4pt] 1&3&-1 \\[4pt] 0&2&2 \end{array} \right] {\bf y}.
Solution
The characteristic polynomial of the coefficient matrix A in Equation \ref{eq:10.5.11} is
\left|\begin{array}{ccc} 1-\lambda & 1 & \phantom{-}1\\[4pt] 1 & 3-\lambda & -1\\[4pt] 0 & 2 & 2-\lambda\end{array}\right| =-(\lambda-2)^3.\nonumber
Hence, \lambda_1=2 is an eigenvalue of multiplicity 3. The associated eigenvectors satisfy (A-2I){\bf x=0}. The augmented matrix of this system is
\left[\begin{array}{rrrcr} -1 & 1 & 1 &\vdots & 0\\[4pt] 1& 1 & -1 &\vdots & 0\\[4pt] 0 & 2 & 0 & \vdots & 0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& 0\\[4pt] 0 & 1 & 0 &\vdots& 0 \\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Hence, x_1 =x_3 and x_2 = 0, so the eigenvectors are all scalar multiples of
{\bf x}_1=\threecol101.\nonumber
Therefore
{\bf y}_1=\threecol101e^{2t}\nonumber
is a solution of Equation \ref{eq:10.5.11}. We now find a second solution of Equation \ref{eq:10.5.11} in the form
{\bf y}_2={\bf u}e^{2t}+\threecol101te^{2t},\nonumber
where {\bf u} satisfies (A-2I){\bf u=x}_1. The augmented matrix of this system is
\left[\begin{array}{rrrcr} -1 & 1 & 1 &\vdots & 1\\[4pt] 1& 1 & -1 &\vdots & 0\\[4pt] 0 & 2 & 0 & \vdots & 1\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& -{1\over2}\\[4pt] 0 & 1 & 0 &\vdots& {1\over2}\\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Letting u_3=0 yields u_1=-1/2 and u_2=1/2; hence,
{\bf u}={1\over2}\threecol{-1}10\nonumber
and
{\bf y}_2=\threecol{-1}10{e^{2t}\over2}+\threecol101te^{2t}\nonumber
is a solution of Equation \ref{eq:10.5.11}. We now find a third solution of Equation \ref{eq:10.5.11} in the form
{\bf y}_3={\bf v}e^{2t}+\threecol{-1}10{te^{2t}\over2}+\threecol101{t^2e^{2t}\over2}\nonumber
where {\bf v} satisfies (A-2I){\bf v}={\bf u}. The augmented matrix of this system is
\left[\begin{array}{rrrcr} -1 & 1 & 1 &\vdots &-{1\over2}\\[4pt] 1& 1 & -1 &\vdots & {1\over2}\\[4pt] 0 & 2 & 0 & \vdots & 0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& {1\over2}\\[4pt] 0 & 1 & 0 &\vdots& 0\\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Letting v_3=0 yields v_1=1/2 and v_2=0; hence,
{\bf v}={1\over2}\threecol100.\nonumber
Therefore
{\bf y}_3=\threecol100{e^{2t}\over2}+ \threecol{-1}10{te^{2t}\over2}+\threecol101{t^2e^{2t}\over2}\nonumber
is a solution of Equation \ref{eq:10.5.11}. Since {\bf y}_1, {\bf y}_2, and {\bf y}_3 are linearly independent by Theorem 10.5.2 , they form a fundamental set of solutions of Equation \ref{eq:10.5.11}. Therefore the general solution of Equation \ref{eq:10.5.11} is
\begin{aligned} {\bf y} = c_{1}\left[\begin{array}{c}{1}\\[4pt]{0}\\[4pt]{1}\end{array} \right]e^{2t}+c_{2}\left(\left[ \begin{array}{c}{-1}\\[4pt]{1}\\[4pt]{0}\end{array} \right]\frac{e^{2t}}{2}+\left[\begin{array}{c}{1}\\[4pt]{0}\\[4pt]{1}\end{array} \right] te^{2t} \right) + c_{3}\left(\left[\begin{array}{c}{1}\\[4pt]{0}\\[4pt]{0}\end{array} \right]\frac{e^{2t}}{2}+\left[\begin{array}{c}{-1}\\[4pt]{1}\\[4pt]{0}\end{array} \right]\frac{te^{2t}}{2}+\left[\begin{array}{c}{1}\\[4pt]{0}\\[4pt]{1}\end{array} \right]\frac{t^{2}e^{2t}}{2} \right). \end{aligned}\nonumber
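The chain of vectors {\bf x}, {\bf u}, {\bf v} computed in this example can be verified directly; a sketch (assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1, 1], [1, 3, -1], [0, 2, 2]])
x = sp.Matrix([1, 0, 1])
u = sp.Rational(1, 2) * sp.Matrix([-1, 1, 0])
v = sp.Rational(1, 2) * sp.Matrix([1, 0, 0])

# The chain relations behind Theorem 10.5.2 with lambda_1 = 2:
# (A - 2I)x = 0, (A - 2I)u = x, (A - 2I)v = u.
N = A - 2 * sp.eye(3)
assert N * x == sp.zeros(3, 1) and N * u == x and N * v == u

# y3 = v e^{2t} + u t e^{2t} + x t^2 e^{2t}/2 solves y' = A y.
y3 = (v + u * t + x * t**2 / 2) * sp.exp(2 * t)
print(sp.simplify(y3.diff(t) - A * y3))  # zero vector
```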
Theorem 10.5.3
Suppose the n\times n matrix A has an eigenvalue \lambda_1 of multiplicity \ge 3 and the associated eigenspace is two–dimensional; that is, all eigenvectors of A associated with \lambda_1 are linear combinations of two linearly independent eigenvectors {\bf x}_1 and {\bf x}_2. Then there are constants \alpha and \beta (not both zero) such that if
\label{eq:10.5.12} {\bf x}_3=\alpha{\bf x}_1+\beta{\bf x}_2,
then there are infinitely many vectors {\bf u} such that
\label{eq:10.5.13} (A-\lambda_1I){\bf u}={\bf x}_3.
If {\bf u} satisfies Equation \ref{eq:10.5.13}, then
\label{eq:10.5.14} \begin{array}{ll} {\mathbf{y}_{1}}&{=\mathbf{x}_{1}e^{\lambda _{1}t}}\\[4pt] {\mathbf{y}_{2}}&{=\mathbf{x}_{2}e^{\lambda _{1}t},\ \text{and}}\\[4pt] {\mathbf{y}_{3}}&{=\mathbf{u}e^{\lambda _{1}t}+\mathbf{x}_{3}te^{\lambda _{1}t},} \end{array}
are linearly independent solutions of {\bf y}'=A{\bf y}.
We omit the proof of this theorem.
Example 10.5.5
Use Theorem 10.5.3 to find the general solution of
\label{eq:10.5.15} {\bf y}'=\left[\begin{array}{ccc}{0}&{0}&{1}\\[4pt]{-1}&{1}&{1}\\[4pt]{-1}&{0}&{2}\end{array} \right]{\bf y}.
Solution
The characteristic polynomial of the coefficient matrix A in Equation \ref{eq:10.5.15} is
\left|\begin{array}{ccc} -\lambda & 0 & 1\\[4pt] -1 & 1-\lambda & 1\\[4pt] -1 & 0 & 2-\lambda\end{array}\right| =-(\lambda-1)^3.\nonumber
Hence, \lambda_1=1 is an eigenvalue of multiplicity 3. The associated eigenvectors satisfy (A-I){\bf x=0}. The augmented matrix of this system is
\left[\begin{array}{rrrcr} -1 & 0 & 1 &\vdots & 0\\[4pt] -1& 0 & 1 &\vdots & 0\\[4pt] -1 & 0 & 1 & \vdots & 0\end{array}\right],\nonumber
which is row equivalent to
\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& 0\\[4pt] 0 & 0 & 0 &\vdots& 0 \\[4pt] 0 & 0 & 0 &\vdots&0\end{array}\right].\nonumber
Hence, x_1 =x_3 and x_2 is arbitrary, so the eigenvectors are of the form
{\bf x}_1=\threecol{x_3}{x_2}{x_3}=x_3\threecol101+x_2\threecol010.\nonumber
Therefore the vectors
\label{eq:10.5.16} {\bf x}_1 =\threecol101\quad\mbox{and }\quad {\bf x}_2=\threecol010
form a basis for the eigenspace, and
{\bf y}_1 =\threecol101e^t \quad \text{and} \quad {\bf y}_2=\threecol010e^t\nonumber
are linearly independent solutions of Equation \ref{eq:10.5.15}. To find a third linearly independent solution of Equation \ref{eq:10.5.15}, we must find constants \alpha and \beta (not both zero) such that the system
\label{eq:10.5.17} (A-I){\bf u}=\alpha{\bf x}_1+\beta{\bf x}_2
has a solution {\bf u}. The augmented matrix of this system is
\left[\begin{array}{rrrcr} -1 & 0 & 1 &\vdots &\alpha\\[4pt] -1& 0 & 1 &\vdots &\beta\\[4pt] -1 & 0 & 1 & \vdots &\alpha\end{array}\right],\nonumber
which is row equivalent to
\label{eq:10.5.18} \left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& -\alpha\\[4pt] 0 & 0 & 0 &\vdots&\beta-\alpha\\[4pt] 0 & 0 & 0 &\vdots&0\end{array} \right].
Therefore Equation \ref{eq:10.5.17} has a solution if and only if \beta=\alpha, where \alpha is arbitrary. If \alpha=\beta=1 then Equation \ref{eq:10.5.12} and Equation \ref{eq:10.5.16} yield
{\bf x}_3={\bf x}_1+{\bf x}_2= \threecol101+\threecol010=\threecol111,\nonumber
and the augmented matrix Equation \ref{eq:10.5.18} becomes
\left[\begin{array}{rrrcr} 1 & 0 &- 1 &\vdots& -1\\[4pt] 0 & 0 & 0 &\vdots& 0\\[4pt] 0 & 0 & 0 &\vdots&0\end{array} \right].\nonumber
This implies that u_1=-1+u_3, while u_2 and u_3 are arbitrary. Choosing u_2=u_3=0 yields
{\bf u}=\threecol{-1}00.\nonumber
Therefore Equation \ref{eq:10.5.14} implies that
{\bf y}_3={\bf u}e^t+{\bf x}_3te^t=\threecol{-1}00e^t+\threecol111te^t\nonumber
is a solution of Equation \ref{eq:10.5.15}. Since {\bf y}_1, {\bf y}_2, and {\bf y}_3 are linearly independent by Theorem 10.5.3 , they form a fundamental set of solutions for Equation \ref{eq:10.5.15}. Therefore the general solution of Equation \ref{eq:10.5.15} is
{\bf y}=c_1\threecol101e^t+c_2\threecol010e^t +c_3\left(\threecol{-1}00e^t+\threecol111te^t\right).\nonumber
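The solvability condition \beta=\alpha found above is a rank condition: Equation \ref{eq:10.5.17} has a solution exactly when the augmented matrix has the same rank as A-I. A sketch of the check (assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 0, 1], [-1, 1, 1], [-1, 0, 2]])
M = A - sp.eye(3)
x1 = sp.Matrix([1, 0, 1])
x2 = sp.Matrix([0, 1, 0])

# (A - I)u = alpha*x1 + beta*x2 is solvable exactly when appending the
# right-hand side does not raise the rank, i.e. when beta = alpha.
print(M.rank())                    # 1
print(M.row_join(x1 + x2).rank())  # 1: alpha = beta = 1 is solvable
print(M.row_join(x1 - x2).rank())  # 2: alpha = 1, beta = -1 is not

# With alpha = beta = 1, u = (-1, 0, 0) from the text gives y3.
x3 = x1 + x2
u = sp.Matrix([-1, 0, 0])
assert M * u == x3
y3 = u * sp.exp(t) + x3 * t * sp.exp(t)
print(sp.simplify(y3.diff(t) - A * y3))  # zero vector
```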
Geometric Properties of Solutions when n=2
We’ll now consider the geometric properties of solutions of a 2\times2 constant coefficient system
\label{eq:10.5.19} \twocol{y_1'}{y_2'}=\left[\begin{array}{cc}a_{11}&a_{12}\\[4pt]a_{21}&a_{22} \end{array}\right]\twocol{y_1}{y_2}
under the assumptions of this section; that is, when the matrix
A=\left[\begin{array}{cc}a_{11}&a_{12}\\[4pt]a_{21}&a_{22} \end{array}\right]\nonumber
has a repeated eigenvalue \lambda_1 and the associated eigenspace is one-dimensional. In this case we know from Theorem 10.5.1 that the general solution of Equation \ref{eq:10.5.19} is
\label{eq:10.5.20} {\bf y}=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}),
where {\bf x} is an eigenvector of A and {\bf u} is any one of the infinitely many solutions of
\label{eq:10.5.21} (A-\lambda_1I){\bf u}={\bf x}.
We assume that \lambda_1\ne 0.
Let L denote the line through the origin parallel to {\bf x}. By a half-line of L we mean either of the rays obtained by removing the origin from L. If c_2=0, Equation \ref{eq:10.5.20} is a parametric equation of the half-line of L in the direction of {\bf x} if c_1>0, or of the half-line of L in the direction of -{\bf x} if c_1<0. The origin is the trajectory of the trivial solution {\bf y}\equiv{\bf 0}.
Henceforth, we assume that c_2\ne0. In this case, the trajectory of Equation \ref{eq:10.5.20} can’t intersect L, since every point of L is on a trajectory obtained by setting c_2=0. Therefore the trajectory of Equation \ref{eq:10.5.20} must lie entirely in one of the open half-planes bounded by L. Since the initial point (y_1(0),y_2(0)) defined by {\bf y}(0)=c_1{\bf x}+c_2{\bf u} is on the trajectory, we can determine which half-plane contains the trajectory from the sign of c_2, as shown in Figure 10.5.1 . For convenience we’ll call the half-plane where c_2>0 the positive half-plane. Similarly, the half-plane where c_2<0 is the negative half-plane. You should convince yourself that even though there are infinitely many vectors {\bf u} that satisfy Equation \ref{eq:10.5.21}, they all define the same positive and negative half-planes. In the figures simply regard {\bf u} as an arrow pointing to the positive half-plane, since we haven’t attempted to give {\bf u} its proper length or direction in comparison with {\bf x}. For our purposes here, only the relative orientation of {\bf x} and {\bf u} is important; that is, whether the positive half-plane is to the right of an observer facing the direction of {\bf x} (as in Figures 10.5.2 and 10.5.5 ), or to the left of the observer (as in Figures 10.5.3 and 10.5.4 ).
Multiplying Equation \ref{eq:10.5.20} by e^{-\lambda_1t} yields
e^{-\lambda_1t}{\bf y}(t)=c_1{\bf x}+c_2{\bf u}+c_2t {\bf x}.\nonumber
Since the last term on the right is dominant when |t| is large, this provides the following information on the direction of {\bf y}(t):
- Along trajectories in the positive half-plane (c_2>0), the direction of {\bf y}(t) approaches the direction of {\bf x} as t\to\infty and the direction of -{\bf x} as t\to-\infty.
- Along trajectories in the negative half-plane (c_2<0), the direction of {\bf y}(t) approaches the direction of -{\bf x} as t\to\infty and the direction of {\bf x} as t\to-\infty.
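These direction claims are easy to test numerically for the system of Example 10.5.1, where \lambda_1=1, {\bf x}=\twocol52, and {\bf u}=\twocol{1\over2}0. A sketch (assuming NumPy is available):

```python
import numpy as np

A = np.array([[11.0, -25.0], [4.0, -9.0]])
x = np.array([5.0, 2.0])   # eigenvector, lambda_1 = 1
u = np.array([0.5, 0.0])   # satisfies (A - I)u = x

def y(t, c1=1.0, c2=1.0):
    # General solution (10.5.20) with lambda_1 = 1; c2 > 0 puts the
    # trajectory in the positive half-plane.
    return (c1 * x + c2 * (u + x * t)) * np.exp(t)

# The unit vector along y(t) approaches x/|x| as t -> infinity
# and -x/|x| as t -> -infinity.
for s in (-10.0, 10.0):
    print(s, y(s) / np.linalg.norm(y(s)))
```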
Since
\lim_{t\to\infty}\|{\bf y}(t)\|=\infty\quad \text{and} \quad \lim_{t\to-\infty}{\bf y}(t)={\bf 0}\quad \text{if} \quad \lambda_1>0,\nonumber
or
\lim_{t\to-\infty}\|{\bf y}(t)\|=\infty \quad \text{and} \quad \lim_{t\to\infty}{\bf y}(t)={\bf 0} \quad \text{if} \quad \lambda_1<0,\nonumber
there are four possible patterns for the trajectories of Equation \ref{eq:10.5.19}, depending upon the signs of c_2 and \lambda_1. Figures 10.5.2 - 10.5.5 illustrate these patterns, and reveal the following principle:
If \lambda_1 and c_2 have the same sign then the direction of the trajectory approaches the direction of -{\bf x} as \|{\bf y} \|\to0 and the direction of {\bf x} as \|{\bf y}\|\to\infty. If \lambda_1 and c_2 have opposite signs then the direction of the trajectory approaches the direction of {\bf x} as \|{\bf y} \|\to0 and the direction of -{\bf x} as \|{\bf y}\|\to\infty.