
10.6: Constant Coefficient Homogeneous Systems III

\( \newcommand{\twocol}[2]{\left[\begin{array}{l}#1\\#2\end{array}\right]} \newcommand{\ctwocol}[2]{\left[\begin{array}{c}#1\\#2\end{array}\right]} \newcommand{\threecol}[3]{\left[\begin{array}{r}#1\\#2\\#3\end{array}\right]} \)

    We now consider the system \({\bf y}'=A{\bf y}\), where \(A\) has a complex eigenvalue \(\lambda=\alpha+i\beta\) with \(\beta\ne0\). We continue to assume that \(A\) has real entries, so the characteristic polynomial of \(A\) has real coefficients. This implies that \(\overline\lambda=\alpha-i\beta\) is also an eigenvalue of \(A\).

    An eigenvector \({\bf x}\) of \(A\) associated with \(\lambda=\alpha+i\beta\) will have complex entries, so we’ll write

    \[{\bf x}={\bf u}+i{\bf v} \nonumber \]

    where \({\bf u}\) and \({\bf v}\) have real entries; that is, \({\bf u}\) and \({\bf v}\) are the real and imaginary parts of \({\bf x}\). Since \(A{\bf x}=\lambda {\bf x}\),

    \[\label{eq:10.6.1} A({\bf u}+i{\bf v})=(\alpha+i\beta)({\bf u}+i{\bf v}). \]

    Taking complex conjugates here and recalling that \(A\) has real entries yields

    \[A({\bf u}-i{\bf v})=(\alpha-i\beta)({\bf u}-i{\bf v}), \nonumber \]

    which shows that \({\bf x}={\bf u}-i{\bf v}\) is an eigenvector associated with \(\overline\lambda=\alpha-i\beta\). The complex conjugate eigenvalues \(\lambda\) and \(\overline\lambda\) can be separately associated with linearly independent solutions of \({\bf y}'=A{\bf y}\); however, we will not pursue this approach, since solutions obtained in this way turn out to be complex-valued. Instead, we'll obtain solutions of \({\bf y}'=A{\bf y}\) in the form

    \[\label{eq:10.6.2} {\bf y}=f_1{\bf u}+f_2{\bf v} \]

    where \(f_1\) and \(f_2\) are real–valued scalar functions. The next theorem shows how to do this.

    Theorem 10.6.1

    Let \(A\) be an \(n\times n\) matrix with real entries\(.\) Let \(\lambda=\alpha+i\beta\) (\(\beta\ne0\)) be a complex eigenvalue of \(A\) and let \({\bf x}={\bf u}+i{\bf v}\) be an associated eigenvector\(,\) where \({\bf u}\) and \({\bf v}\) have real components\(.\) Then \({\bf u}\) and \({\bf v}\) are both nonzero and

    \[{\bf y}_1=e^{\alpha t}({\bf u}\cos\beta t-{\bf v}\sin\beta t) \quad \text{and} \quad {\bf y}_2=e^{\alpha t}({\bf u}\sin\beta t+{\bf v}\cos\beta t), \nonumber \]

    which are the real and imaginary parts of

    \[\label{eq:10.6.3} e^{\alpha t}(\cos\beta t+i\sin\beta t)({\bf u}+i{\bf v}), \]

    are linearly independent solutions of \({\bf y}'=A{\bf y}\).

    Proof

    A function of the form Equation \ref{eq:10.6.2} is a solution of \({\bf y}'=A{\bf y}\) if and only if

    \[\label{eq:10.6.4} f_1'{\bf u}+f_2'{\bf v}=f_1A{\bf u}+f_2A{\bf v}. \]

    Carrying out the multiplication indicated on the right side of Equation \ref{eq:10.6.1} and collecting the real and imaginary parts of the result yields

    \[A({\bf u}+i{\bf v})=(\alpha{\bf u}-\beta{\bf v})+i(\alpha{\bf v}+\beta{\bf u}). \nonumber \]

    Equating real and imaginary parts on the two sides of this equation yields

    \[\begin{array}{rcl} A{\bf u}&=&\alpha{\bf u}-\beta{\bf v}\\[4pt] A{\bf v}&=&\alpha{\bf v}+\beta{\bf u}. \end{array}\nonumber \]

    We leave it to you (Exercise 10.6.25) to show from this that \({\bf u}\) and \({\bf v}\) are both nonzero. Substituting from these equations into Equation \ref{eq:10.6.4} yields

    \[\begin{aligned} f_1'{\bf u}+f_2'{\bf v} &=f_1(\alpha{\bf u}-\beta{\bf v})+f_2(\alpha{\bf v}+\beta{\bf u})\\[4pt] &=(\alpha f_1+\beta f_2){\bf u}+(-\beta f_1+\alpha f_2){\bf v}.\end{aligned}\nonumber \]

    This is true if

    \[\begin{array}{rcr} f_1'&=&\alpha f_1+\beta f_2\phantom{,}\\[4pt] f_2'&=&-\beta f_1+\alpha f_2, \end{array} \quad \text{or equivalently} \quad \begin{array}{rcr} f_1'-\alpha f_1&=&\phantom{-}\beta f_2\phantom{.}\\[4pt] f_2'-\alpha f_2&=&-\beta f_1. \end{array}\nonumber \]

    If we let \(f_1=g_1e^{\alpha t}\) and \(f_2=g_2e^{\alpha t}\), where \(g_1\) and \(g_2\) are to be determined, then the last two equations become

    \[\begin{array}{rcr} g_1'&=&\beta g_2\phantom{.}\\[4pt] g_2'&=&-\beta g_1, \end{array}\nonumber \]

    which implies that

    \[g_1''=\beta g_2'=-\beta^2 g_1, \nonumber \]

    so

    \[g_1''+\beta^2 g_1=0. \nonumber \]

    The general solution of this equation is

    \[g_1=c_1\cos\beta t+c_2\sin\beta t. \nonumber \]

    Moreover, since \(g_2=g_1'/\beta\),

    \[g_2=-c_1\sin\beta t+c_2\cos\beta t. \nonumber \]

    Multiplying \(g_1\) and \(g_2\) by \(e^{\alpha t}\) shows that

    \[\begin{aligned} f_1&=e^{\alpha t}(\phantom{-}c_1\cos\beta t+c_2\sin\beta t ),\\[4pt] f_2&=e^{\alpha t}(-c_1\sin\beta t+c_2\cos\beta t).\end{aligned}\nonumber \]

    Substituting these into Equation \ref{eq:10.6.2} shows that

    \[\label{eq:10.6.5} \begin{array}{rcl} {\bf y}&=&e^{\alpha t}\left[(c_1\cos\beta t+c_2\sin\beta t){\bf u} +(-c_1\sin\beta t+c_2\cos\beta t){\bf v}\right]\\[4pt] &=&c_1e^{\alpha t}({\bf u}\cos\beta t-{\bf v}\sin\beta t) +c_2e^{\alpha t}({\bf u}\sin\beta t+{\bf v}\cos\beta t) \end{array} \]

    is a solution of \({\bf y}'=A{\bf y}\) for any choice of the constants \(c_1\) and \(c_2\). In particular, by first taking \(c_1=1\) and \(c_2=0\) and then taking \(c_1=0\) and \(c_2=1\), we see that \({\bf y}_1\) and \({\bf y}_2\) are solutions of \({\bf y}'=A{\bf y}\). We leave it to you to verify that they are, respectively, the real and imaginary parts of Equation \ref{eq:10.6.3} (Exercise 10.6.26), and that they are linearly independent (Exercise 10.6.27).
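
    Theorem 10.6.1 translates directly into a short computation with a computer algebra system. The following sketch is not part of the text; it assumes SymPy is available and, for concreteness, uses the coefficient matrix of Example 10.6.1 below. It splits a complex eigenvector into \({\bf u}+i{\bf v}\) and checks symbolically that \({\bf y}_1\) and \({\bf y}_2\) satisfy \({\bf y}'=A{\bf y}\).

```python
# A minimal sketch (assuming SymPy; not part of the text) of the construction
# in Theorem 10.6.1, applied to the matrix of Example 10.6.1 below.
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[4, -5], [5, -2]])

# Pick the eigenvalue with positive imaginary part and an associated eigenvector.
lam, mult, vecs = next(ev for ev in A.eigenvects() if sp.im(ev[0]) > 0)
x = vecs[0]
alpha, beta = sp.re(lam), sp.im(lam)
u, v = x.applyfunc(sp.re), x.applyfunc(sp.im)   # x = u + i v with real u, v

# The two real solutions of Theorem 10.6.1.
y1 = sp.exp(alpha*t)*(u*sp.cos(beta*t) - v*sp.sin(beta*t))
y2 = sp.exp(alpha*t)*(u*sp.sin(beta*t) + v*sp.cos(beta*t))

for y in (y1, y2):
    residual = y.diff(t) - A*y
    assert all(sp.simplify(r) == 0 for r in residual)   # y' = A y holds identically
```

    The same few lines work for any real matrix with a complex eigenvalue; only the matrix \(A\) changes.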

    Example 10.6.1

    Find the general solution of

    \[\label{eq:10.6.6} {\bf y}'=\left[\begin{array}{cc}{4}&{-5}\\[4pt]{5}&{-2}\end{array}\right]{\bf y}. \]

    Solution

    The characteristic polynomial of the coefficient matrix \(A\) in Equation \ref{eq:10.6.6} is

    \[\left|\begin{array}{cc} 4-\lambda&-5\\[4pt] 5&-2-\lambda \end{array}\right|=(\lambda-1)^2+16. \nonumber \]

    Hence, \(\lambda=1+4i\) is an eigenvalue of \(A\). The associated eigenvectors satisfy \(\left(A-\left(1+4i\right)I\right){\bf x}={\bf 0}\). The augmented matrix of this system is

    \[\left[\begin{array}{cccr} 3-4i&-5&\vdots&0\\[4pt] 5&-3-4i&\vdots&0 \end{array}\right], \nonumber \]

    which is row equivalent to

    \[\left[\begin{array}{cccr} 1&-{3+4i\over5}&\vdots&0\\[4pt] 0&0&\vdots&0 \end{array}\right]. \nonumber \]

    Therefore \(x_1=(3+4i)x_2/5\). Taking \(x_2=5\) yields \(x_1=3+4i\), so

    \[{\bf x}=\left[\begin{array}{c}3+4i\\[4pt]5\end{array}\right] \nonumber \]

    is an eigenvector. The real and imaginary parts of

    \[e^t(\cos4t+i\sin4t)\left[\begin{array}{c}3+4i\\[4pt]5\end{array}\right] \nonumber \]

    are

    \[{\bf y}_1=e^t\left[\begin{array}{c}3\cos4t-4\sin 4t\\[4pt]5\cos4t\end{array}\right]\quad\text{ and }\quad {\bf y}_2=e^t\left[\begin{array}{c}3\sin4t+4\cos4t\\[4pt]5\sin 4t\end{array}\right], \nonumber \]

    which are linearly independent solutions of Equation \ref{eq:10.6.6}. The general solution of Equation \ref{eq:10.6.6} is

    \[{\bf y}= c_1e^t\left[\begin{array}{c}3\cos4t-4\sin 4t\\[4pt]5\cos4t\end{array}\right]+ c_2e^t\left[\begin{array}{c}3\sin4t+4\cos4t\\[4pt]5\sin 4t\end{array}\right]. \nonumber \]
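
    As a quick check of this example (a sketch assuming SymPy is available; it is not part of the text), one can confirm that the stated \({\bf y}_1\) and \({\bf y}_2\) satisfy Equation \ref{eq:10.6.6} and that their Wronskian at \(t=0\) is nonzero.

```python
# Verify Example 10.6.1: y1 and y2 solve y' = A y, and their Wronskian at 0 is nonzero.
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[4, -5], [5, -2]])
y1 = sp.exp(t)*sp.Matrix([3*sp.cos(4*t) - 4*sp.sin(4*t), 5*sp.cos(4*t)])
y2 = sp.exp(t)*sp.Matrix([3*sp.sin(4*t) + 4*sp.cos(4*t), 5*sp.sin(4*t)])

for y in (y1, y2):
    residual = y.diff(t) - A*y
    assert all(sp.simplify(r) == 0 for r in residual)

W0 = sp.Matrix.hstack(y1, y2).subs(t, 0)   # columns y1(0) = (3, 5), y2(0) = (4, 0)
print(W0.det())                            # -20, nonzero
```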

    Example 10.6.2

    Find the general solution of

    \[\label{eq:10.6.7} {\bf y}'=\left[\begin{array}{cc}{-14}&{39}\\[4pt]{-6}&{16}\end{array}\right]{\bf y}. \]

    Solution

    The characteristic polynomial of the coefficient matrix \(A\) in Equation \ref{eq:10.6.7} is

    \[\left|\begin{array}{cc}-14-\lambda&39\\[4pt]-6&16-\lambda \end{array}\right|=(\lambda-1)^2+9. \nonumber \]

    Hence, \(\lambda=1+3i\) is an eigenvalue of \(A\). The associated eigenvectors satisfy \(\left(A-(1+3i)I\right){\bf x}={\bf 0}\). The augmented matrix of this system is

    \[\left[\begin{array}{cccr}-15-3i&39&\vdots&0\\[4pt] -6&15-3i&\vdots&0 \end{array}\right], \nonumber \]

    which is row equivalent to

    \[\left[\begin{array}{cccr} 1&{-5+i\over2}&\vdots&0\\[4pt] 0&0&\vdots&0 \end{array}\right]. \nonumber \]

    Therefore \(x_1=(5-i)x_2/2\). Taking \(x_2=2\) yields \(x_1=5-i\), so

    \[{\bf x}=\left[\begin{array}{c}5-i\\[4pt]2\end{array}\right] \nonumber \]

    is an eigenvector. The real and imaginary parts of

    \[e^t(\cos3t+i\sin3t)\left[\begin{array}{c}5-i\\[4pt]2\end{array}\right] \nonumber \]

    are

    \[{\bf y}_1=e^t\left[\begin{array}{c}\sin3t+5\cos3t\\[4pt]2\cos 3t\end{array}\right]\quad\text{ and }\quad {\bf y}_2=e^t\left[\begin{array}{c}-\cos3t+5\sin3t\\[4pt]2\sin 3t\end{array}\right], \nonumber \]

    which are linearly independent solutions of Equation \ref{eq:10.6.7}. The general solution of Equation \ref{eq:10.6.7} is

    \[{\bf y}=c_1e^t\left[\begin{array}{c}\sin3t+5\cos3t\\[4pt]2\cos 3t\end{array}\right]+ c_2e^t\left[\begin{array}{c}-\cos3t+5\sin3t\\[4pt]2\sin 3t\end{array}\right]. \nonumber \]
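
    A purely numerical cross-check of this example is also possible (a sketch assuming NumPy; not part of the text). numpy.linalg.eig returns the complex eigenvalue and an eigenvector directly; the eigenvector it reports is a complex scalar multiple of the one found above, so its real and imaginary parts differ from \({\bf u}\) and \({\bf v}\) by the same rescaling.

```python
# Numerical cross-check of Example 10.6.2 with NumPy.
import numpy as np

A = np.array([[-14.0, 39.0], [-6.0, 16.0]])
lams, X = np.linalg.eig(A)
k = np.argmax(lams.imag)        # select the eigenvalue with positive imaginary part
lam, x = lams[k], X[:, k]
print(lam)                      # approximately 1 + 3j
print(x.real, x.imag)           # u and v for NumPy's scaling of the eigenvector
```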

    Example 10.6.3

    Find the general solution of

    \[\label{eq:10.6.8} {\bf y}'=\left[\begin{array}{ccc}{-5}&{5}&{4}\\[4pt]{-8}&{7}&{6}\\[4pt]{1}&{0}&{0}\end{array}\right]{\bf y}. \]

    Solution

    The characteristic polynomial of the coefficient matrix \(A\) in Equation \ref{eq:10.6.8} is

    \[\left|\begin{array}{ccc}-5-\lambda&5&4\\[4pt]-8&7-\lambda& 6\\[4pt] \phantom{-}1 &0&-\lambda\end{array}\right|=-(\lambda-2)(\lambda^2+1). \nonumber \]

    Hence, the eigenvalues of \(A\) are \(\lambda_1=2\), \(\lambda_2=i\), and \(\lambda_3=-i\). The augmented matrix of \((A-2I){\bf x=0}\) is

    \[\left[\begin{array}{rrrcr}-7&5&4&\vdots&0\\[4pt]-8& 5&6&\vdots&0\\[4pt] 1&0&-2&\vdots&0 \end{array}\right], \nonumber \]

    which is row equivalent to

    \[\left[\begin{array}{rrrcr} 1&0&-2&\vdots&0\\[4pt] 0&1&-2& \vdots&0\\[4pt] 0&0&0&\vdots&0\end{array}\right]. \nonumber \]

    Therefore \(x_1=x_2=2x_3\). Taking \(x_3=1\) yields

    \[{\bf x}_1=\threecol221, \nonumber \]

    so

    \[{\bf y}_1=\threecol221e^{2t} \nonumber \]

    is a solution of Equation \ref{eq:10.6.8}.

    The augmented matrix of \((A-iI){\bf x=0}\) is

    \[\left[\begin{array}{ccrccc}-5-i&5&4&\vdots&0\\[4pt]-8& 7-i&6&\vdots&0\\[4pt] \phantom{-}1&0&-i&\vdots&0 \end{array}\right], \nonumber \]

    which is row equivalent to

    \[\left[\begin{array}{ccccc} 1&0&-i&\vdots&0\\[4pt] 0&1&1-i& \vdots&0\\[4pt] 0&0&0&\vdots&0\end{array}\right]. \nonumber \]

    Therefore \(x_1=ix_3\) and \(x_2=-(1-i)x_3\). Taking \(x_3=1\) yields the eigenvector

    \[{\bf x}_2=\left[\begin{array}{c} i\\[4pt]-1+i\\[4pt] 1\end{array} \right]. \nonumber \]

    The real and imaginary parts of

    \[(\cos t+i\sin t)\left[\begin{array}{c}i\\[4pt]-1+i\\[4pt]1\end{array}\right] \nonumber \]

    are

    \[{\bf y}_2= \left[\begin{array}{c}-\sin t\\[4pt]-\cos t-\sin t\\[4pt]\cos t\end{array}\right] \quad\text{ and }\quad {\bf y}_3=\left[\begin{array}{c}\cos t\\[4pt]\cos t-\sin t\\[4pt]\sin t\end{array}\right], \nonumber \]

    which are solutions of Equation \ref{eq:10.6.8}. Since the Wronskian of \(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) at \(t=0\) is

    \[\left|\begin{array}{rrr} 2&0&1\\[4pt]2&-1&1\\[4pt]1&1&0\end{array}\right|=1, \nonumber \]

    \(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) is a fundamental set of solutions of Equation \ref{eq:10.6.8}. The general solution of Equation \ref{eq:10.6.8} is

    \[{\bf y}=c_1 \threecol221e^{2t} +c_2\left[\begin{array}{c}-\sin t\\[4pt]-\cos t-\sin t\\[4pt]\cos t\end{array}\right] +c_3\left[\begin{array}{c}\cos t\\[4pt]\cos t-\sin t\\[4pt]\sin t\end{array}\right]. \nonumber \]
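
    The computations in this example can be confirmed with a few lines of SymPy (a sketch, not part of the text): the characteristic polynomial factors as stated (up to the sign convention \(\det(\lambda I-A)\) versus \(\det(A-\lambda I)\)), each \({\bf y}_i\) solves the system, and the Wronskian at \(t=0\) equals \(1\).

```python
# Check Example 10.6.3 with SymPy.
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[-5, 5, 4], [-8, 7, 6], [1, 0, 0]])

print(sp.factor(A.charpoly().as_expr()))   # (lambda - 2)*(lambda**2 + 1)

y1 = sp.Matrix([2, 2, 1])*sp.exp(2*t)
y2 = sp.Matrix([-sp.sin(t), -sp.cos(t) - sp.sin(t), sp.cos(t)])
y3 = sp.Matrix([sp.cos(t), sp.cos(t) - sp.sin(t), sp.sin(t)])

for y in (y1, y2, y3):
    residual = y.diff(t) - A*y
    assert all(sp.simplify(r) == 0 for r in residual)        # each solves y' = A y

print(sp.Matrix.hstack(y1, y2, y3).subs(t, 0).det())          # 1, so a fundamental set
```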

    Example 10.6.4

    Find the general solution of

    \[\label{eq:10.6.9} {\bf y}'=\left[\begin{array}{ccc}{1}&{-1}&{-2}\\[4pt]{1}&{3}&{2}\\[4pt]{1}&{-1}&{2}\end{array}\right]{\bf y}. \]

    Solution

    The characteristic polynomial of the coefficient matrix \(A\) in Equation \ref{eq:10.6.9} is

    \[\left|\begin{array}{ccc} 1-\lambda&-1&-2\\[4pt] 1&3-\lambda& \phantom{-}2\\[4pt] 1 &-1&2-\lambda\end{array}\right|= -(\lambda-2)\left((\lambda-2)^2+4\right). \nonumber \]

    Hence, the eigenvalues of \(A\) are \(\lambda_1=2\), \(\lambda_2=2+2i\), and \(\lambda_3=2-2i\). The augmented matrix of \((A-2I){\bf x=0}\) is

    \[\left[\begin{array}{rrrcr}-1&-1&-2&\vdots&0\\[4pt]1& 1&2&\vdots&0\\[4pt] 1&-1&0&\vdots&0 \end{array}\right], \nonumber \]

    which is row equivalent to

    \[\left[\begin{array}{rrrcr} 1&0&1&\vdots&0\\[4pt] 0&1&1& \vdots&0\\[4pt] 0&0&0&\vdots&0\end{array}\right]. \nonumber \]

    Therefore \(x_1=x_2=-x_3\). Taking \(x_3=1\) yields

    \[{\bf x}_1=\threecol{-1}{-1}1, \nonumber \]

    so

    \[{\bf y}_1=\threecol{-1}{-1}1e^{2t} \nonumber \]

    is a solution of Equation \ref{eq:10.6.9}. The augmented matrix of \(\left(A-(2+2i)I\right){\bf x=0}\) is

    \[\left[\begin{array}{ccrcc}-1-2i&-1&-2&\vdots&0\\[4pt] 1& 1-2i&\phantom{-}2&\vdots&0\\[4pt] 1&-1&-2i&\vdots&0 \end{array}\right], \nonumber \]

    which is row equivalent to

    \[\left[\begin{array}{rrrcr} 1&0&-i&\vdots&0\\[4pt] 0&1&i& \vdots&0\\[4pt] 0&0&0&\vdots&0\end{array}\right]. \nonumber \]

    Therefore \(x_1=ix_3\) and \(x_2=-ix_3\). Taking \(x_3=1\) yields the eigenvector

    \[{\bf x}_2=\threecol i{-i}1. \nonumber \]

    The real and imaginary parts of

    \[e^{2t}(\cos2t+i\sin2t)\threecol i{-i}1 \nonumber \]

    are

    \[{\bf y}_2=e^{2t}\left[\begin{array}{r}-\sin2t\\[4pt]\sin2t\\[4pt]\cos 2t\end{array}\right]\quad\text{ and }\quad {\bf y}_3=e^{2t}\left[\begin{array}{r}\cos2t\\[4pt]-\cos2t\\[4pt]\sin 2t\end{array}\right], \nonumber \]

    which are solutions of Equation \ref{eq:10.6.9}. Since the Wronskian of \(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) at \(t=0\) is

    \[\left|\begin{array}{rrr} -1&0&1\\[4pt]-1&0&-1\\[4pt]1&1&0\end{array}\right|=-2, \nonumber \]

    \(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) is a fundamental set of solutions of Equation \ref{eq:10.6.9}. The general solution of Equation \ref{eq:10.6.9} is

    \[{\bf y}=c_1\threecol{-1}{-1}1e^{2t}+ c_2e^{2t}\left[\begin{array}{r}-\sin2t\\[4pt]\sin2t\\[4pt]\cos 2t\end{array}\right]+ c_3e^{2t}\left[\begin{array}{r}\cos2t\\[4pt]-\cos2t\\[4pt]\sin 2t\end{array}\right]. \nonumber \]
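
    Again, the eigendata can be verified mechanically (a sketch assuming SymPy; not part of the text): the eigenvalues are \(2\) and \(2\pm2i\), and the vector \((i,\,-i,\,1)\) is annihilated by \(A-(2+2i)I\).

```python
# Check the eigendata used in Example 10.6.4 with SymPy.
import sympy as sp

A = sp.Matrix([[1, -1, -2], [1, 3, 2], [1, -1, 2]])
print([ev[0] for ev in A.eigenvects()])      # 2, 2 - 2*I, 2 + 2*I (in some order)

x = sp.Matrix([sp.I, -sp.I, 1])              # the eigenvector chosen in the text
residual = (A - (2 + 2*sp.I)*sp.eye(3))*x
assert all(sp.simplify(r) == 0 for r in residual)
```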

    Geometric Properties of Solutions when \(n=2\)

    We’ll now consider the geometric properties of solutions of a \(2\times2\) constant coefficient system

    \[\label{eq:10.6.10} \twocol{y_1'}{y_2'}=\left[\begin{array}{cc}a_{11}&a_{12}\\[4pt]a_{21}&a_{22} \end{array}\right]\twocol{y_1}{y_2} \]

    under the assumptions of this section; that is, when the matrix

    \[A=\left[\begin{array}{cc}a_{11}&a_{12}\\[4pt]a_{21}&a_{22} \end{array}\right] \nonumber \]

    has a complex eigenvalue \(\lambda=\alpha+i\beta\) (\(\beta\ne0\)) and \({\bf x}={\bf u}+i{\bf v}\) is an associated eigenvector, where \({\bf u}\) and \({\bf v}\) have real components. To describe the trajectories accurately it is necessary to introduce a new rectangular coordinate system in the \(y_1\)-\(y_2\) plane. This raises a point that hasn’t come up before: It is always possible to choose \({\bf x}\) so that \(({\bf u},{\bf v})=0\). A special effort is required to do this, since not every eigenvector has this property. However, if we know an eigenvector that doesn’t, we can multiply it by a suitable complex constant to obtain one that does. To see this, note that if \({\bf x}\) is a \(\lambda\)-eigenvector of \(A\) and \(k\) is an arbitrary real number, then

    \[{\bf x}_1=(1+ik){\bf x}=(1+ik)({\bf u}+i{\bf v}) =({\bf u}-k{\bf v})+i({\bf v}+k{\bf u}) \nonumber \]

    is also a \(\lambda\)-eigenvector of \(A\), since

    \[A{\bf x}_1= A((1+ik){\bf x})=(1+ik)A{\bf x}=(1+ik)\lambda{\bf x}= \lambda((1+ik){\bf x})=\lambda{\bf x}_1. \nonumber \]

    The real and imaginary parts of \({\bf x}_1\) are

    \[\label{eq:10.6.11} {\bf u}_1={\bf u}-k{\bf v} \quad \text{and} \quad {\bf v}_1={\bf v}+k{\bf u}, \]

    so

    \[({\bf u}_1,{\bf v}_1)=({\bf u}-k{\bf v},{\bf v}+k{\bf u}) =-\left[({\bf u},{\bf v})k^2+(\|{\bf v}\|^2-\|{\bf u}\|^2)k -({\bf u},{\bf v})\right]. \nonumber \]

    Therefore \(({\bf u}_1,{\bf v}_1)=0\) if

    \[\label{eq:10.6.12} ({\bf u},{\bf v})k^2+(\|{\bf v}\|^2-\|{\bf u}\|^2)k-({\bf u},{\bf v})=0. \]

    If \(({\bf u},{\bf v})\ne0\) we can use the quadratic formula to find two real values of \(k\) such that \(({\bf u}_1,{\bf v}_1)=0\) (Exercise 10.6.28).

    Example 10.6.5

    In Example 10.6.1 we found the eigenvector

    \[{\bf x}=\ctwocol{3+4i}5=\twocol35+i\twocol40 \nonumber \]

    for the matrix of the system Equation \ref{eq:10.6.6}. Here \({\bf u}=\twocol{3}{5}\) and \({\bf v}=\twocol40\) are not orthogonal, since \(({\bf u},{\bf v})=12\). Since \(\|{\bf v}\|^2-\|{\bf u}\|^2=-18\), Equation \ref{eq:10.6.12} is equivalent to

    \[2k^2-3k-2=0. \nonumber \]

    The zeros of this equation are \(k_1=2\) and \(k_2=-1/2\). Letting \(k=2\) in Equation \ref{eq:10.6.11} yields

    \[{\bf u}_1={\bf u}-2{\bf v}=\twocol{-5}{\phantom{-}5} \quad \text{and} \quad {\bf v}_1={\bf v}+2{\bf u}=\twocol{10}{10}, \nonumber \]

    and \(({\bf u}_1,{\bf v}_1)=0\). Letting \(k=-1/2\) in Equation \ref{eq:10.6.11} yields

    \[{\bf u}_1={\bf u}+{{\bf v}\over2}=\twocol{5}5 \quad \text{and} \quad {\bf v}_1={\bf v}-{{\bf u}\over2}={1\over2}\twocol{5}{-5}, \nonumber \]

    and again \(({\bf u}_1,{\bf v}_1)=0\).
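
    The re-scaling used in this example is easy to automate (a sketch assuming NumPy; not part of the text). Given \({\bf u}\) and \({\bf v}\) with \(({\bf u},{\bf v})\ne0\), solve Equation \ref{eq:10.6.12} for \(k\) and form \({\bf u}_1\) and \({\bf v}_1\) as in Equation \ref{eq:10.6.11}; the data below are those of Example 10.6.5.

```python
# Orthogonalizing the real and imaginary parts of an eigenvector (Example 10.6.5).
import numpy as np

u = np.array([3.0, 5.0])
v = np.array([4.0, 0.0])

a = u @ v                          # (u, v), assumed nonzero here
b = v @ v - u @ u                  # ||v||^2 - ||u||^2
ks = np.roots([a, b, -a])          # the two real roots of a k^2 + b k - a = 0
for k in ks:
    u1, v1 = u - k*v, v + k*u      # Equation 10.6.11
    print(k, u1, v1, u1 @ v1)      # k = 2 or -0.5; (u1, v1) = 0 in both cases
```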

    (The numbers don’t always work out as nicely as in this example. You’ll need a calculator or computer to do Exercises 10.6.29-10.6.40.) Henceforth, we’ll assume that \(({\bf u},{\bf v})=0\). Let \({\bf U}\) and \({\bf V}\) be unit vectors in the directions of \({\bf u}\) and \({\bf v}\), respectively; that is, \({\bf U}={\bf u}/\|{\bf u}\|\) and \({\bf V}={\bf v}/\|{\bf v}\|\). The new rectangular coordinate system will have the same origin as the \(y_1\)-\(y_2\) system. The coordinates of a point in this system will be denoted by \((z_1,z_2)\), where \(z_1\) and \(z_2\) are the displacements in the directions of \({\bf U}\) and \({\bf V}\), respectively. From Equation \ref{eq:10.6.5}, the solutions of Equation \ref{eq:10.6.10} are given by

    \[\label{eq:10.6.13} {\bf y}=e^{\alpha t}\left[(c_1\cos\beta t+c_2\sin\beta t){\bf u} +(-c_1\sin\beta t+c_2\cos\beta t){\bf v}\right]. \]

    For convenience, let’s call the curve traversed by \(e^{-\alpha t}{\bf y}(t)\) a shadow trajectory of Equation \ref{eq:10.6.10}. Multiplying Equation \ref{eq:10.6.13} by \(e^{-\alpha t}\) yields

    \[e^{-\alpha t}{\bf y}(t)=z_1(t){\bf U}+z_2(t){\bf V}, \nonumber \]

    where

    \[\begin{aligned} z_1(t)&=\|{\bf u}\|(c_1\cos\beta t+c_2\sin\beta t)\\[4pt] z_2(t)&=\|{\bf v}\|(-c_1\sin\beta t+c_2\cos\beta t).\end{aligned}\nonumber \]

    Therefore

    \[{(z_1(t))^2\over\|{\bf u}\|^2}+{(z_2(t))^2\over\|{\bf v}\|^2} =c_1^2+c_2^2, \tag{verify!} \]

    which means that the shadow trajectories of Equation \ref{eq:10.6.10} are ellipses centered at the origin, with axes of symmetry parallel to \({\bf U}\) and \({\bf V}\). Since

    \[z_1'={\beta\|{\bf u}\|\over\|{\bf v}\|} z_2 \quad \text{and} \quad z_2'=-{\beta\|{\bf v}\|\over\|{\bf u}\|} z_1, \nonumber \]

    the vector from the origin to a point on the shadow ellipse rotates in the same direction that \({\bf V}\) would have to be rotated by \(\pi/2\) radians to bring it into coincidence with \({\bf U}\) (Figures 10.6.1 and 10.6.2 ).
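
    Both of the computations just stated, the elliptic identity marked "verify!" and the formulas for \(z_1'\) and \(z_2'\), can be checked symbolically. The sketch below is not part of the text; it assumes SymPy, with the symbols nu and nv standing in for \(\|{\bf u}\|\) and \(\|{\bf v}\|\).

```python
# Verify the shadow-trajectory identities symbolically.
import sympy as sp

t, beta, c1, c2, nu, nv = sp.symbols('t beta c1 c2 nu nv', positive=True)

z1 = nu*(c1*sp.cos(beta*t) + c2*sp.sin(beta*t))    # nu plays the role of ||u||
z2 = nv*(-c1*sp.sin(beta*t) + c2*sp.cos(beta*t))   # nv plays the role of ||v||

assert sp.simplify(z1**2/nu**2 + z2**2/nv**2 - (c1**2 + c2**2)) == 0
assert sp.simplify(z1.diff(t) - (beta*nu/nv)*z2) == 0
assert sp.simplify(z2.diff(t) + (beta*nv/nu)*z1) == 0
```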

    Figure 10.6.1 (left): Shadow trajectories traversed clockwise. Figure 10.6.2 (right): Shadow trajectories traversed counterclockwise.

    If \(\alpha=0\), then any trajectory of Equation \ref{eq:10.6.10} is a shadow trajectory of Equation \ref{eq:10.6.10}; therefore, if \(\lambda\) is purely imaginary, then the trajectories of Equation \ref{eq:10.6.10} are ellipses traversed periodically as indicated in Figures 10.6.1 and 10.6.2 . If \(\alpha>0\), then

    \[\lim_{t\to\infty}\|{\bf y}(t)\|=\infty \quad \text{and} \quad \lim_{t\to-\infty}{\bf y}(t)=0, \nonumber \]

    so the trajectory spirals away from the origin as \(t\) varies from \(-\infty\) to \(\infty\). The direction of the spiral depends upon the relative orientation of \({\bf U}\) and \({\bf V}\), as shown in Figures 10.6.3 and 10.6.4 . If \(\alpha<0\), then

    \[\lim_{t\to-\infty}\|{\bf y}(t)\|=\infty \quad \text{and} \quad \lim_{t\to\infty}{\bf y}(t)=0, \nonumber \]

    so the trajectory spirals toward the origin as \(t\) varies from \(-\infty\) to \(\infty\). Again, the direction of the spiral depends upon the relative orientation of \({\bf U}\) and \({\bf V}\), as shown in Figures 10.6.5 and 10.6.6 .
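
    The three cases \(\alpha=0\), \(\alpha>0\), and \(\alpha<0\) are easy to visualize numerically. The sketch below is not part of the text; it assumes NumPy and Matplotlib, and plots a trajectory of Equation \ref{eq:10.6.6} from Example 10.6.1 (\(\alpha=1>0\), \(\beta=4\)) together with its shadow ellipse, using the orthogonal pair \({\bf u}_1\), \({\bf v}_1\) found in Example 10.6.5 with \(k=2\).

```python
# Plot a trajectory of Example 10.6.1 (alpha = 1 > 0) and its shadow ellipse.
import numpy as np
import matplotlib.pyplot as plt

alpha, beta = 1.0, 4.0
u = np.array([-5.0, 5.0])      # u1 from Example 10.6.5 (k = 2)
v = np.array([10.0, 10.0])     # v1 from Example 10.6.5 (k = 2); (u, v) = 0
c1, c2 = 1.0, 0.0              # one choice of constants in Equation 10.6.13

t = np.linspace(0.0, 2.0, 1000)
f1 = c1*np.cos(beta*t) + c2*np.sin(beta*t)
f2 = -c1*np.sin(beta*t) + c2*np.cos(beta*t)
shadow = np.outer(f1, u) + np.outer(f2, v)   # e^{-alpha t} y(t): an ellipse
y = np.exp(alpha*t)[:, None]*shadow          # the trajectory spirals outward

plt.plot(y[:, 0], y[:, 1], label='trajectory')
plt.plot(shadow[:, 0], shadow[:, 1], label='shadow trajectory')
plt.axis('equal'); plt.legend(); plt.show()
```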

    Figures 10.6.3 (left) and 10.6.4 (right): \(\alpha >0\); shadow trajectory spiraling outward.
    Figures 10.6.5 (left) and 10.6.6 (right): \(\alpha <0\); shadow trajectory spiraling inward.

    This page titled 10.6: Constant Coefficient Homogeneous Systems III is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench.