In this section we will learn how to solve linear homogeneous constant coefficient systems of ODEs by the eigenvalue method. Suppose we have such a system \(\vec{x}' = P\vec{x}\), where \(P\) is a constant square matrix. Guided by the single constant coefficient equation, whose solutions are exponentials, we try \(\vec{x} = \vec{v}e^{\lambda t}\), where \(\vec{v}\) is a constant vector and \(\lambda\) a constant scalar. Plugging this guess into the system, we obtain
\[\underbrace{\lambda \vec{v} e^{\lambda t}}_{{\vec{x}}'} = \underbrace{P\vec{v} e^{\lambda t}}_{P\vec{x}} . \nonumber \]
We divide both sides by \(e^{\lambda t}\), which is never zero, and find that we are looking for a scalar \(\lambda\) and a nonzero vector \(\vec{v}\) satisfying
\[ \lambda \vec{v}= P \vec{v}. \nonumber \]
In other words, \(\lambda\) is an eigenvalue of \(P\) and \(\vec{v}\) is a corresponding eigenvector.
Eigenvalue Method with Distinct Real Eigenvalues
We have the system of equations
\[ \vec{x}'=P\vec{x}. \nonumber \]
We find the eigenvalues \(\lambda_1, \lambda_2, \ldots , \lambda_n\) of the matrix \(P\), and corresponding eigenvectors \(\vec{v}_1, \vec{v}_2, \ldots , \vec{v}_n\). By the computation above, the functions \(\vec{v}_1e^{\lambda_1t},\vec{v}_2e^{\lambda_2t}, \ldots , \vec{v}_ne^{\lambda_nt}\) are solutions of the system, and hence, by linearity, \(\vec{x}=c_1 \vec{v}_1e^{\lambda_1t}+ c_2 \vec{v}_2e^{\lambda_2t} + \cdots + c_n \vec{v}_ne^{\lambda_nt}\) is a solution for any constants \(c_1, \ldots, c_n\).
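Before stating the general result, here is a minimal numerical sketch of this recipe (assuming NumPy; the matrix \(P\) and the constants \(c_j\) below are placeholders chosen just for illustration):

```python
import numpy as np

# Placeholder matrix with distinct real eigenvalues (3 and -1) and
# arbitrary constants c_1, c_2 for this illustration.
P = np.array([[1.0, 2.0],
              [2.0, 1.0]])
c = np.array([1.0, 1.0])

# Column j of eigvecs is the eigenvector for eigvals[j].
eigvals, eigvecs = np.linalg.eig(P)

def x(t):
    # x(t) = c_1 v_1 e^(lambda_1 t) + ... + c_n v_n e^(lambda_n t)
    return eigvecs @ (c * np.exp(eigvals * t))

# Sanity check that x'(t) = P x(t), using a central difference for x'.
h, t0 = 1e-6, 0.5
print(np.allclose((x(t0 + h) - x(t0 - h)) / (2 * h), P @ x(t0), atol=1e-6))
```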
Theorem \(\PageIndex{1}\)
Take \(\vec{x}'=P\vec{x}\). If \(P\) is an \(n \times n\) constant matrix that has \(n\) distinct real eigenvalues \(\lambda_1,\lambda_2, \ldots, \lambda_n,\) then there exist \(n\) linearly independent corresponding eigenvectors \(\vec{v}_1,\vec{v}_2, \ldots, \vec{v}_n,\) and the general solution to \(\vec{x}'=P\vec{x}\) can be written as
\[ \vec{x}=c_1 \vec{v}_1e^{\lambda_1t}+ c_2 \vec{v}_2e^{\lambda_2t} + \cdots + c_n \vec{v}_ne^{\lambda_nt}. \nonumber \]
The corresponding fundamental matrix solution is \[X(t)= [\vec{v}_1e^{\lambda_1t}~~~ \vec{v}_2e^{\lambda_2t} ~~~ \cdots ~~~ \vec{v}_ne^{\lambda_nt}]. \nonumber \] That is, \(X(t)\) is the matrix whose \(j^{\rm{th}}\) column is \(\vec{v}_je^{\lambda_jt}\).
Example \(\PageIndex{4}\)
Consider the system
\[ \vec{x}'=\left[ \begin{array}{ccc} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{array} \right] \vec{x}. \nonumber \]
Find the general solution.
Solution
Earlier, we found that the eigenvalues are \(1, 2,\) and \(3\). We found the eigenvector \( \left[ \begin{array}{c} 1 \\ 1 \\ 0 \end{array} \right] \) corresponding to the eigenvalue \(3\). Similarly, we find the eigenvector \( \left[ \begin{array}{c} 1 \\ -1 \\ 0 \end{array} \right]\) for the eigenvalue \(1\), and \( \left[ \begin{array}{c} 0 \\ 1 \\ -1 \end{array} \right]\) for the eigenvalue \(2\) (exercise: check). Hence our general solution is
\[ \vec{x}=c_1 \left[ \begin{array}{c} 1 \\ -1 \\ 0 \end{array} \right]e^t+c_2 \left[ \begin{array}{c} 0 \\ 1 \\ -1 \end{array} \right]e^{2t} + c_3 \left[ \begin{array}{c} 1 \\ 1 \\ 0 \end{array} \right]e^{3t} = \left[ \begin{array}{c} c_1e^t + c_3e^{3t} \\ -c_1e^t+c_2e^{2t}+c_3e^{3t} \\ -c_2e^{2t} \end{array} \right]. \nonumber \]
In terms of a fundamental matrix solution
\[ \vec{x}=X(t) \vec{c}=\left[ \begin{array}{ccc} e^t & 0 & e^{3t} \\ -e^t & e^{2t} & e^{3t} \\ 0 & -e^{2t} & 0 \end{array} \right] \left[ \begin{array}{c} c_1 \\ c_2 \\ c_3 \end{array} \right].\nonumber \]
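As a quick numerical cross-check (a sketch assuming NumPy; the analytic verification is the exercise below), we can confirm the eigenvalues and that the fundamental matrix satisfies \(X'(t)=PX(t)\):

```python
import numpy as np

P = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
print(np.sort(np.linalg.eigvals(P)))    # approximately [1. 2. 3.]

def X(t):
    # The fundamental matrix found above; column j is v_j e^(lambda_j t).
    return np.array([[np.exp(t),   0.0,            np.exp(3 * t)],
                     [-np.exp(t),  np.exp(2 * t),  np.exp(3 * t)],
                     [0.0,        -np.exp(2 * t),  0.0]])

# Check X'(t) = P X(t) at a sample time, using a central difference.
h, t0 = 1e-6, 0.3
print(np.allclose((X(t0 + h) - X(t0 - h)) / (2 * h), P @ X(t0), atol=1e-6))
```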
Exercise \(\PageIndex{3}\)
Check that this \(\vec{x}\) really solves the system.
Note: If we write a homogeneous linear constant coefficient \(n^{\rm{th}}\) order equation as a first order system (as we did in Section 3.1), then the eigenvalue equation
\[ \det(P - \lambda I) =0 \nonumber \]
is essentially the same as the characteristic equation we got in Section 2.2 and Section 2.3.
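For instance, take \(x'' - 3x' + 2x = 0\) (an equation chosen here just for illustration), whose characteristic equation is \(\lambda^2 - 3\lambda + 2 = 0\). Writing it as a first order system with \(x_1 = x\) and \(x_2 = x'\) gives the matrix below, and a short SymPy sketch confirms that its characteristic polynomial is the same:

```python
import sympy as sp

lam = sp.symbols('lambda')

# x'' - 3x' + 2x = 0 as a first order system:
# x1' = x2,  x2' = -2 x1 + 3 x2.
P = sp.Matrix([[0, 1],
               [-2, 3]])

print(P.charpoly(lam).as_expr())    # lambda**2 - 3*lambda + 2
```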
Complex Eigenvalues
A matrix might very well have complex eigenvalues even if all the entries are real. For example, suppose that we have the system
\[\vec{x}'= \left[ \begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array} \right] \vec{x}. \nonumber \]
Let us compute the eigenvalues of the matrix \(P=\left[ \begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array} \right].\)
\[\det(P - \lambda I) =\det \left( \left[ \begin{array}{cc} 1- \lambda & 1 \\ -1 & 1- \lambda \end{array} \right] \right)=(1-\lambda)^2 + 1=\lambda^2-2\lambda+2= 0. \nonumber \]
Thus \( \lambda = 1 \pm i.\) The corresponding eigenvectors are also complex. First take \( \lambda = 1 - i,\)
\[\begin{align}\begin{aligned} (P-(1-i)I)\vec{v}&=\vec{0}, \\ \left[ \begin{array}{cc} i & 1 \\ -1 & i \end{array} \right] \vec{v}&=\vec{0}.\end{aligned}\end{align} \nonumber \]
The equations \(iv_1+v_2=0\) and \(-v_1+iv_2=0\) are multiples of each other. So we only need to consider one of them. After picking \(v_2=1\), for example, we have an eigenvector \(\vec{v}=\left[ \begin{array}{c}i\\1 \end{array} \right]\). In similar fashion we find that \(\left[ \begin{array}{c}-i\\1 \end{array} \right]\) is an eigenvector corresponding to the eigenvalue \(1+i\).
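NumPy computes these complex eigenpairs directly; here is a small sketch (note that np.linalg.eig normalizes its eigenvectors, so they agree with ours only up to a complex scalar multiple):

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(P)
print(eigvals)    # approximately [1.+1.j  1.-1.j]

# Each column of eigvecs is an eigenvector for the matching eigenvalue,
# a complex scalar multiple of the ones computed by hand.
for j in range(len(eigvals)):
    v = eigvecs[:, j]
    print(np.allclose(P @ v, eigvals[j] * v))    # True, True
```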
We could write the solution as
\[ \vec{x}=c_1\left[ \begin{array}{c} i \\ 1 \end{array} \right]e^{(1-i)t}+c_2\left[ \begin{array}{c} -i \\ 1 \end{array} \right]e^{(1+i)t}= \left[ \begin{array}{c} c_1ie^{(1-i)t}-c_2ie^{(1+i)t} \\ c_1e^{(1-i)t}+c_2e^{(1+i)t} \end{array} \right]. \nonumber \]
We would then need to look for complex values \(c_1\) and \(c_2\) to solve any initial conditions. It is perhaps not completely clear that we get a real solution. We could use Euler’s formula and do the whole song and dance we did before, but we will not. We will do something a bit smarter first.
We claim that we did not have to look for a second eigenvector (nor for the second eigenvalue). All complex eigenvalues come in pairs (because the matrix \(P\) is real).
First a small side note. The real part of a complex number \(z\) can be computed as \(\frac{z+\bar{z}}{2}\), where the bar above \(z\) denotes the complex conjugate: \(\overline{a+ib}=a-ib\). If \(a\) is a real number, then \(\bar{a}=a\). Similarly, we can bar whole vectors or matrices by taking the complex conjugate of every entry. If a matrix \(P\) is real, then \(\bar{P}=P\), so \(\overline{P \vec{x}}=\bar{P} \bar{\vec{x}}=P \bar{\vec{x}}\). Also, the complex conjugate of \(0\) is still \(0\); therefore,
\[\vec{0}=\overline{\vec{0}}= \overline{(P- \lambda I) \vec{v }} = (P- \bar{\lambda} I) \bar{\vec{v }}. \nonumber \]
So if \(\vec{v}\) is an eigenvector corresponding to the eigenvalue \(\lambda=a+ib\), then \(\bar{\vec{v}}\) is an eigenvector corresponding to the eigenvalue \(\bar{\lambda}=a-ib\).
Suppose that \(a+ib\) is a complex eigenvalue of \(P\), and \(\vec{v}\) is a corresponding eigenvector. Then
\[ \vec{x}_1=\vec{v}e^{(a+ib)t} \nonumber \]
is a solution (complex valued) of \(\vec{x}'=P\vec{x}\). Euler’s formula shows that \(\overline{e^{a+ib}}=e^{a-ib}\), and so
\[ \vec{x}_2=\overline{\vec{x}_1}=\bar{\vec{v}}e^{(a-ib)t} \nonumber \]
is also a solution. As \(\vec{x}_1\) and \(\vec{x}_2\) are solutions, any linear combination of them is also a solution. In particular, the function
\[ \vec{x}_3 = {\rm{Re~}} \vec{x}_1= {\rm{Re~}} \vec{v}e^{(a+ib)t}=\frac{\vec{x}_1+\overline{\vec{x}_1}}{2}=\frac{\vec{x}_1+\vec{x}_2}{2}=\frac{1}{2}\vec{x}_{1}+\frac{1}{2}\vec{x}_{2} \nonumber \]
is also a solution. And \(\vec{x}_3\) is real-valued! Similarly, as \( {\rm{Im~}}z= \frac{z-\bar{z}}{2i}\) is the imaginary part, we find that
\[ \vec{x}_4= {\rm{Im~}} \vec{x}_1= \frac{\vec{x}_1-\overline{\vec{x}_1}}{2i}=\frac{\vec{x}_1-\vec{x}_2}{2i} \nonumber \]
is also a real-valued solution. It turns out that \(\vec{x}_3\) and \(\vec{x}_4\) are linearly independent. We will use Euler’s formula to separate out the real and imaginary parts.
Returning to our problem,
\[\vec{x}_1 = \left[ \begin{array}{c} i \\ 1 \end{array} \right] e^{(1-i)t} = \left[ \begin{array}{c} i \\ 1 \end{array} \right] \left( e^t \cos t - i e^t \sin t \right) = \left[ \begin{array}{c} i e^t \cos t + e^t \sin t \\ e^t \cos t - i e^t \sin t \end{array} \right] = \left[ \begin{array}{c} e^t \sin t \\ e^t \cos t \end{array} \right] + i \left[ \begin{array}{c} e^t \cos t \\ - e^t \sin t \end{array} \right] . \nonumber \]
Then
\[ {\rm{Re~}} \vec{x}_1= \left[ \begin{array}{c} e^t \sin t \\ e^t\cos t \end{array} \right], \quad\text{and}\quad {\rm{Im~}} \vec{x}_1= \left[ \begin{array}{c} e^t \cos t \\ -e^t\sin t \end{array} \right], \nonumber \]
are the two real-valued linearly independent solutions we seek.
Exercise \(\PageIndex{4}\)
Check that these really are solutions.
The general solution is
\[ \vec{x}=c_1 \left[ \begin{array}{c} e^t \sin t \\ e^t\cos t \end{array} \right] + c_2 \left[ \begin{array}{c} e^t \cos t \\ -e^t\sin t \end{array} \right]= \left[ \begin{array}{c} c_1 e^t \sin t +c_2 e^t \cos t \\ c_1 e^t \cos t -c_2 e^t \sin t \end{array} \right].\nonumber \]
This solution is real-valued for real \(c_1\) and \(c_2\). At this point, we would use any given initial conditions to solve for \(c_{1}\) and \(c_{2}\).
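For readers who like to verify such computations by machine, here is a SymPy sketch checking symbolically that both real-valued solutions satisfy \(\vec{x}' = P\vec{x}\):

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Matrix([[1, 1],
               [-1, 1]])

x3 = sp.Matrix([sp.exp(t) * sp.sin(t), sp.exp(t) * sp.cos(t)])
x4 = sp.Matrix([sp.exp(t) * sp.cos(t), -sp.exp(t) * sp.sin(t)])

# Both residuals x' - P x simplify to the zero vector.
print(sp.simplify(x3.diff(t) - P * x3))    # Matrix([[0], [0]])
print(sp.simplify(x4.diff(t) - P * x4))    # Matrix([[0], [0]])
```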
Let us summarize the discussion as a theorem.
Theorem \(\PageIndex{2}\)
Let \(P\) be a real-valued constant matrix. If \(P\) has a complex eigenvalue \(a+ib\) and a corresponding eigenvector \(\vec{v}\), then \(P\) also has a complex eigenvalue \(a-ib\) with a corresponding eigenvector \(\bar{\vec{v}}\). Furthermore, \(\vec{x}' = P \vec{x}\) has two linearly independent real-valued solutions
\[ \vec{x}_1= {\rm{Re~}} \vec{v} e^{(a+ib)t}, \quad\text{and}\quad \vec{x}_2= {\rm{Im~}} \vec{v} e^{(a+ib)t}. \nonumber \]
For each pair of complex eigenvalues \(a+ib\) and \(a-ib\), we get two real-valued linearly independent solutions. We then go on to the next eigenvalue, which is either a real eigenvalue or another complex eigenvalue pair. If we have \(n\) distinct eigenvalues (real or complex), then we end up with \(n\) linearly independent solutions. If we have only two equations \((n=2)\) as in the example above, then once we have found two solutions we are finished, and our general solution is
\[\vec{x}=c_{1}\vec{x}_{1}+c_{2}\vec{x}_{2}=c_{1}\left(\text{Re }\vec{v}e^{(a+ib)t}\right) +c_{2}\left(\text{Im }\vec{v}e^{(a+ib)t}\right). \nonumber \]
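As a final sketch (assuming NumPy; the eigenpair ordering returned by np.linalg.eig is not guaranteed, so we simply take the first pair), the recipe of the theorem can be carried out numerically:

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(P)
lam, v = eigvals[0], eigvecs[:, 0]    # one eigenvalue a + ib and its eigenvector

def x1(t):
    # Re( v e^{(a+ib)t} )
    return np.real(v * np.exp(lam * t))

def x2(t):
    # Im( v e^{(a+ib)t} )
    return np.imag(v * np.exp(lam * t))

# Both are real-valued solutions of x' = P x; check via central differences.
h, t0 = 1e-6, 0.7
for x in (x1, x2):
    print(np.allclose((x(t0 + h) - x(t0 - h)) / (2 * h), P @ x(t0), atol=1e-6))
```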
We can now find a real-valued general solution to any homogeneous system where the matrix has distinct eigenvalues. When we have repeated eigenvalues, matters get a bit more complicated and we will look at that situation in Section 3.7.