11.6: Eigenvalues and Eigenvectors
In this section, as well as in Chapter 10, we will write systems of equations as \(A\vec{x}=\vec{b}\), where the matrix \(A\) is made up of the coefficients of the variables, \(\vec{x}\) represents the variables \(x_1, x_2, x_3,\) etc., and \(\vec{b}\) is made up of the constants on the right side of the equations. For example, the system
\[\begin{array}{ccccccc} 3x_1&+&4x_2&+&5x_3&=&7\\ -x_1&+&x_2&-&3x_3&=&1\\ 2x_1&-&2x_2&+&3x_3&=&5\\ \end{array}\nonumber\]
will be written as \(A\vec{x}=\vec{b}\) where
\[A=\left[\begin{array}{ccc}{3}&{4}&{5}\\{-1}&{1}&{-3}\\{2}&{-2}&{3}\end{array}\right],\quad \vec{x}=\left[\begin{array}{c}{x_1}\\{x_2}\\{x_3}\end{array} \right], \quad \text{and}\quad \vec{b}=\left[\begin{array}{c}{7}\\{1}\\{5}\end{array} \right].\]
\(\vec{x}\) and \(\vec{b}\) are called \(n\times 1\) column vectors.
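The correspondence between the system and \(A\vec{x}=\vec{b}\) can be sketched in code: entry \(i\) of the product \(A\vec{x}\) is exactly the left-hand side of equation \(i\). A minimal pure-Python sketch (the helper name `matvec` is ours, for illustration):

```python
# Minimal matrix-vector product: entry i of A*x is the dot product
# of row i of A with x -- exactly the left-hand side of equation i.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[3, 4, 5],
     [-1, 1, -3],
     [2, -2, 3]]

# For any choice of x, matvec(A, x) gives the three left-hand sides.
print(matvec(A, [1, 1, 1]))  # [3+4+5, -1+1-3, 2-2+3] = [12, -3, 3]
```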
Let \(A\) be an \(n\times n\) matrix, \(\vec{x}\) a nonzero \(n\times 1\) column vector and \(\lambda\) a scalar. If
\[A\vec{x}=\lambda\vec{x}, \nonumber \]
then \(\vec{x}\) is an eigenvector of \(A\) and \(\lambda\) is an eigenvalue of \(A\).
The word “eigen” is German for “proper” or “characteristic.” Therefore, an eigenvector of \(A\) is a “characteristic vector of \(A\).” This vector tells us something about \(A\).
Note that our definition requires that \(A\) be a square matrix. Also note that \(\vec{x}\) must be nonzero. Why? What if \(\vec{x}=\vec{0}\)? Then no matter what \(\lambda\) is, \(A\vec{x}=\lambda\vec{x}\). This would then imply that every number is an eigenvalue; if every number is an eigenvalue, then we wouldn’t need a definition for it. Therefore we specify that \(\vec{x}\neq\vec{0}\).
Our last comment before trying to find eigenvalues and eigenvectors for given matrices deals with “why we care.” Did we stumble upon a mathematical curiosity, or does this somehow help us build better bridges, heal the sick, send astronauts into orbit, design optical equipment, and understand quantum mechanics? The answer, of course, is “Yes." This is a wonderful topic in and of itself: we need no external application to appreciate its worth. At the same time, it has many, many applications to “the real world.” A simple Internet search on “applications of eigenvalues” will confirm this.
Before we can talk about how to find eigenvalues and eigenvectors we need the following definition:
The \(n\times n\) matrix with 1’s on the diagonal and zeros elsewhere is the \(n\times n\) identity matrix, denoted \(I_{n}\). When the context makes the dimension of the identity clear, the subscript is generally omitted.
We show a few identity matrices below.
\[I_{2}=\left[\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}\right],\quad I_{3}=\left[\begin{array}{ccc}{1}&{0}&{0}\\{0}&{1}&{0}\\{0}&{0}&{1}\end{array}\right],\quad I_{4}=\left[\begin{array}{cccc}{1}&{0}&{0}&{0}\\{0}&{1}&{0}&{0}\\{0}&{0}&{1}&{0}\\{0}&{0}&{0}&{1}\end{array}\right] \nonumber \]
Given a square matrix \(A\), we want to find a nonzero vector \(\vec{x}\) and a scalar \(\lambda\) such that \(A\vec{x}=\lambda\vec{x}\). We will solve this using the skills we developed in Section 11.5.
\[\begin{align}\begin{aligned}A\vec{x}&=\lambda\vec{x} &\text{original equation} \\ A\vec{x}-\lambda\vec{x}&=\vec{0} &\text{subtract }\lambda\vec{x}\text{ from both sides} \\ (A-\lambda I)\vec{x}&=\vec{0} &\text{factor out }\vec{x}\end{aligned}\end{align} \nonumber \]
Think about this last factorization. We are likely tempted to say
\[A\vec{x}-\lambda\vec{x}=(A-\lambda )\vec{x}, \nonumber \]
but this really doesn’t make sense. After all, what does “a matrix minus a number” mean? We need the identity matrix in order for this to be logical.
Let us now think about the equation \((A-\lambda I)\vec{x}=\vec{0}\). While it looks complicated, it really is just a matrix equation of the type we solved in the previous section.
This type of equation always has a solution, namely, \(\vec{x}=\vec{0}\). However, we want \(\vec{x}\) to be an eigenvector and, by the definition, eigenvectors cannot be \(\vec{0}\).
This means that we want solutions to \((A-\lambda I)\vec{x}=\vec{0}\) other than \(\vec{x}=\vec{0}\).
A theorem in linear algebra tells us that for this to happen, we need \(\text{det}(A-\lambda I)=0\).
Let’s start our practice of this theory by finding \(\lambda\) such that \(\text{det}(A-\lambda I)=0\); that is, let’s find the eigenvalues of a matrix.
Find the eigenvalues of \(A\), that is, find \(\lambda\) such that \(\text{det}(A-\lambda I)=0\), where
\[A=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]. \nonumber \]
Solution
First, we write out what \(A-\lambda I\) is:
\[\begin{align}\begin{aligned}A-\lambda I&=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]-\lambda\left[\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}\right] \\ &=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]-\left[\begin{array}{cc}{\lambda}&{0}\\{0}&{\lambda}\end{array}\right] \\ &=\left[\begin{array}{cc}{1-\lambda}&{4}\\{2}&{3-\lambda}\end{array}\right]\end{aligned}\end{align} \nonumber \]
Therefore,
\[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{cc}{1-\lambda}&{4}\\{2}&{3-\lambda}\end{array}\right| \\ &=(1-\lambda )(3-\lambda )-8 \\ &=\lambda^{2}-4\lambda -5 \end{aligned}\end{align} \nonumber \]
Since we want \(\text{det}(A-\lambda I)=0\), we want \(\lambda ^{2}-4\lambda-5=0\). This is a simple quadratic equation that is easy to factor:
\[\begin{align}\begin{aligned}\lambda^{2}-4\lambda -5&=0 \\ (\lambda -5)(\lambda +1)&=0 \\ \lambda &=-1,\: 5\end{aligned}\end{align} \nonumber \]
According to our above work, \(\text{det}(A-\lambda I)=0\) when \(\lambda = -1,\: 5\). Thus, the eigenvalues of \(A\) are \(-1\) and \(5\).
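For any \(2\times 2\) matrix \(\left[\begin{array}{cc}a&b\\c&d\end{array}\right]\), the characteristic polynomial works out to \(\lambda^2-(a+d)\lambda+(ad-bc)\), so the eigenvalues follow from the quadratic formula. A minimal sketch, assuming real eigenvalues (the function name is ours):

```python
import math

def eigenvalues_2x2(A):
    """Roots of det(A - lam*I) = lam^2 - (a+d)*lam + (ad - bc),
    assuming the eigenvalues are real."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c       # trace and determinant
    disc = math.sqrt(tr * tr - 4 * det)  # discriminant of the quadratic
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

print(eigenvalues_2x2([[1, 4], [2, 3]]))  # [-1.0, 5.0]
```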
Find \(\vec{x}\) such that \(A\vec{x}=5\vec{x}\), where
\[A=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]. \nonumber \]
Solution
Recall that our algebra from before showed that if
\[A\vec{x}=\lambda\vec{x}\quad\text{then}\quad (A-\lambda I)\vec{x}=\vec{0}. \nonumber \]
Therefore, we need to solve the equation \((A-\lambda I)\vec{x}=\vec{0}\) for \(\vec{x}\) when \(\lambda = 5\).
\[\begin{align}\begin{aligned}A-5I&=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right] -5\left[\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}\right] \\ &=\left[\begin{array}{cc}{-4}&{4}\\{2}&{-2}\end{array}\right] \end{aligned}\end{align} \nonumber \]
To solve \((A-5I)\vec{x}=\vec{0}\), we form the augmented matrix and put it into reduced row echelon form:
\[\left[\begin{array}{ccc}{-4}&{4}&{0}\\{2}&{-2}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{-1}&{0}\\{0}&{0}&{0}\end{array}\right]. \nonumber \]
Thus
\[\begin{align}\begin{aligned} x_1 &= x_2\\ x_2 &\text{ is free}\end{aligned}\end{align} \nonumber \]
and
\[\vec{x}=\left[\begin{array}{c}{x_{1}}\\{x_{2}}\end{array}\right]=x_{2}\left[\begin{array}{c}{1}\\{1}\end{array}\right]. \nonumber \]
We have infinitely many solutions to the equation \(A\vec{x}=5\vec{x}\); any nonzero scalar multiple of the vector \(\left[\begin{array}{c}{1}\\{1}\end{array}\right]\) is a solution. We can do a few examples to confirm this:
\[\begin{align}\begin{aligned}\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]\left[\begin{array}{c}{2}\\{2}\end{array}\right]&=\left[\begin{array}{c}{10}\\{10}\end{array}\right]=5\left[\begin{array}{c}{2}\\{2}\end{array}\right]; \\ \left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]\left[\begin{array}{c}{7}\\{7}\end{array}\right]&=\left[\begin{array}{c}{35}\\{35}\end{array}\right]=5\left[\begin{array}{c}{7}\\{7}\end{array}\right]; \\ \left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right]\left[\begin{array}{c}{-3}\\{-3}\end{array}\right]&=\left[\begin{array}{c}{-15}\\{-15}\end{array}\right]=5\left[\begin{array}{c}{-3}\\{-3}\end{array}\right]. \end{aligned}\end{align} \nonumber \]
Our method of finding the eigenvalues of a matrix \(A\) boils down to determining which values of \(\lambda\) give the matrix \((A-\lambda I)\) a determinant of \(0\). In computing \(\text{det}(A-\lambda I)\), we get a polynomial in \(\lambda\) whose roots are the eigenvalues of \(A\). This polynomial is important and so it gets its own name.
Let \(A\) be an \(n\times n\) matrix. The characteristic polynomial of \(A\) is the \(n^{\text{th}}\) degree polynomial \(p(\lambda )=\text{det}(A-\lambda I)\).
Our definition just states what the characteristic polynomial is. We know from our work so far why we care: the roots of the characteristic polynomial of an \(n\times n\) matrix \(A\) are the eigenvalues of \(A\).
In Examples \(\PageIndex{1}\) and \(\PageIndex{2}\), we found eigenvalues and eigenvectors, respectively, of a given matrix. That is, given a matrix \(A\), we found values \(\lambda\) and vectors \(\vec{x}\) such that \(A\vec{x}=\lambda\vec{x}\). The steps that follow outline the general procedure for finding eigenvalues and eigenvectors; we’ll follow this up with some examples.
Let \(A\) be an \(n\times n\) matrix.
- To find the eigenvalues of \(A\), compute \(p(\lambda )\), the characteristic polynomial of \(A\), set it equal to \(0\), then solve for \(\lambda\).
- To find the eigenvectors of \(A\), for each eigenvalue solve the system \((A-\lambda I)\vec{x}=\vec{0}\).
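For a \(2\times 2\) matrix these two steps can be sketched together. Once an eigenvalue \(\lambda\) is known, the first row of \((A-\lambda I)\vec{x}=\vec{0}\) reads \((a-\lambda)x_1+bx_2=0\), which the vector \((b,\:\lambda-a)\) satisfies. A sketch under the assumptions of real, distinct eigenvalues and \(b\neq 0\) (the names here are ours, not a general-purpose routine):

```python
import math

def eigenpairs_2x2(A):
    """Eigenvalue/eigenvector pairs of a 2x2 matrix, assuming real,
    distinct eigenvalues and b != 0 (a sketch, not a general solver)."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    lams = [(tr - disc) / 2, (tr + disc) / 2]
    # First row of (A - lam*I)x = 0 gives (a - lam)*x1 + b*x2 = 0,
    # which the vector (b, lam - a) satisfies.
    return [(lam, (b, lam - a)) for lam in lams]

for lam, v in eigenpairs_2x2([[1, 4], [2, 3]]):
    print(lam, v)
```

Each returned vector is one representative; any nonzero scalar multiple of it is an equally valid eigenvector.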
Find the eigenvalues of \(A\), and for each eigenvalue, find an eigenvector where
\[A=\left[\begin{array}{cc}{-3}&{15}\\{3}&{9}\end{array}\right]. \nonumber \]
Solution
To find the eigenvalues, we must compute \(\text{det}(A-\lambda I)\) and set it equal to \(0\).
\[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{cc}{-3-\lambda}&{15}\\{3}&{9-\lambda}\end{array}\right| \\ &=(-3-\lambda)(9-\lambda)-45 \\ &=\lambda^{2}-6\lambda -27-45 \\ &=\lambda^{2}-6\lambda -72 \\ &=(\lambda -12)(\lambda +6)\end{aligned}\end{align} \nonumber \]
Therefore, \(\text{det}(A-\lambda I)=0\) when \(\lambda = -6\) and \(12\); these are our eigenvalues. (We should note that \(p(\lambda) =\lambda^2-6\lambda-72\) is our characteristic polynomial.) It sometimes helps to give them “names,” so we’ll say \(\lambda_1 = -6\) and \(\lambda_2 = 12\). Now we find eigenvectors.
For \(\lambda_1=-6\):
We need to solve the equation \((A-(-6)I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{ccc}{3}&{15}&{0}\\{3}&{15}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{5}&{0}\\{0}&{0}&{0}\end{array}\right]. \nonumber \]
Our solution is
\[\begin{align}\begin{aligned} x_1 &= -5x_2\\ x_2 & \text{ is free;}\end{aligned}\end{align} \nonumber \]
in vector form, we have
\[\vec{x}=x_{2}\left[\begin{array}{c}{-5}\\{1}\end{array}\right]. \nonumber \]
We may pick any nonzero value for \(x_2\) to get an eigenvector; a simple option is \(x_2 = 1\). Thus we have the eigenvector
\[\vec{x_{1}}=\left[\begin{array}{c}{-5}\\{1}\end{array}\right]. \nonumber \]
(We used the notation \(\vec{x_{1}}\) to associate this eigenvector with the eigenvalue \(\lambda_1\).)
We now repeat this process to find an eigenvector for \(\lambda_2 = 12\):
In solving \((A-12I)\vec{x}=\vec{0}\), we find
\[\left[\begin{array}{ccc}{-15}&{15}&{0}\\{3}&{-3}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{-1}&{0}\\{0}&{0}&{0}\end{array}\right]. \nonumber \]
In vector form, we have
\[\vec{x}=x_{2}\left[\begin{array}{c}{1}\\{1}\end{array}\right]. \nonumber \]
Again, we may pick any nonzero value for \(x_2\), and so we choose \(x_2 = 1\). Thus an eigenvector for \(\lambda_2\) is
\[\vec{x_{2}}=\left[\begin{array}{c}{1}\\{1}\end{array}\right]. \nonumber \]
To summarize, we have:
\[\text{eigenvalue }\lambda_{1}=-6\text{ with eigenvector }\vec{x_{1}}=\left[\begin{array}{c}{-5}\\{1}\end{array}\right] \nonumber \]
and
\[\text{eigenvalue }\lambda_{2}=12\text{ with eigenvector }\vec{x_{2}}=\left[\begin{array}{c}{1}\\{1}\end{array}\right] \nonumber \]
We should take a moment and check our work: is it true that \(A\vec{x_{1}}=\lambda_{1}\vec{x_{1}}\)?
\[\begin{align}\begin{aligned} A\vec{x_{1}}&=\left[\begin{array}{cc}{-3}&{15}\\{3}&{9}\end{array}\right]\left[\begin{array}{c}{-5}\\{1}\end{array}\right] \\ &=\left[\begin{array}{c}{30}\\{-6}\end{array}\right] \\ &=(-6)\left[\begin{array}{c}{-5}\\{1}\end{array}\right] \\ &=\lambda_{1}\vec{x_{1}}.\end{aligned}\end{align} \nonumber \]
Yes; it appears we have truly found an eigenvalue/eigenvector pair for the matrix \(A\).
Let’s do another example.
Let \(A=\left[\begin{array}{cc}{-3}&{0}\\{5}&{1}\end{array}\right]\). Find the eigenvalues of \(A\) and an eigenvector for each eigenvalue.
Solution
We first compute the characteristic polynomial, set it equal to 0, then solve for \(\lambda\).
\[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{cc}{-3-\lambda}&{0}\\{5}&{1-\lambda}\end{array}\right| \\ &=(-3-\lambda )(1-\lambda )\end{aligned}\end{align} \nonumber \]
From this, we see that \(\text{det}(A-\lambda I)=0\) when \(\lambda = -3, 1\). We’ll set \(\lambda_1 = -3\) and \(\lambda_2 = 1\).
Finding an eigenvector for \(\lambda_1\):
We solve \((A-(-3)I)\vec{x}=\vec{0}\) for \(\vec{x}\) by row reducing the appropriate matrix:
\[\left[\begin{array}{ccc}{0}&{0}&{0}\\{5}&{4}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{4/5}&{0}\\{0}&{0}&{0}\end{array}\right]. \nonumber \]
Our solution, in vector form, is
\[\vec{x}=x_{2}\left[\begin{array}{c}{-4/5}\\{1}\end{array}\right]. \nonumber \]
Again, we can pick any nonzero value for \(x_2\); a nice choice would eliminate the fraction. Therefore we pick \(x_2 = 5\), and find
\[\vec{x_{1}}=\left[\begin{array}{c}{-4}\\{5}\end{array}\right]. \nonumber \]
Finding an eigenvector for \(\lambda_2\):
We solve \((A-(1)I)\vec{x}=\vec{0}\) for \(\vec{x}\) by row reducing the appropriate matrix:
\[\left[\begin{array}{ccc}{-4}&{0}&{0}\\{5}&{0}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{0}&{0}\\{0}&{0}&{0}\end{array}\right]. \nonumber \]
Our first row tells us that \(x_1 = 0\), and we see that no rows/equations involve \(x_2\). We conclude that \(x_2\) is free. Therefore, our solution, in vector form, is
\[\vec{x}=x_{2}\left[\begin{array}{c}{0}\\{1}\end{array}\right]. \nonumber \]
We pick \(x_2 = 1\), and find
\[\vec{x_{2}}=\left[\begin{array}{c}{0}\\{1}\end{array}\right]. \nonumber \]
To summarize, we have: \[\text{eigenvalue } \lambda_1 = -3 \text{ with eigenvector } \vec{x_{1}} = \left[\begin{array}{c}{-4}\\{5}\end{array}\right] \nonumber \] and \[\text{eigenvalue } \lambda_2 = 1 \text{ with eigenvector } \vec{x_{2}} = \left[\begin{array}{c}{0}\\{1}\end{array}\right]. \nonumber \]
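We can double-check both pairs numerically; recall that the row \(5x_1+4x_2=0\) of \(A+3I\) characterizes the eigenvectors for \(\lambda_1=-3\), and \((-4,\:5)\) satisfies it. A quick sketch:

```python
A = [[-3, 0],
     [5, 1]]

# Check A*v = lam*v entrywise for each eigenvalue/eigenvector pair.
for lam, (x, y) in [(-3, (-4, 5)), (1, (0, 1))]:
    Av = (A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y)
    print(Av, "=", (lam * x, lam * y))
```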
So far, our examples have involved \(2\times 2\) matrices. Let’s do an example with a \(3\times 3\) matrix.
Find the eigenvalues of \(A\), and for each eigenvalue, give one eigenvector, where
\[A=\left[\begin{array}{ccc}{-7}&{-2}&{10}\\{-3}&{2}&{3}\\{-6}&{-2}&{9}\end{array}\right]. \nonumber \]
Solution
We first compute the characteristic polynomial, set it equal to \(0\), then solve for \(\lambda\). A warning: this process is rather long. We’ll use cofactor expansion along the first row; don’t get bogged down with the arithmetic that comes from each step; just try to get the basic idea of what was done from step to step.
\[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{ccc}{-7-\lambda}&{-2}&{10}\\{-3}&{2-\lambda}&{3}\\{-6}&{-2}&{9-\lambda}\end{array}\right| \\ &=(-7-\lambda)\left|\begin{array}{cc}{2-\lambda}&{3}\\{-2}&{9-\lambda}\end{array}\right| -(-2)\left|\begin{array}{cc}{-3}&{3}\\{-6}&{9-\lambda}\end{array}\right| +10\left|\begin{array}{cc}{-3}&{2-\lambda}\\{-6}&{-2}\end{array}\right| \\ &=(-7-\lambda)(\lambda^{2}-11\lambda +24)+2(3\lambda -9)+10(-6\lambda +18) \\ &=-\lambda^{3}+4\lambda^{2}-\lambda -6 \\ &=-(\lambda +1)(\lambda -2)(\lambda -3)\end{aligned}\end{align} \nonumber \]
In the last step we factored the characteristic polynomial \(-\lambda^3+4\lambda^2-\lambda -6\).
Our eigenvalues are \(\lambda_1 = -1\), \(\lambda_2 = 2\) and \(\lambda_3 = 3\). We now find corresponding eigenvectors.
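The cofactor expansion used above can be sketched in code: `det3` expands along the first row, and evaluating \(\det(A-\lambda I)\) at each claimed eigenvalue should give \(0\). (The helper names here are ours, for illustration.)

```python
def det2(M):
    # Determinant of a 2x2 matrix.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    # Cofactor expansion along the first row, with alternating signs.
    minor = lambda j: [[row[k] for k in range(3) if k != j] for row in M[1:]]
    return sum((-1) ** j * M[0][j] * det2(minor(j)) for j in range(3))

A = [[-7, -2, 10],
     [-3, 2, 3],
     [-6, -2, 9]]

def char_poly(lam):
    # det(A - lam*I)
    return det3([[A[i][j] - (lam if i == j else 0) for j in range(3)]
                 for i in range(3)])

print([char_poly(lam) for lam in (-1, 2, 3)])  # [0, 0, 0]
```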
For \(\lambda_1 = -1\):
We need to solve the equation \((A-(-1)I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{cccc}{-6}&{-2}&{10}&{0}\\{-3}&{3}&{3}&{0}\\{-6}&{-2}&{10}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{cccc}{1}&{0}&{-3/2}&{0}\\{0}&{1}&{-1/2}&{0}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]
Our solution, in vector form, is
\[\vec{x}=x_{3}\left[\begin{array}{c}{3/2}\\{1/2}\\{1}\end{array}\right]. \nonumber \]
We can pick any nonzero value for \(x_3\); a nice choice would get rid of the fractions. So we’ll set \(x_3 = 2\) and choose \(\vec{x_{1}}=\left[\begin{array}{c}{3}\\{1}\\{2}\end{array}\right]\) as our eigenvector.
For \(\lambda_2 = 2\):
We need to solve the equation \((A-2I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{cccc}{-9}&{-2}&{10}&{0}\\{-3}&{0}&{3}&{0}\\{-6}&{-2}&{7}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{cccc}{1}&{0}&{-1}&{0}\\{0}&{1}&{-1/2}&{0}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]
Our solution, in vector form, is
\[\vec{x}=x_{3}\left[\begin{array}{c}{1}\\{1/2}\\{1}\end{array}\right]. \nonumber \]
We can pick any nonzero value for \(x_3\); again, a nice choice would get rid of the fractions. So we’ll set \(x_3 = 2\) and choose \(\vec{x_{2}}=\left[\begin{array}{c}{2}\\{1}\\{2}\end{array}\right]\) as our eigenvector.
For \(\lambda_3 = 3\):
We need to solve the equation \((A-3I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{cccc}{-10}&{-2}&{10}&{0}\\{-3}&{-1}&{3}&{0}\\{-6}&{-2}&{6}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{cccc}{1}&{0}&{-1}&{0}\\{0}&{1}&{0}&{0}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]
Our solution, in vector form, is (note that \(x_2 = 0\)):
\[\vec{x}=x_{3}\left[\begin{array}{c}{1}\\{0}\\{1}\end{array}\right]. \nonumber \]
We can pick any nonzero value for \(x_3\); an easy choice is \(x_3 = 1\), so we choose \(\vec{x_{3}}=\left[\begin{array}{c}{1}\\{0}\\{1}\end{array}\right]\) as our eigenvector.
To summarize, we have the following eigenvalue/eigenvector pairs:
\[\text{eigenvalue } \lambda_1=-1 \text{ with eigenvector } \vec{x_{1}} = \left[\begin{array}{c}{3}\\{1}\\{2}\end{array}\right] \nonumber \] \[\text{eigenvalue } \lambda_2=2 \text{ with eigenvector } \vec{x_{2}} = \left[\begin{array}{c}{2}\\{1}\\{2}\end{array}\right] \nonumber \] \[\text{eigenvalue } \lambda_3=3 \text{ with eigenvector }\vec{x_{3}} = \left[\begin{array}{c}{1}\\{0}\\{1}\end{array}\right] \nonumber \]
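A quick numerical check that \(A\vec{x}=\lambda\vec{x}\) holds for all three pairs (a sketch; the helper name `matvec` is ours):

```python
def matvec(A, x):
    # Entry i of A*x is the dot product of row i of A with x.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[-7, -2, 10],
     [-3, 2, 3],
     [-6, -2, 9]]

# Print A*v alongside lam*v for each eigenvalue/eigenvector pair.
for lam, v in [(-1, [3, 1, 2]), (2, [2, 1, 2]), (3, [1, 0, 1])]:
    print(matvec(A, v), "=", [lam * vi for vi in v])
```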
Let’s practice once more.
Find the eigenvalues of \(A\), and for each eigenvalue, give one eigenvector, where
\[A=\left[\begin{array}{ccc}{2}&{-1}&{1}\\{0}&{1}&{6}\\{0}&{3}&{4}\end{array}\right]. \nonumber \]
Solution
We first compute the characteristic polynomial, set it equal to \(0\), then solve for \(\lambda\). We’ll leave the details to you and just give the result.
\[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{ccc}{2-\lambda}&{-1}&{1}\\{0}&{1-\lambda}&{6}\\{0}&{3}&{4-\lambda}\end{array}\right| \\ &=(2-\lambda)(\lambda-7)(\lambda+2)\end{aligned}\end{align} \nonumber \]
Our eigenvalues are \(\lambda_1 = -2\), \(\lambda_2 = 2\) and \(\lambda_3 = 7\). We now find corresponding eigenvectors.
For \(\lambda_1 = -2\):
We need to solve the equation \((A-(-2)I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{cccc}{4}&{-1}&{1}&{0}\\{0}&{3}&{6}&{0}\\{0}&{3}&{6}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{cccc}{1}&{0}&{3/4}&{0}\\{0}&{1}&{2}&{0}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]
Our solution, in vector form, is
\[\vec{x}=x_{3}\left[\begin{array}{c}{-3/4}\\{-2}\\{1}\end{array}\right]. \nonumber \]
We can pick any nonzero value for \(x_3\); a nice choice would get rid of the fractions. So we’ll set \(x_3 = 4\) and choose \(\vec{x_{1}}=\left[\begin{array}{c}{-3}\\{-8}\\{4}\end{array}\right]\) as our eigenvector.
For \(\lambda_2 = 2\):
We need to solve the equation \((A-2I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{cccc}{0}&{-1}&{1}&{0}\\{0}&{-1}&{6}&{0}\\{0}&{3}&{2}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{cccc}{0}&{1}&{0}&{0}\\{0}&{0}&{1}&{0}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]
The first two rows tell us that \(x_2 = 0\) and \(x_3 = 0\), respectively. Notice that no row/equation uses \(x_1\); we conclude that it is free. Therefore, our solution in vector form is
\[\vec{x}=x_{1}\left[\begin{array}{c}{1}\\{0}\\{0}\end{array}\right]. \nonumber \]
We can pick any nonzero value for \(x_1\); an easy choice is \(x_1 = 1\), so we choose \(\vec{x_{2}}=\left[\begin{array}{c}{1}\\{0}\\{0}\end{array}\right]\) as our eigenvector.
For \(\lambda_3 = 7\):
We need to solve the equation \((A-7I)\vec{x}=\vec{0}\). To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\[\left[\begin{array}{cccc}{-5}&{-1}&{1}&{0}\\{0}&{-6}&{6}&{0}\\{0}&{3}&{-3}&{0}\end{array}\right]\quad\vec{\text{rref}}\quad\left[\begin{array}{cccc}{1}&{0}&{0}&{0}\\{0}&{1}&{-1}&{0}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]
Our solution, in vector form, is (note that \(x_1 = 0\)):
\[\vec{x}=x_{3}\left[\begin{array}{c}{0}\\{1}\\{1}\end{array}\right]. \nonumber \]
We can pick any nonzero value for \(x_3\); an easy choice is \(x_3 = 1\), so we choose \(\vec{x_{3}}=\left[\begin{array}{c}{0}\\{1}\\{1}\end{array}\right]\) as our eigenvector.
To summarize, we have the following eigenvalue/eigenvector pairs:
\[\text{eigenvalue } \lambda_1=-2 \text{ with eigenvector } \vec{x_{1}} = \left[\begin{array}{c}{-3}\\{-8}\\{4}\end{array}\right] \nonumber \] \[\text{eigenvalue } \lambda_2=2 \text{ with eigenvector } \vec{x_{2}} = \left[\begin{array}{c}{1}\\{0}\\{0}\end{array}\right] \nonumber \] \[\text{eigenvalue } \lambda_3=7 \text{ with eigenvector } \vec{x_{3}} = \left[\begin{array}{c}{0}\\{1}\\{1}\end{array}\right] \nonumber \]
So far we have only considered distinct real eigenvalues. Eigenvalues can also repeat, or even be complex; the ideas are similar to the above, and we will discuss these cases as they come up in Chapter 10.


