# 7.2: Eigenvalues

**Definition 7.2.1.** Let \(T\) in \(\mathcal{L}(V,V)\). Then \(\lambda\) in \(\mathbb{F}\) is an **eigenvalue** of \(T\) if there exists a nonzero vector \(u\in V\) such that

\[ T u = \lambda u.\]

The vector \(u\) is called an **eigenvector** of \(T\) corresponding to the eigenvalue \(\lambda\).
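The defining equation \(Tu=\lambda u\) is easy to check numerically. As a quick illustration (not part of the text), the sketch below uses a hypothetical \(2\times 2\) diagonal matrix and NumPy's `numpy.linalg.eig` to recover its eigenvalues:

```python
import numpy as np

# Hypothetical example: T acts on R^2 by T(x, y) = (2x, 3y).
T = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# u = (1, 0) is an eigenvector with eigenvalue 2, since T u = 2 u.
u = np.array([1.0, 0.0])
assert np.allclose(T @ u, 2.0 * u)

# numpy.linalg.eig returns the eigenvalues and a matrix whose
# columns are corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(T)
assert np.allclose(np.sort(eigenvalues), [2.0, 3.0])
```

For a diagonal matrix the eigenvalues are simply the diagonal entries, which makes this a convenient sanity check.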

Finding the eigenvalues and eigenvectors of a linear operator is one of the most important problems in Linear Algebra. We will see later that this so-called "eigen-information" has many uses and applications. (As an example, quantum mechanics is based upon understanding the eigenvalues and eigenvectors of operators on specifically defined vector spaces. These vector spaces are often infinite-dimensional, though, and so we do not consider them further in these notes.)

**Example 7.2.2.**

- Let \(T\) be the zero map defined by \(T(v)=0\) for all \(v\in V\). Then every vector \(u\neq 0\) is an eigenvector of \(T\) with eigenvalue \(0\).
- Let \(I\) be the identity map defined by \(I(v)=v\) for all \(v\in V\). Then every vector \(u\neq 0\) is an eigenvector of \(I\) with eigenvalue \(1\).
- The projection map \(P:\mathbb{R}^3 \to \mathbb{R}^3\) defined by \(P(x,y,z)=(x,y,0)\) has eigenvalues \(0\) and \(1\). The vector \((0,0,1)\) is an eigenvector with eigenvalue \(0\), and both \((1,0,0)\) and \((0,1,0)\) are eigenvectors with eigenvalue \(1\).
- Take the operator \(R:\mathbb{F}^2 \to \mathbb{F}^2\) defined by \(R(x,y)=(-y,x)\). When \(\mathbb{F}=\mathbb{R}\), \(R\) can be interpreted as counterclockwise rotation by \(90^\circ\). From this interpretation, it is clear that no nonzero vector in \(\mathbb{R}^2\) is mapped to a scalar multiple of itself. Hence, for \(\mathbb{F}=\mathbb{R}\), the operator \(R\) has no eigenvalues.

For \(\mathbb{F}=\mathbb{C}\), though, the situation is significantly different! In this case, \(\lambda\in \mathbb{C}\) is an eigenvalue of \(R\) if

\[ R(x,y) = (-y,x) = \lambda (x,y) \]

so that \(y=-\lambda x\) and \(x=\lambda y\). This implies that \(y=-\lambda^2 y\), i.e., that \(\lambda^2 = -1\). The solutions are hence \(\lambda=\pm i\). One can check that \((1,-i)\) is an eigenvector with eigenvalue \(i\) and that \((1,i)\) is an eigenvector with eigenvalue \(-i\).
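This calculation can be confirmed numerically. In the sketch below (an illustration, not part of the text), the rotation \(R\) is written as a matrix, and `numpy.linalg.eig`, which works over \(\mathbb{C}\), finds the complex eigenvalues \(\pm i\):

```python
import numpy as np

# The rotation operator R(x, y) = (-y, x) as a matrix.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Over R the characteristic polynomial lambda^2 + 1 has no real roots,
# but numpy.linalg.eig works over C and finds +i and -i.
eigenvalues, eigenvectors = np.linalg.eig(R)
assert np.allclose(sorted(eigenvalues, key=lambda z: z.imag), [-1j, 1j])

# Check directly that (1, -i) is an eigenvector with eigenvalue i.
v = np.array([1.0, -1.0j])
assert np.allclose(R @ v, 1j * v)
```

Note that `R @ v` evaluates to \((i, 1)\), which is exactly \(i \cdot (1, -i)\), matching the check in the text.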

Eigenspaces are important examples of invariant subspaces. Let \(T\in \mathcal{L}(V,V)\), and let \(\lambda\in \mathbb{F}\) be an eigenvalue of \(T\). Then

\[ V_\lambda = \{ v\in V \mid Tv = \lambda v \} \]

is called an **eigenspace** of \(T\). Equivalently,

\[ V_\lambda = \ker(T-\lambda I).\]

Note that \(V_\lambda \neq \{0\}\) since \(\lambda\) is an eigenvalue if and only if there exists a nonzero vector \(u\) in \(V\) such that \(Tu=\lambda u\). We can reformulate this as follows:

\(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T-\lambda I\) is not injective.

Since the notions of injectivity, surjectivity, and invertibility are equivalent for operators on a finite-dimensional vector space, we can equivalently say either of the following:

- \(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T-\lambda I\) is not surjective.
- \(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T-\lambda I\) is not invertible.
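These equivalences can be illustrated numerically for the projection \(P\) from Example 7.2.2 (the code is an illustration, not part of the text): \(\det(P-\lambda I)=0\) exactly at the eigenvalues \(0\) and \(1\), and the eigenspace \(V_0=\ker(P-0\cdot I)\) can be extracted from the singular value decomposition.

```python
import numpy as np

# Projection P(x, y, z) = (x, y, 0) from Example 7.2.2, as a matrix.
P = np.diag([1.0, 1.0, 0.0])
I = np.eye(3)

# P - lambda*I is singular (determinant zero) iff lambda is an eigenvalue.
assert np.isclose(np.linalg.det(P - 0.0 * I), 0.0)      # lambda = 0: eigenvalue
assert np.isclose(np.linalg.det(P - 1.0 * I), 0.0)      # lambda = 1: eigenvalue
assert not np.isclose(np.linalg.det(P - 2.0 * I), 0.0)  # lambda = 2: not one

# The eigenspace V_0 = ker(P): right-singular vectors whose singular
# value is (numerically) zero span the kernel.
_, s, Vt = np.linalg.svd(P)
kernel_basis = Vt[s < 1e-10]
# V_0 is spanned by (0, 0, 1), matching the example (up to sign).
assert np.allclose(np.abs(kernel_basis), [[0.0, 0.0, 1.0]])
```

Using the SVD for the kernel is a standard numerically stable choice; any basis of \(\ker(P-\lambda I)\) consists of eigenvectors for \(\lambda\), together with \(0\).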

We close this section with two fundamental facts about eigenvalues and eigenvectors.

**Theorem 7.2.3.** *Let \(T\in \mathcal{L}(V,V)\), and let \(\lambda_1,\ldots,\lambda_m\in \mathbb{F}\) be \(m\) distinct eigenvalues of \(T\) with corresponding nonzero eigenvectors \(v_1,\ldots,v_m\). Then \((v_1,\ldots,v_m)\) is linearly independent.*

*Proof.* Suppose that \((v_1,\ldots,v_m)\) is linearly dependent. Then, by the Linear Dependence Lemma, there exists an index \(k \in \{2,\ldots,m\}\) such that

\[ v_k \in \operatorname{span}(v_1,\ldots,v_{k-1}) \]

and such that \((v_1,\ldots,v_{k-1})\) is linearly independent. This means that there exist scalars \(a_1,\ldots,a_{k-1}\) in \(\mathbb{F}\) such that

\[ v_k = a_1 v_1 + \cdots + a_{k-1} v_{k-1}. \tag{7.2.1} \]

Applying \(T\) to both sides yields, using the fact that \(v_j\) is an eigenvector with eigenvalue \(\lambda_j\),

\[ \lambda_k v_k = a_1 \lambda_1 v_1 + \cdots + a_{k-1} \lambda_{k-1} v_{k-1}.\]

Subtracting \(\lambda_k\) times Equation **(7.2.1)** from this, we obtain

\[ 0 = (\lambda_k - \lambda_1)a_1 v_1 + \cdots + (\lambda_k-\lambda_{k-1}) a_{k-1} v_{k-1}. \]

Since \((v_1,\ldots,v_{k-1})\) is linearly independent, we must have \((\lambda_k-\lambda_j)a_j=0\) for all \(j=1,2,\ldots, k-1\). By assumption, all eigenvalues are distinct, so \(\lambda_k-\lambda_j\neq 0\), which implies that \(a_j=0\) for all \(j=1,2,\ldots,k-1\). But then, by Equation **(7.2.1)**, \(v_k=0\), which contradicts the assumption that all eigenvectors are nonzero. Hence \((v_1,\ldots,v_m)\) is linearly independent.

**Corollary 7.2.4.** *Any operator* \(T \in \mathcal{L}(V,V)\) *has at most* \(\dim(V)\) *distinct eigenvalues.*

*Proof.* Let \(\lambda_1,\ldots,\lambda_m\) be distinct eigenvalues of \(T\), and let \(v_1,\ldots,v_m\) be corresponding nonzero eigenvectors. By Theorem 7.2.3, the list \((v_1,\ldots,v_m)\) is linearly independent. Hence \(m \le \dim(V)\).

### Contributors

- Isaiah Lankham, Mathematics Department at UC Davis
- Bruno Nachtergaele, Mathematics Department at UC Davis
- Anne Schilling, Mathematics Department at UC Davis

Both hardbound and softbound versions of this textbook are available online at WorldScientific.com.