13.1: Diagonalization
Suppose we are lucky, and we have \(L:V\to V\), and the ordered basis \(B=(v_{1},\ldots,v_{n})\) is a set of linearly independent eigenvectors for \(L\), with eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\). Then:
\[
\begin{aligned}
L(v_{1}) &= \lambda_{1}v_{1}\\
L(v_{2}) &= \lambda_{2}v_{2}\\
&\;\;\vdots\\
L(v_{n}) &= \lambda_{n}v_{n}.
\end{aligned}
\]
As a result, the matrix of L in the basis of eigenvectors B is diagonal:
\[
L\begin{pmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{pmatrix}_{B}
=\left(
\begin{pmatrix}
\lambda_{1} & & & \\
 & \lambda_{2} & & \\
 & & \ddots & \\
 & & & \lambda_{n}
\end{pmatrix}
\begin{pmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{pmatrix}
\right)_{B},
\]
where all entries off of the diagonal are zero.
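For concreteness, here is a small worked example; the particular matrix is our own illustration, not from the original text. Take \(L:\mathbb{R}^{2}\to\mathbb{R}^{2}\) with matrix \(\begin{pmatrix}2&1\\1&2\end{pmatrix}\) in the standard basis. One checks that \(v_{1}=\begin{pmatrix}1\\1\end{pmatrix}\) and \(v_{2}=\begin{pmatrix}1\\-1\end{pmatrix}\) are eigenvectors with eigenvalues \(\lambda_{1}=3\) and \(\lambda_{2}=1\), since
\[
\begin{pmatrix}2&1\\1&2\end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix}
=\begin{pmatrix}3\\3\end{pmatrix}=3\begin{pmatrix}1\\1\end{pmatrix},
\qquad
\begin{pmatrix}2&1\\1&2\end{pmatrix}\begin{pmatrix}1\\-1\end{pmatrix}
=\begin{pmatrix}1\\-1\end{pmatrix}.
\]
In the basis \(B=(v_{1},v_{2})\), the matrix of \(L\) is the diagonal matrix \(\begin{pmatrix}3&0\\0&1\end{pmatrix}\).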
Suppose that \(V\) is any \(n\)-dimensional vector space. We call a linear transformation \(L:V\to V\) diagonalizable if there exists a collection of \(n\) linearly independent eigenvectors for \(L\). In other words, \(L\) is diagonalizable if there exists a basis for \(V\) of eigenvectors for \(L\).
In a basis of eigenvectors, the matrix of a linear transformation is diagonal. On the other hand, if an \(n\times n\) matrix is diagonal, then the standard basis vectors \(e_{i}\) must already be a set of \(n\) linearly independent eigenvectors. We have shown:
Theorem 13.1.1:
Given an ordered basis \(B\) for a vector space \(V\) and a linear transformation \(L:V\to V\), the matrix for \(L\) in the basis \(B\) is diagonal if and only if \(B\) consists of eigenvectors for \(L\).
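The theorem is easy to check numerically. Below is a minimal sketch using NumPy; the matrix \(A\) and the variable names are our own illustration, not part of the text. Forming the matrix \(P\) whose columns are eigenvectors and computing \(P^{-1}AP\) produces a diagonal matrix with the eigenvalues on the diagonal.

import numpy as np

# Illustrative matrix (not from the text); symmetric, so its eigenvalues are real.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
lam, P = np.linalg.eig(A)

# Change to the eigenvector basis: D = P^{-1} A P should be diagonal,
# with the eigenvalues lam on its diagonal.
D = np.linalg.inv(P) @ A @ P

print(np.round(D, 10))  # [[3. 0.], [0. 1.]], up to the ordering of eigenvalues
print(lam)              # [3. 1.], in the same order as the columns of P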
Typically, however, we do not begin a problem with a basis of eigenvectors, but rather must compute them. Hence we need to know how to change from one basis to another.
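As a preview of that discussion (stated here rather than quoted from it): if \(P\) is the invertible matrix whose columns express the new basis vectors in the old basis, then a matrix \(M\) for \(L\) in the old basis becomes \(P^{-1}MP\) in the new basis, so diagonalizing \(L\) amounts to finding a \(P\) for which \(P^{-1}MP\) is diagonal.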
Contributor
David Cherney, Tom Denton, and Andrew Waldron (UC Davis)