15: Diagonalizing Symmetric Matrices
Symmetric matrices have many applications. For example, if we consider the shortest distance between pairs of important cities, we might get a table like this:
\[\begin{array}{c|ccc}& Davis & Seattle & San\; Francisco \\ \hline Davis & 0 & 2000 & 80 \\Seattle & 2000 & 0 & 2010 \\San \;Francisco & 80 & 2010 & 0\end{array}\]
Encoded as a matrix, we obtain:
\[M=\begin{pmatrix}0 & 2000 & 80 \\2000 & 0 & 2010 \\80 & 2010 & 0\end{pmatrix}=M^{T}.\]
Definition: Symmetric Matrix
A matrix is symmetric if it obeys \[M=M^{T}.\]
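For readers who want to verify this numerically, here is a minimal Python sketch using numpy (the code is illustrative and not part of the original text):

```python
import numpy as np

# The city-distance matrix M from the example above.
M = np.array([[   0, 2000,   80],
              [2000,    0, 2010],
              [  80, 2010,    0]])

# A matrix is symmetric exactly when it equals its transpose.
print(np.array_equal(M, M.T))  # True
```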
One nice property of symmetric matrices is that they always have real eigenvalues. Review exercise 1 guides you through the general proof, but here's an example for \(2\times 2\) matrices:
Example 1:
For a general symmetric \(2\times 2\) matrix, we have:
\begin{eqnarray*}P_{\lambda} \begin{pmatrix} a & b \\ b& d \end{pmatrix}&=&\det\begin{pmatrix}\lambda-a&-b\\-b&\lambda-d \end{pmatrix}\\&=& (\lambda-a)(\lambda-d)-b^{2} \\&=& \lambda^{2}-(a+d)\lambda-b^{2}+ad\\\Rightarrow \lambda &=& \frac{a+d}{2}\pm \sqrt{b^{2}+\left(\frac{a-d}{2}\right)^{2}}.\end{eqnarray*}
Notice that the discriminant \(4b^{2}+(a-d)^{2}\) is always non-negative, so that the eigenvalues must be real.
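To see the formula in action, one can compare it against a numerical eigensolver for arbitrary values of \(a\), \(b\), and \(d\); a small sketch in Python with numpy (our own check, not part of the text):

```python
import numpy as np

a, b, d = 2.0, 1.0, -3.0        # an arbitrary symmetric 2x2 matrix [[a, b], [b, d]]
M = np.array([[a, b], [b, d]])

# Closed form from the characteristic polynomial derived above.
mid = (a + d) / 2
root = np.sqrt(b**2 + ((a - d) / 2)**2)
print(sorted([mid - root, mid + root]))  # [-3.1925..., 2.1925...]

# numpy's symmetric eigensolver returns the same (real) eigenvalues.
print(np.linalg.eigvalsh(M))             # ascending order, matches the above
```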
Now, suppose a symmetric matrix \(M\) has two distinct eigenvalues \(\lambda \neq \mu\) and eigenvectors \(x\) and \(y\):
\[Mx=\lambda x, \qquad My=\mu y.\]
Consider the dot product \(x\cdot y = x^{T}y = y^{T}x\) and calculate:
\begin{eqnarray*}x^{T}M y &=& x^{T}\mu y = \mu\, x\cdot y, \textit{ and }\\x^{T}M y &=& (x^{T}My)^{T} \textit{ (by transposing a \(1\times 1\) matrix)}\\&=& y^{T}M^{T}x \\&=& y^{T}Mx \\&=& y^{T}\lambda x \\&=& \lambda\, x\cdot y.\end{eqnarray*}
Subtracting these two results tells us that:
\begin{eqnarray*}0 &=& x^{T}My-x^{T}My=\mu\, x\cdot y - \lambda\, x\cdot y=(\mu-\lambda)\,x\cdot y.\end{eqnarray*}
Since \(\mu\) and \(\lambda\) were assumed to be distinct eigenvalues, \(\mu-\lambda\) is non-zero, and so \(x\cdot y=0\). We have proved the following theorem.
Theorem
Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.
Example 2:
The matrix \(M=\begin{pmatrix}2&1\\1&2\end{pmatrix}\) has eigenvalues determined by
\[\det(M-\lambda I)=(2-\lambda)^{2}-1=0.\]
So the eigenvalues of \(M\) are \(3\) and \(1\), and the associated eigenvectors turn out to be \(\begin{pmatrix}1\\1\end{pmatrix}\) and \(\begin{pmatrix}1\\-1\end{pmatrix}\). It is easily seen that these eigenvectors are orthogonal:
\[\begin{pmatrix}1\\1\end{pmatrix} \cdot \begin{pmatrix}1\\-1\end{pmatrix}=0.\]
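The same orthogonality can be observed numerically: `numpy.linalg.eigh` returns the orthonormal eigenvectors of a symmetric matrix as columns. A minimal sketch:

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])
evals, evecs = np.linalg.eigh(M)         # columns of evecs are eigenvectors
print(evals)                             # [1. 3.]

# Eigenvectors for the two distinct eigenvalues are orthogonal.
print(np.dot(evecs[:, 0], evecs[:, 1]))  # 0.0 (up to rounding)
```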
In chapter 14 we saw that the matrix \(P\) whose columns are the vectors of any orthonormal basis \((v_{1},\ldots, v_{n})\) for \(\mathbb{R}^{n}\),
\[P=\begin{pmatrix}v_{1} & \cdots & v_{n}\end{pmatrix}\, ,\]
was an orthogonal matrix:
\[P^{-1}=P^{T}, \textit{ or } PP^{T}=I=P^{T}P.\]
Moreover, given any unit vector \(x_{1}\), one can always find vectors \(x_{2}, \ldots, x_{n}\) such that \((x_{1},\ldots, x_{n})\) is an orthonormal basis. (Such a basis can be obtained using the Gram-Schmidt procedure.)
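A rough sketch of such a completion in Python, running Gram-Schmidt on \(x_{1}\) together with the standard basis (the helper name `complete_to_orthonormal_basis` is ours, introduced only for illustration):

```python
import numpy as np

def complete_to_orthonormal_basis(x1, tol=1e-12):
    """Extend the unit vector x1 to an orthonormal basis of R^n by
    Gram-Schmidt on (x1, e_1, ..., e_n), discarding dependent vectors."""
    n = len(x1)
    basis = [x1 / np.linalg.norm(x1)]
    for e in np.eye(n):                   # e runs over the standard basis
        v = e - sum(np.dot(e, q) * q for q in basis)  # subtract projections
        if np.linalg.norm(v) > tol:       # keep only genuinely new directions
            basis.append(v / np.linalg.norm(v))
        if len(basis) == n:
            break
    return np.column_stack(basis)         # columns form an orthonormal basis

x1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
P = complete_to_orthonormal_basis(x1)
print(np.allclose(P.T @ P, np.eye(3)))    # True: P is orthogonal
print(np.allclose(P[:, 0], x1))           # True: first column is x1
```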
Now suppose \(M\) is a symmetric \(n\times n\) matrix and \(\lambda_{1}\) is an eigenvalue with eigenvector \(x_{1}\) (this is always possible because every matrix has at least one eigenvalue; see review problem 3). Let the square matrix of column vectors \(P\) be the following:
\[P=\begin{pmatrix}x_{1} & x_{2} & \cdots & x_{n}\end{pmatrix},\]
where \(x_{1}\) through \(x_{n}\) are orthonormal, and \(x_{1}\) is an eigenvector for \(M\), but the others are not necessarily eigenvectors for \(M\). Then
\[MP=\begin{pmatrix}\lambda_{1} x_{1} & Mx_{2} & \cdots & Mx_{n}\end{pmatrix}.\]
But \(P\) is an orthogonal matrix, so \(P^{-1}=P^{T}\). Then:
\begin{eqnarray*}P^{-1}=P^{T} &=& \begin{pmatrix}x_{1}^{T}\\ \vdots \\ x_{n}^{T}\end{pmatrix} \\\Rightarrow P^{T}MP &=& \begin{pmatrix}x_{1}^{T}\lambda_{1}x_{1} & * & \cdots & *\\x_{2}^{T}\lambda_{1}x_{1} & * & \cdots & *\\\vdots & & & \vdots\\x_{n}^{T}\lambda_{1}x_{1} & * & \cdots & *\\\end{pmatrix}\\&=& \begin{pmatrix}\lambda_{1} & * & \cdots & *\\0 & * & \cdots & *\\\vdots & * & & \vdots\\0 & * & \cdots & *\\\end{pmatrix}\\&=& \begin{pmatrix}\lambda_{1} & 0 & \cdots & 0\\0 & & & \\\vdots & & \hat{M} & \\0 & & & \\\end{pmatrix}\, .\\\end{eqnarray*}
The last equality follows since \(P^{T}MP\) is symmetric. The asterisks in the matrix are where "stuff" happens; this extra information is denoted by \(\hat{M}\) in the final expression. We know nothing about \(\hat{M}\) except that it is an \((n-1)\times (n-1)\) matrix and that it is symmetric. But then, by finding a unit eigenvector for \(\hat{M}\), we could repeat this procedure successively. The end result would be a diagonal matrix with the eigenvalues of \(M\) on the diagonal. Again, we have proved a theorem:
Theorem
Every symmetric matrix is orthogonally similar to a diagonal matrix of its eigenvalues. In other words,
\[M=M^{T} \Leftrightarrow M=PDP^{T}\]
where \(P\) is an orthogonal matrix and \(D\) is a diagonal matrix whose entries are the eigenvalues of \(M\).
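The inductive procedure in the proof can be carried out numerically as well. The following Python sketch deflates one eigenvector at a time, exactly as in the argument above; it borrows `numpy.linalg.eigh` merely to produce a single eigenpair at each step and uses a QR factorization to complete \(x_{1}\) to an orthonormal basis. This is an illustration of the proof, not an efficient algorithm:

```python
import numpy as np

def diagonalize_symmetric(M):
    """Orthogonally diagonalize a symmetric matrix M by deflation:
    returns (P, D) with M = P D P^T, P orthogonal, D diagonal."""
    n = M.shape[0]
    if n == 1:
        return np.eye(1), M.copy()
    # Step 1: one eigenpair (lambda_1, x_1); any method would do here.
    evals, evecs = np.linalg.eigh(M)
    x1 = evecs[:, 0]
    # Step 2: complete x_1 to an orthonormal basis; the first column of
    # Q in a QR factorization is x_1 up to sign.
    Q, _ = np.linalg.qr(np.column_stack([x1, np.eye(n)[:, 1:]]))
    # Step 3: conjugate. A = Q^T M Q has block form [[lambda_1, 0], [0, Mhat]].
    A = Q.T @ M @ Q
    Mhat = A[1:, 1:]
    # Step 4: recurse on the smaller symmetric matrix Mhat.
    Phat, Dhat = diagonalize_symmetric(Mhat)
    P2 = np.eye(n)
    P2[1:, 1:] = Phat
    D = np.zeros((n, n))
    D[0, 0] = A[0, 0]
    D[1:, 1:] = Dhat
    return Q @ P2, D

M = np.array([[0, 2000, 80], [2000, 0, 2010], [80, 2010, 0]], dtype=float)
P, D = diagonalize_symmetric(M)
print(np.allclose(P @ D @ P.T, M))       # True: M = P D P^T
print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal
```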
To diagonalize a real symmetric matrix, begin by building an orthogonal matrix from an orthonormal basis of eigenvectors:
Example 3:
The symmetric matrix
\[M=\begin{pmatrix}2&1\\1&2\end{pmatrix}\, ,\]
has eigenvalues \(3\) and \(1\) with eigenvectors \(\begin{pmatrix}1\\1\end{pmatrix}\) and \(\begin{pmatrix}1\\-1\end{pmatrix}\) respectively. After normalizing these eigenvectors, we build the orthogonal matrix:
\[P = \begin{pmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}}\end{pmatrix}\, .\]
Notice that \(P^{T}P=I\). Then:
\[MP = \begin{pmatrix}\frac{3}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\\frac{3}{\sqrt{2}} & \frac{-1}{\sqrt{2}}\end{pmatrix} = \begin{pmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}}\end{pmatrix} \begin{pmatrix}3 & 0 \\0 & 1\end{pmatrix}.\]
In short, \(MP=PD\), so \(D=P^{-1}MP=P^{T}MP\). Then \(D\) is the diagonalized form of \(M\) and \(P\) the associated change-of-basis matrix from the standard basis to the basis of eigenvectors.
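This example is easy to check numerically; a short sketch comparing the hand-built \(P\) with numpy's symmetric eigensolver (which may order and sign the eigenvectors differently):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])
P = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

D = P.T @ M @ P
print(np.allclose(D, np.diag([3.0, 1.0])))  # True: D = diag(3, 1)
print(np.linalg.eigvalsh(M))                # [1. 3.], the same eigenvalues
```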
Contributor
David Cherney, Tom Denton, and Andrew Waldron (UC Davis)