5.3: Symmetric and Hermitian Matrices
When a real matrix \(A\) is equal to its transpose, \(A^{T}=A\), we say that the matrix is symmetric. When a complex matrix \(A\) is equal to its conjugate transpose, \(A^{\dagger}=A\), we say that the matrix is Hermitian.
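These two defining properties are easy to test directly. The sketch below is illustrative (the sample matrices are not from the text); it represents matrices as lists of rows and checks \(A=A^{T}\) and \(A=A^{\dagger}\) entrywise.

```python
def transpose(A):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*A)]

def conjugate_transpose(A):
    """Conjugate transpose (dagger) of a complex matrix."""
    return [[x.conjugate() for x in row] for row in zip(*A)]

def is_symmetric(A):
    # A real matrix is symmetric when A equals its transpose.
    return A == transpose(A)

def is_hermitian(A):
    # A complex matrix is Hermitian when A equals its conjugate transpose.
    return A == conjugate_transpose(A)

# Sample matrices (illustrative): a real symmetric matrix, and a Hermitian
# matrix with a real diagonal and conjugate off-diagonal entries.
S = [[2, 1], [1, 2]]
H = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]

assert is_symmetric(S)
assert is_hermitian(H)
```

Note that a Hermitian matrix must have a real diagonal, since each diagonal entry equals its own conjugate.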
One of the reasons symmetric and Hermitian matrices are important is that their eigenvalues are real and their eigenvectors are orthogonal. Let \(\lambda_{i}\) and \(\lambda_{j}\) be eigenvalues of the possibly complex matrix \(A\), with corresponding eigenvectors \(x_{i}\) and \(x_{j}\). We have
\[\mathrm{A} x_{i}=\lambda_{i} x_{i}, \quad \mathrm{A} x_{j}=\lambda_{j} x_{j} . \nonumber \]
Multiplying the first equation on the left by \(x_{j}^{\dagger}\), and taking the conjugate transpose of the second equation and multiplying on the right by \(x_{i}\), we obtain
\[x_{j}^{\dagger} \mathrm{A} x_{i}=\lambda_{i} x_{j}^{\dagger} x_{i}, \quad x_{j}^{\dagger} \mathrm{A}^{\dagger} x_{i}=\bar{\lambda}_{j} x_{j}^{\dagger} x_{i} . \nonumber \]
If \(A\) is Hermitian, then \(A^{\dagger}=A\), and subtracting the second equation from the first yields
\[\left(\lambda_{i}-\bar{\lambda}_{j}\right) x_{j}^{\dagger} x_{i}=0 . \nonumber \]
If \(i=j\), then since \(x_{i}^{\dagger} x_{i}>0\), we have \(\bar{\lambda}_{i}=\lambda_{i}\): all eigenvalues are real. If \(i \neq j\) and \(\lambda_{i} \neq \lambda_{j}\), then \(x_{j}^{\dagger} x_{i}=0\): eigenvectors with distinct eigenvalues are orthogonal. Usually the eigenvectors are made orthonormal, and diagonalization makes use of real orthogonal or complex unitary matrices.
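As a numerical sketch of both conclusions (the sample Hermitian matrix is illustrative, not from the text), we can compute the eigenvalues of a \(2 \times 2\) Hermitian matrix from its characteristic polynomial \(\lambda^{2}-(\operatorname{tr} A)\lambda+\det A=0\), confirm they are real, and check that \(x_{1}^{\dagger} x_{2}=0\):

```python
import cmath

# Sample Hermitian matrix (illustrative).
H = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]

# Characteristic polynomial of a 2x2 matrix: lambda^2 - tr*lambda + det = 0.
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2

# Both eigenvalues are real: the imaginary parts vanish (to rounding error).
assert abs(lam1.imag) < 1e-12 and abs(lam2.imag) < 1e-12

# Eigenvectors from (H - lam I) x = 0: the choice x = (H[0][1], lam - H[0][0])
# satisfies the first row identically (valid whenever H[0][1] != 0).
v1 = [H[0][1], lam1 - H[0][0]]
v2 = [H[0][1], lam2 - H[0][0]]

# Eigenvectors with distinct eigenvalues are orthogonal: x_1^dagger x_2 = 0.
inner = sum(a.conjugate() * b for a, b in zip(v1, v2))
assert abs(inner) < 1e-12
```

For this sample matrix the eigenvalues come out to \(4\) and \(1\), both real, and the inner product of the two eigenvectors vanishes exactly.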
Example: Diagonalize the symmetric matrix
\[\mathrm{A}=\left(\begin{array}{ll} a & b \\ b & a \end{array}\right) \nonumber \]
The characteristic equation of \(\mathrm{A}\) is given by
\[(a-\lambda)^{2}=b^{2}, \nonumber \]
with real eigenvalues \(\lambda_{1}=a+b\) and \(\lambda_{2}=a-b\). The eigenvector with eigenvalue \(\lambda_{1}\) satisfies \(-x_{1}+x_{2}=0\), and the eigenvector with eigenvalue \(\lambda_{2}\) satisfies \(x_{1}+x_{2}=0\). Normalizing the eigenvectors, we have
\[\lambda_{1}=a+b, \mathrm{X}_{1}=\frac{1}{\sqrt{2}}\left(\begin{array}{l} 1 \\ 1 \end{array}\right) ; \quad \lambda_{2}=a-b, \mathrm{X}_{2}=\frac{1}{\sqrt{2}}\left(\begin{array}{r} 1 \\ -1 \end{array}\right) \nonumber \]
Evidently, the eigenvectors are orthonormal. The diagonalization using \(\mathrm{A}=\mathrm{Q} \Lambda \mathrm{Q}^{-1}\) is given by
\[\left(\begin{array}{ll} a & b \\ b & a \end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{rr} 1 & 1 \\ 1 & -1 \end{array}\right)\left(\begin{array}{cc} a+b & 0 \\ 0 & a-b \end{array}\right) \frac{1}{\sqrt{2}}\left(\begin{array}{rr} 1 & 1 \\ 1 & -1 \end{array}\right), \nonumber \]
which can be verified directly by matrix multiplication. The matrix \(Q\) is both symmetric and orthogonal, so that \(Q^{-1}=Q^{T}=Q\).
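The verification by matrix multiplication can be carried out exactly in code. In the sketch below (with illustrative values \(a=3\), \(b=1\)), the two factors of \(1/\sqrt{2}\) from \(Q\) and \(Q^{-1}=Q\) combine to \(1/2\), so we can multiply integer matrices and halve at the end, avoiding floating-point roundoff:

```python
from fractions import Fraction  # exact rational arithmetic, no rounding noise

def matmul(X, Y):
    """Product of two 2x2 matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Illustrative values for a and b (any reals work).
a, b = 3, 1
A = [[a, b], [b, a]]

# Q2 = sqrt(2) * Q; the two 1/sqrt(2) factors in Q Lambda Q combine to 1/2.
Q2 = [[1, 1], [1, -1]]
Lam = [[a + b, 0], [0, a - b]]

half = Fraction(1, 2)
product = matmul(matmul(Q2, Lam), Q2)
recovered = [[half * x for x in row] for row in product]

# Q Lambda Q^{-1} reproduces A exactly.
assert recovered == A
```

Since \(Q^{-1}=Q\) here, the same multiplication also confirms the reverse factorization \(\Lambda=Q^{-1} \mathrm{A} Q\).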