
5.2: Matrix Diagonalization


View Matrix Diagonalization on YouTube

View Powers of a Matrix on YouTube

View Powers of a Matrix Example on YouTube

    For concreteness, consider a 2-by-2 matrix A with eigenvalues and eigenvectors given by

    \[\lambda_{1}, \mathrm{x}_{1}=\left(\begin{array}{l} x_{11} \\ x_{21} \end{array}\right) ; \quad \lambda_{2}, \quad \mathrm{x}_{2}=\left(\begin{array}{l} x_{12} \\ x_{22} \end{array}\right) \nonumber \]

    Now, consider the matrix product and factorization

    \[A\left(\begin{array}{ll} x_{11} & x_{12} \\ x_{21} & x_{22} \end{array}\right)=\left(\begin{array}{ll} \lambda_{1} x_{11} & \lambda_{2} x_{12} \\ \lambda_{1} x_{21} & \lambda_{2} x_{22} \end{array}\right)=\left(\begin{array}{cc} x_{11} & x_{12} \\ x_{21} & x_{22} \end{array}\right)\left(\begin{array}{cc} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{array}\right) . \nonumber \]

    We define \(S\) to be the matrix whose columns are the eigenvectors of \(A\), and \(\Lambda\) to be the diagonal eigenvalue matrix. Then generalizing to any square matrix with a complete set of eigenvectors, we have

    \[\mathrm{AS}=\mathrm{S} \Lambda . \nonumber \]

    Multiplying both sides on the right or the left by \(\mathrm{S}^{-1}\), we have found

    \[\mathrm{A}=\mathrm{S} \Lambda \mathrm{S}^{-1} \text { or } \Lambda=\mathrm{S}^{-1} \mathrm{AS} . \nonumber \]

    To memorize the order of the \(S\) matrices in these formulas, just remember that \(A\) should be multiplied on the right by \(S\).
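As a quick numerical illustration, the following NumPy sketch builds \(\mathrm{S}\) and \(\Lambda\) for a hypothetical 2-by-2 matrix (the matrix `A` below is an invented example, not one from the text) and checks both forms of the factorization.

```python
import numpy as np

# A hypothetical 2-by-2 matrix with a complete set of eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# the corresponding eigenvectors, i.e., our S.
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)  # the diagonal eigenvalue matrix Lambda

# Check A = S Lambda S^{-1} and Lambda = S^{-1} A S.
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))  # True
print(np.allclose(Lam, np.linalg.inv(S) @ A @ S))  # True
```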

    Diagonalizing a matrix facilitates finding powers of that matrix. For instance,

\[\mathrm{A}^{2}=\left(\mathrm{S} \Lambda \mathrm{S}^{-1}\right)\left(\mathrm{S} \Lambda \mathrm{S}^{-1}\right)=\mathrm{S} \Lambda^{2} \mathrm{S}^{-1}, \nonumber \]

    where in the 2-by-2 example, \(\Lambda^{2}\) is simply

    \[\left(\begin{array}{cc} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{array}\right)\left(\begin{array}{cc} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{array}\right)=\left(\begin{array}{cc} \lambda_{1}^{2} & 0 \\ 0 & \lambda_{2}^{2} \end{array}\right) \nonumber \]

    In general, \(\Lambda^{2}\) has the eigenvalues squared down the diagonal. More generally, for \(p\) a positive integer,

\[\mathrm{A}^{p}=\mathrm{S} \Lambda^{p} \mathrm{S}^{-1} . \nonumber \]
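Continuing the hypothetical NumPy example above, a sketch showing that a power of \(A\) computed through the diagonalization agrees with repeated multiplication; only the eigenvalues need to be raised to the power \(p\).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # same hypothetical matrix as above
eigvals, S = np.linalg.eig(A)

p = 5
# Lambda^p is diagonal, so only the eigenvalues are raised to the power p.
A_p = S @ np.diag(eigvals**p) @ np.linalg.inv(S)

print(np.allclose(A_p, np.linalg.matrix_power(A, p)))  # True
```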

    Example: Recall the Fibonacci Q-matrix, which satisfies

    \[\mathrm{Q}=\left(\begin{array}{ll} 1 & 1 \\ 1 & 0 \end{array}\right), \quad \mathrm{Q}^{n}=\left(\begin{array}{cc} F_{n+1} & F_{n} \\ F_{n} & F_{n-1} \end{array}\right) \nonumber \]

    Using \(\mathrm{Q}\) and \(\mathrm{Q}^{n}\), derive Binet’s formula for \(F_{n}\).

    The characteristic equation of \(Q\) is given by

\[\lambda^{2}-\lambda-1=0, \nonumber \]

    with solutions

    \[\lambda_{1}=\frac{1+\sqrt{5}}{2}=\Phi, \quad \lambda_{2}=\frac{1-\sqrt{5}}{2}=-\phi \nonumber \]

    Useful identities are

    \[\Phi=1+\phi, \quad \Phi=1 / \phi, \quad \text { and } \quad \Phi+\phi=\sqrt{5} \nonumber \]

The eigenvector corresponding to \(\Phi\) can be found from the second row of \((\mathrm{Q}-\Phi \mathrm{I}) \mathrm{x}=0\), namely

    \[x_{1}-\Phi x_{2}=0, \nonumber \]

    and the eigenvector corresponding to \(-\phi\) can be found from

    \[x_{1}+\phi x_{2}=0 . \nonumber \]

    Therefore, the eigenvalues and eigenvectors can be written as

    \[\lambda_{1}=\Phi, \quad \mathrm{x}_{1}=\left(\begin{array}{c} \Phi \\ 1 \end{array}\right) ; \quad \lambda_{2}=-\phi, \mathrm{x}_{2}=\left(\begin{array}{r} -\phi \\ 1 \end{array}\right) \nonumber \]

    The eigenvector matrix \(S\) becomes

    \[\mathrm{S}=\left(\begin{array}{rr} \Phi & -\phi \\ 1 & 1 \end{array}\right) \nonumber \]

and the inverse of this 2-by-2 matrix, using \(\operatorname{det} \mathrm{S}=\Phi+\phi=\sqrt{5}\), is given by

    \[\mathrm{S}^{-1}=\frac{1}{\sqrt{5}}\left(\begin{array}{rr} 1 & \phi \\ -1 & \Phi \end{array}\right) \nonumber \]

    Our diagonalization is therefore

    \[Q=\frac{1}{\sqrt{5}}\left(\begin{array}{rr} \Phi & -\phi \\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} \Phi & 0 \\ 0 & -\phi \end{array}\right)\left(\begin{array}{rr} 1 & \phi \\ -1 & \Phi \end{array}\right) \nonumber \]
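As a sanity check, a minimal sketch using floating-point values of \(\Phi\) and \(\phi\): multiplying the three factors back together recovers \(\mathrm{Q}\).

```python
import numpy as np

Phi = (1 + np.sqrt(5)) / 2  # the golden ratio
phi = (np.sqrt(5) - 1) / 2  # its reciprocal, Phi = 1/phi

S = np.array([[Phi, -phi],
              [1.0,  1.0]])
Lam = np.diag([Phi, -phi])
S_inv = np.array([[ 1.0, phi],
                  [-1.0, Phi]]) / np.sqrt(5)

# Multiplying the factors back together recovers the Fibonacci Q-matrix.
print(np.allclose(S @ Lam @ S_inv, [[1, 1], [1, 0]]))  # True
```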

    Raising to the \(n\)th power, we have

    \[\begin{aligned} \mathrm{Q}^{n} &=\frac{1}{\sqrt{5}}\left(\begin{array}{rr} \Phi & -\phi \\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} \Phi^{n} & 0 \\ 0 & (-\phi)^{n} \end{array}\right)\left(\begin{array}{cc} 1 & \phi \\ -1 & \Phi \end{array}\right) \\ &=\frac{1}{\sqrt{5}}\left(\begin{array}{rr} \Phi & -\phi \\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} \Phi^{n} & \Phi^{n-1} \\ -(-\phi)^{n} & -(-\phi)^{n-1} \end{array}\right) \\ &=\frac{1}{\sqrt{5}}\left(\begin{array}{cc} \Phi^{n+1}-(-\phi)^{n+1} & \Phi^{n}-(-\phi)^{n} \\ \Phi^{n}-(-\phi)^{n} & \Phi^{n-1}-(-\phi)^{n-1} \end{array}\right) . \end{aligned} \nonumber \]

In the second equality we used the identity \(\Phi=1 / \phi\). Comparing with \(\mathrm{Q}^{n}\) written in terms of the Fibonacci numbers, we have derived Binet’s formula

    \[F_{n}=\frac{\Phi^{n}-(-\phi)^{n}}{\sqrt{5}} \nonumber \]
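A short sketch verifying the formula numerically: the exact integer Fibonacci numbers are read off from \(\mathrm{Q}^{n}\), and Binet’s formula reproduces them once floating-point error is rounded away.

```python
import numpy as np

Phi = (1 + np.sqrt(5)) / 2
phi = (np.sqrt(5) - 1) / 2

def binet(n):
    """Binet's formula for the n-th Fibonacci number."""
    return (Phi**n - (-phi)**n) / np.sqrt(5)

Q = np.array([[1, 1],
              [1, 0]])
for n in range(1, 11):
    F_n = np.linalg.matrix_power(Q, n)[0, 1]  # F_n from Q^n
    assert round(binet(n)) == F_n
print("Binet's formula matches Q^n for n = 1..10")
```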


    This page titled 5.2: Matrix Diagonalization is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Jeffrey R. Chasnov via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.