
Diagonalization



    Similar Matrices

    We have seen that matrix multiplication is not commutative, so if A is an \(n \times n\) matrix, then

            \( P^{-1}AP \)

    is not necessarily equal to A.  For different nonsingular matrices P, the expression above represents different matrices.  However, all such matrices share some important properties, as we shall soon see.

    Definition

    Let A and B be \(n \times n\) matrices.  Then A is similar to B if there is a nonsingular matrix P with

            \( B = P^{-1}AP \)

     

    Example

    Consider the matrices

           \( A = \begin{pmatrix} 2 & -1 \\ 1 & 5 \end{pmatrix} \)     \( P = \begin{pmatrix} 3 & 4 \\ 4 & 5 \end{pmatrix} \)

    Then 

    \( B = P^{-1}AP = \begin{pmatrix} -5 & 4 \\ 4 & -3 \end{pmatrix} \begin{pmatrix} 2 & -1 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} 3 & 4 \\ 4 & 5 \end{pmatrix} = \begin{pmatrix} 82 & 101 \\ -61 & -75 \end{pmatrix} \)

    is similar to A.
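    This computation is easy to check by machine.  A minimal sketch using NumPy (assuming it is available):

    ```python
    import numpy as np

    # The matrices from the example above
    A = np.array([[2.0, -1.0], [1.0, 5.0]])
    P = np.array([[3.0, 4.0], [4.0, 5.0]])

    # B = P^{-1} A P is similar to A by construction
    B = np.linalg.inv(P) @ A @ P
    print(B)  # matches the product worked out above, up to round-off
    ```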

     

    Notice the following three facts:

    1. A is similar to A.
       
    2. If A is similar to B then B is similar to A.
       
    3. If A is similar to B and B is similar to C then A is similar to C.

     

    We call a relation with these three properties an equivalence relation.  We will prove the third property.

    If A is similar to B and B is similar to C, then there are nonsingular matrices P and Q with

            \( B = P^{-1}AP \)        and        \( C = Q^{-1}BQ \)

    We need to find a matrix R with

            \( C = R^{-1}AR \)

    We have

            \( C = Q^{-1}BQ = Q^{-1}(P^{-1}AP)Q = (Q^{-1}P^{-1})A(PQ) = (PQ)^{-1}A(PQ) = R^{-1}AR \)

    where R  =  PQ.


    There is a wonderful fact that we state below.

     

    Theorem

    If A and B are similar matrices, then they have the same eigenvalues.

     

    Proof

    It is enough to show that they have the same characteristic polynomial.  Since determinants multiply, we have

            \( \det(\lambda I - B) = \det(\lambda I - P^{-1}AP) = \det(P^{-1}\lambda I P - P^{-1}AP) \)

            \( = \det\left(P^{-1}(\lambda I - A)P\right) = \det(P^{-1})\det(\lambda I - A)\det(P) = \det(\lambda I - A) \)
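    The theorem can be spot-checked numerically: conjugating a random matrix by a random nonsingular P leaves the eigenvalues unchanged up to round-off.  A minimal sketch with NumPy (assuming it is available):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    A = rng.standard_normal((4, 4))
    P = rng.standard_normal((4, 4))  # a random P is invertible with probability 1

    B = np.linalg.inv(P) @ A @ P     # B is similar to A

    # Similar matrices share eigenvalues (possibly listed in a different order)
    eig_A = np.sort_complex(np.linalg.eigvals(A))
    eig_B = np.sort_complex(np.linalg.eigvals(B))
    assert np.allclose(eig_A, eig_B)
    ```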


    Diagonalized Matrices

    The easiest kind of matrices to deal with are diagonal matrices.  Their determinants are simple to compute, their eigenvalues are just the diagonal entries, and the standard basis vectors are eigenvectors.  Even the inverse is a piece of cake (if the matrix is nonsingular).  Although most matrices are not diagonal, many are diagonalizable; that is, they are similar to a diagonal matrix.

    Definition

    A matrix A is diagonalizable if A is similar to a diagonal matrix D:

            \( D = P^{-1}AP \)


     The following theorem tells us when a matrix is diagonalizable and, if it is, how to find the diagonal matrix D similar to it.

    Theorem

    Let A be an \(n \times n\) matrix.  Then A is diagonalizable if and only if A has n linearly independent eigenvectors.  If so, then

            \( D = P^{-1}AP \)

    If \( \{v_1, \dots, v_n\} \) are the eigenvectors of A and \( \{\lambda_1, \dots, \lambda_n\} \) are the corresponding eigenvalues, then \(v_j\) is the jth column of P

    and

            \( [D]_{jj} = \lambda_j \)

     

    Example

    In the last discussion, we saw that the matrix

         \( A = \begin{pmatrix} 1 & 3 \\ 2 & 2 \end{pmatrix} \) 

    has -1 and 4 as eigenvalues with associated eigenvectors

         \( v_{-1} = \begin{pmatrix} 3 \\ -2 \end{pmatrix} \)    \( v_{4} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \) 

    Hence

            \(P = \begin{pmatrix} 3 & 1 \\ -2 & 1 \end{pmatrix} \)     \(   D = \begin{pmatrix} -1 & 0 \\ 0 & 4 \end{pmatrix} \) 

    You can verify that

            \( D = P^{-1}AP \)
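    This verification is quick to do numerically as well; a sketch with NumPy (assuming it is available):

    ```python
    import numpy as np

    A = np.array([[1.0, 3.0], [2.0, 2.0]])
    P = np.array([[3.0, 1.0], [-2.0, 1.0]])  # columns are the eigenvectors above

    D = np.linalg.inv(P) @ A @ P
    # D is diagonal, with the eigenvalues -1 and 4 on the diagonal
    assert np.allclose(D, np.diag([-1.0, 4.0]))
    ```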


    Proof of the Theorem

    If 

            \( D = P^{-1}AP \)

    for some diagonal matrix D and nonsingular matrix P, then

            AP  =  PD

    Let \(v_j\) be the jth column of P and \([D]_{jj} = \lambda_j\).  Then the jth column of AP is \(Av_j\) and the jth column of PD is \(\lambda_j v_j\).  Hence

            \( Av_j = \lambda_j v_j \)

    so that \(v_j\) is an eigenvector of A with corresponding eigenvalue \(\lambda_j\).  Since P is nonsingular, rank(P)  =  n, so the columns of P (the eigenvectors of A) are linearly independent.

    Next suppose that A has n linearly independent eigenvectors.  Form D and P as above.  Then since 

            \( Av_j = \lambda_j v_j \)

    the jth column of AP equals the jth column of PD, hence AP  =  PD.  Since the columns of P are linearly independent, P is nonsingular, so that

            \( D = P^{-1}AP \)


    Theorem

    Let A be an \(n \times n\) matrix with n real and distinct eigenvalues.  Then A is diagonalizable.

     

    Proof

    Let 

            \( \{\lambda_1, \dots, \lambda_n\} \)    and    \( \{v_1, \dots, v_n\} \)

    be the eigenvalues and corresponding eigenvectors of A.  By the previous theorem, it is enough to show that the eigenvectors are linearly independent.  Suppose, toward a contradiction, that they are not.  Then some eigenvector can be written as a linear combination of the others using as few vectors as possible; relabeling if necessary, say

            \( v_1 = c_2 v_2 + \cdots + c_k v_k \)         (1)

    where \( \{v_2, \dots, v_k\} \) is linearly independent and each \(c_i\) is nonzero.  We can multiply both sides of (1) by A to get 

            \( \lambda_1 v_1 = c_2 A v_2 + \cdots + c_k A v_k = c_2 \lambda_2 v_2 + \cdots + c_k \lambda_k v_k \)         (2)

    Multiply (1) by \(\lambda_1\) and subtract it from (2) to get

            \( c_2(\lambda_2 - \lambda_1) v_2 + \cdots + c_k(\lambda_k - \lambda_1) v_k = 0 \)

    Since \( \{v_2, \dots, v_k\} \) is linearly independent, each coefficient \( c_i(\lambda_i - \lambda_1) \) must be zero.  The eigenvalues are distinct, so \( \lambda_i - \lambda_1 \neq 0 \), forcing every \(c_i\) to be zero.  But then (1) says \(v_1 = 0\), which is impossible since eigenvectors are nonzero.  This contradiction shows that the eigenvectors are linearly independent, and the result follows.

     

    Note that the converse certainly does not hold.  For example, the identity matrix I has 1 as its only eigenvalue, yet it is diagonalizable (it is already diagonal).
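    The theorem is easy to illustrate numerically.  A sketch with NumPy (assuming it is available), using a triangular matrix whose distinct diagonal entries are its eigenvalues:

    ```python
    import numpy as np

    # Eigenvalues 3 and 5 are distinct, so the theorem guarantees
    # that A is diagonalizable.
    A = np.array([[3.0, 1.0], [0.0, 5.0]])

    eigenvalues, P = np.linalg.eig(A)     # columns of P are eigenvectors
    assert np.linalg.matrix_rank(P) == 2  # the eigenvectors are independent

    D = np.linalg.inv(P) @ A @ P
    assert np.allclose(D, np.diag(eigenvalues))
    ```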


     

    Steps to Diagonalize a Matrix

    1. Find the eigenvalues by finding the roots of the characteristic polynomial.
       
    2. Find the eigenvectors by finding the null space of \( A - \lambda_i I \) for each eigenvalue.
    3. If the number of linearly independent eigenvectors is n, then let P be the matrix whose columns are the eigenvectors and let D be the diagonal matrix with \( [D]_{jj} = \lambda_j \).
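    The steps above can be sketched as a small function.  Note that np.linalg.eig computes eigenvalues and eigenvectors directly rather than via the characteristic polynomial, but the packaging into P and D mirrors step 3; the rank tolerance is an assumption for numerically detecting a matrix with too few independent eigenvectors:

    ```python
    import numpy as np

    def diagonalize(A, tol=1e-10):
        """Return (P, D) with D = P^{-1} A P diagonal; raise if A is not diagonalizable."""
        n = A.shape[0]
        eigenvalues, P = np.linalg.eig(A)  # steps 1-2: eigenvalues and eigenvectors
        # step 3: the n eigenvector columns must be linearly independent
        if np.linalg.matrix_rank(P, tol=tol) < n:
            raise ValueError("A does not have n linearly independent eigenvectors")
        return P, np.diag(eigenvalues)

    A = np.array([[1.0, 3.0], [2.0, 2.0]])
    P, D = diagonalize(A)
    assert np.allclose(np.linalg.inv(P) @ A @ P, D)
    ```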

    Example

    Diagonalize the matrix

           \( A = \begin{pmatrix} 3 & 1 & -1 \\ 0 & 1 & 0 \\ 2 & 1 & 0 \end{pmatrix} \) 

    Solution

    We find the characteristic polynomial

           \( \det(\lambda I - A) = \det \begin{pmatrix} \lambda - 3 & -1 & 1 \\ 0 & \lambda - 1 & 0 \\ -2 & -1 & \lambda \end{pmatrix} \)    

           \( = (\lambda - 3)( \lambda - 1)\lambda + 2(\lambda - 1) = (\lambda - 1)(\lambda^2 - 3\lambda + 2) = (\lambda - 1)^2(\lambda - 2)   \) 

    The roots are 1 (with multiplicity 2) and 2 (with multiplicity 1).

    Now we find the eigenspaces associated with the eigenvalues.  We have

     

           \( 1I - A = \begin{pmatrix} -2 & -1 & 1 \\ 0 & 0 & 0 \\ -2 & -1 & 1 \end{pmatrix}, \qquad \operatorname{rref}(1I - A) =  \begin{pmatrix} 1 & \frac{1}{2} & -\frac{1}{2} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\) 

    A basis for the null space is 

           \( V_1 = \{\begin{pmatrix} -1 \\ 2\\  0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0\\  2 \end{pmatrix}\} \) 

    Next we find a basis for the eigenspace associated with the eigenvalue 2.  We have

           \( 2I - A = \begin{pmatrix} -1 & -1 & 1 \\ 0 & 1 & 0 \\ -2 & -1 & 2 \end{pmatrix}, \qquad \operatorname{rref}(2I - A) =  \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}\) 

    A basis for the null space is 

           \( V_2 = \{\begin{pmatrix} 1 \\ 0\\  1 \end{pmatrix} \} \) 

    Now put this all together to get

            \(P = \begin{pmatrix} -1 & 1 & 1 \\ 2 & 0 & 0 \\ 0 & 2 & 1 \end{pmatrix} \)     \(   D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \)        
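    As a final check, a sketch with NumPy (assuming it is available) confirming that \( P^{-1}AP = D \) for this example:

    ```python
    import numpy as np

    A = np.array([[3.0, 1.0, -1.0],
                  [0.0, 1.0,  0.0],
                  [2.0, 1.0,  0.0]])
    P = np.array([[-1.0, 1.0, 1.0],
                  [ 2.0, 0.0, 0.0],
                  [ 0.0, 2.0, 1.0]])  # columns are the eigenspace basis vectors

    D = np.linalg.inv(P) @ A @ P
    assert np.allclose(D, np.diag([1.0, 1.0, 2.0]))
    ```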

     




     

     

    Diagonalization is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
