7.6: Diagonalization of \(2\times 2\) matrices and Applications


    Let \(A = \begin{bmatrix} a&b\\ c&d \end{bmatrix} \in \mathbb{F}^{2\times 2}\), and recall that we can define a linear operator \(T \in \mathcal{L}(\mathbb{F}^{2})\) on \(\mathbb{F}^{2}\) by setting \(T(v) = A v\) for each \(v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \in \mathbb{F}^2\).

    One method for finding the eigen-information of \(T\) is to analyze the solutions of the matrix equation \(A v = \lambda v\) for \(\lambda \in \mathbb{F}\) and \(v \in \mathbb{F}^{2}\). In particular, using the definition of eigenvector and eigenvalue, \(v\) is an eigenvector associated to the eigenvalue \(\lambda\) if and only if \(A v = T(v) = \lambda v\).

    A simpler method involves the equivalent matrix equation \((A - \lambda I)v = 0\), where \(I\) denotes the \(2\times 2\) identity matrix. In particular, \(0 \neq v \in \mathbb{F}^{2}\) is an eigenvector for \(T\) associated to the eigenvalue \(\lambda \in \mathbb{F}\) if and only if the system of linear equations

    \begin{equation}
    \left.
    \begin{array}{rrrrr}
    (a - \lambda) v_{1} & + & b v_{2} & = & 0 \\
    c v_{1} & + & (d - \lambda) v_{2} & = & 0
    \end{array}
    \right\} \label{7.6.1}
    \end{equation}

    has a non-trivial solution. Moreover, System \ref{7.6.1} has a non-trivial solution if and only if the polynomial \(p(\lambda) = (a - \lambda)(d - \lambda) - bc\) evaluates to zero. (See Proof-writing Exercise 12 in Exercises for Chapter 7.)

    In other words, the eigenvalues for \(T\) are exactly the \(\lambda \in \mathbb{F}\) for which \(p(\lambda) = 0\), and the eigenvectors for \(T\) associated to an eigenvalue \(\lambda\) are exactly the non-zero vectors \(v = \begin{bmatrix} v_{1} \\ v_{2} \end{bmatrix} \in \mathbb{F}^2\) that satisfy System \ref{7.6.1}.
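    This recipe translates directly into a few lines of code. The following is a minimal sketch in Python (the function name eigen_2x2 and the use of the standard cmath module are choices made here, not part of the text): it finds the two roots of \(p(\lambda)\) with the quadratic formula and then reads a nonzero solution off the first equation of System \ref{7.6.1}.

    ```python
    import cmath

    def eigen_2x2(a, b, c, d):
        """Eigenvalues and eigenvectors of [[a, b], [c, d]] via the roots of p(lambda)."""
        # p(lambda) = (a - lambda)(d - lambda) - bc = lambda^2 - (a + d) lambda + (ad - bc)
        tr, det = a + d, a * d - b * c
        disc = cmath.sqrt(tr * tr - 4 * det)
        pairs = []
        for lam in ((tr + disc) / 2, (tr - disc) / 2):
            # Read off a nonzero solution of System (7.6.1):
            # (a - lam) v1 + b v2 = 0 is solved by (v1, v2) = (b, lam - a) when b != 0.
            if b != 0:
                v = (b, lam - a)
            elif c != 0:
                v = (lam - d, c)                      # use the second equation instead
            else:
                v = (1, 0) if lam == a else (0, 1)    # A is already diagonal
            pairs.append((lam, v))
        return pairs

    # The matrix from Example 1 below: for lambda = +i and -i, the computed
    # eigenvector satisfies v2 = (-2 - i) v1 and v2 = (-2 + i) v1, respectively.
    print(eigen_2x2(-2, -1, 5, 2))
    ```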

    Example \(\PageIndex{1}\)

    Let \(A = \begin{bmatrix} -2 & -1 \\ 5 & 2 \end{bmatrix}\). Then \(p(\lambda) = (-2 -\lambda)(2 - \lambda) - (-1)(5) = \lambda^{2} + 1\), which is equal to zero exactly when \(\lambda = \pm i\). Moreover, if \(\lambda = i\), then System \ref{7.6.1} becomes

    \[
    \left.
    \begin{array}{rrrrr}
    (-2 - i) v_{1} & - & v_{2} & = & 0 \\
    5 v_{1} & + & (2 - i) v_{2} & = & 0
    \end{array}
    \right\},
    \]

    which is satisfied by any vector \(v = \begin{bmatrix} v_1\\ v_2 \end{bmatrix}\in \mathbb{C}^2\) such that \(v_{2} = (-2 - i) v_{1}\). Similarly, if \(\lambda = -i\), then System \ref{7.6.1} becomes

    \[
    \left.
    \begin{array}{rrrrr}
    (-2 + i) v_{1} & - & v_{2} & = & 0 \\
    5 v_{1} & + & (2 + i) v_{2} & = & 0
    \end{array}
    \right\},
    \]

    which is satisfied by any vector \(v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \in \mathbb{C}^2\) such that \(v_{2} = (-2 + i) v_{1}\).

    It follows that, given \(A = \begin{bmatrix} -2 & -1 \\ 5 & 2 \end{bmatrix}\), the linear operator on \(\mathbb{C}^{2}\) defined by \(T(v) = A v\) has eigenvalues \(\lambda = \pm i\), with associated eigenvectors as described above.
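    As a numerical cross-check of this example, the short sketch below (assuming Python with numpy is available) computes the eigenvalues of \(A\) and verifies that a vector with \(v_{2} = (-2 - i) v_{1}\) is indeed mapped to \(i\) times itself.

    ```python
    import numpy as np

    A = np.array([[-2, -1],
                  [ 5,  2]], dtype=complex)

    # The two eigenvalues should be +i and -i (in some order).
    print(np.linalg.eigvals(A))

    # Take v1 = 1, so v2 = -2 - i; then A v should equal i v.
    v = np.array([1, -2 - 1j])
    print(np.allclose(A @ v, 1j * v))   # True
    ```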

    Example \(\PageIndex{2}\)

    Take the rotation \(R_\theta:\mathbb{R}^2 \to \mathbb{R}^2\) by an angle \(\theta \in [0,2\pi)\) given by the matrix

    \begin{equation*}
    R_\theta = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix}.
    \end{equation*}

    Then we obtain the eigenvalues by solving the polynomial equation

    \begin{equation*}
    \begin{split}
    p(\lambda) &= (\cos \theta -\lambda)^2 + \sin^2 \theta\\
    &= \lambda^2-2\lambda \cos \theta + 1 =0,
    \end{split}
    \end{equation*}

    where we have used the fact that \(\sin^2 \theta + \cos^2 \theta = 1\). Solving for \(\lambda\) in \(\mathbb{C}\), we obtain

    \begin{equation*}
    \lambda = \cos \theta \pm \sqrt{\cos^2 \theta -1} = \cos\theta \pm \sqrt{-\sin^2 \theta}
    = \cos \theta \pm i \sin \theta = e^{\pm i \theta}.
    \end{equation*}

    We see that, as an operator on the real vector space \(\mathbb{R}^2\), the operator \(R_\theta\) has eigenvalues only when \(\theta=0\) or \(\theta=\pi\). However, if we interpret the vector \(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{R}^2\) as a complex number \(z = x_1 + i x_2\), then \(z\) is an eigenvector if \(R_\theta:\mathbb{C}\to\mathbb{C}\) maps \(z \mapsto \lambda z = e^{\pm i \theta} z\). Moreover, from Section 2.3, we know that multiplication by \(e^{\pm i \theta}\) corresponds to rotation by the angle \(\pm\theta\).
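    The sketch below (again assuming numpy, with an arbitrarily chosen angle \(\theta = 0.7\)) checks both claims numerically: over \(\mathbb{C}\), the eigenvalues of \(R_\theta\) are \(e^{\pm i \theta}\), and applying \(R_\theta\) to a vector in \(\mathbb{R}^2\) agrees with multiplying the corresponding complex number \(z = x_1 + i x_2\) by \(e^{i\theta}\).

    ```python
    import numpy as np

    theta = 0.7                                  # any angle in [0, 2*pi)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # Over C, the eigenvalues are e^{-i theta} and e^{+i theta}.
    eigs = np.linalg.eigvals(R)
    print(np.allclose(sorted(eigs, key=lambda z: z.imag),
                      [np.exp(-1j * theta), np.exp(1j * theta)]))   # True

    # Rotating (x1, x2) by R_theta matches multiplying z = x1 + i x2 by e^{i theta}.
    x = np.array([2.0, -1.0])
    z = x[0] + 1j * x[1]
    w = np.exp(1j * theta) * z
    print(np.allclose(R @ x, [w.real, w.imag]))  # True
    ```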


    This page titled 7.6: Diagonalization of \(2\times 2\) matrices and Applications is shared under a not declared license and was authored, remixed, and/or curated by Isaiah Lankham, Bruno Nachtergaele, & Anne Schilling.
