
8.5: The Eigenvalue Problem- Examples


    We take a look back at our previous examples in light of the results of two previous sections, The Spectral Representation and The Partial Fraction Expansion of the Transfer Function. With respect to the rotation matrix

    \[B = \begin{pmatrix} {0}&{1}\\ {-1}&{0} \end{pmatrix} \nonumber\]

    we recall (see Cauchy's Theorem) that

    \[R(s) = \frac{1}{s^2+1} \begin{pmatrix} {s}&{-1}\\ {1}&{s} \end{pmatrix} \nonumber\]

    \[R(s) = \frac{1}{s-i} \begin{pmatrix} {1/2}&{-i/2}\\ {i/2}&{1/2} \end{pmatrix}+\frac{1}{s+i} \begin{pmatrix} {1/2}&{i/2}\\ {-i/2}&{1/2} \end{pmatrix} \nonumber\]

    \[R(s) = \frac{1}{s-\lambda_{1}} P_{1}+\frac{1}{s-\lambda_{2}} P_{2} \nonumber\]

    and so

    \[B = \lambda_{1}P_{1}+\lambda_{2}P_{2} = i \begin{pmatrix} {1/2}&{-i/2}\\ {i/2}&{1/2} \end{pmatrix}-i \begin{pmatrix} {1/2}&{i/2}\\ {-i/2}&{1/2} \end{pmatrix} \nonumber\]
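    As a quick numerical check, one may confirm that the two projections reassemble \(B\) and behave as projections should. The following is a minimal sketch in NumPy (rather than the Matlab used elsewhere in this text); the entries of \(B\), \(P_{1}\), and \(P_{2}\) are simply typed in from the expansion above.

```python
# A sketch (NumPy, not the text's Matlab) verifying B = lambda_1 P_1 + lambda_2 P_2.
import numpy as np

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# projections read off from the partial fraction expansion of R(s)
P1 = np.array([[0.5, -0.5j],
               [0.5j, 0.5]])
P2 = np.array([[0.5, 0.5j],
               [-0.5j, 0.5]])
lam1, lam2 = 1j, -1j

print(np.allclose(B, lam1 * P1 + lam2 * P2))  # True: spectral representation of B
print(np.allclose(P1 + P2, np.eye(2)))        # True: the projections sum to I
print(np.allclose(P1 @ P1, P1))               # True: P1 is idempotent
```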

    From \(m_{1} = m_{2} = 1\) it follows that \(\mathscr{R}(P_{1})\) and \(\mathscr{R}(P_{2})\) are actual (as opposed to generalized) eigenspaces. These column spaces are easily determined. In particular, \(\mathscr{R}(P_{1})\) is the span of

    \[e_{1} = \begin{pmatrix} {1}\\ {i} \end{pmatrix} \nonumber\]

    while \(\mathscr{R}(P_{2})\) is the span of

    \[e_{2} = \begin{pmatrix} {1}\\ {-i} \end{pmatrix} \nonumber\]
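    A short check (again a NumPy sketch; \(e_{1}\) and \(e_{2}\) are the vectors just displayed) confirms that these are genuine eigenvectors of \(B\) and that each column of \(P_{1}\) is a multiple of \(e_{1}\), so its column space is indeed the span of \(e_{1}\).

```python
# A sketch checking that e1 and e2 (as in the text) are eigenvectors of B
# and that the columns of P1 are multiples of e1, so R(P1) = span{e1}.
import numpy as np

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
e1 = np.array([1.0, 1j])
e2 = np.array([1.0, -1j])

print(np.allclose(B @ e1, 1j * e1))   # True: B e1 = i e1
print(np.allclose(B @ e2, -1j * e2))  # True: B e2 = -i e2

P1 = np.array([[0.5, -0.5j],
               [0.5j, 0.5]])
print(np.allclose(P1[:, 0], 0.5 * e1))    # True: first column is e1/2
print(np.allclose(P1[:, 1], -0.5j * e1))  # True: second column is -(i/2) e1
```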

    To recapitulate: from the partial fraction expansion one can read off the projections, and from the projections one can read off the eigenvectors. The reverse direction, producing projections from eigenvectors, is equally worthwhile. We laid the groundwork for this step in the discussion of Least Squares. In particular, the projection equation from Least Squares stipulates that

    \[\begin{array}{ccc} {P_{1} = e_{1}(e_{1}^{T}e_{1})^{-1}e_{1}^{T}}&{and}&{P_{2} = e_{2}(e_{2}^{T}e_{2})^{-1}e_{2}^{T}} \end{array} \nonumber\]

    As \(e_{1}^{T}e_{1} = e_{2}^{T}e_{2} = 0\) (indeed, \(e_{1}^{T}e_{1} = 1 + i^{2} = 0\)), these formulas cannot possibly be correct. Returning to the Least Squares discussion we realize that it was, perhaps implicitly, assumed that all quantities were real. At root is the notion of the length of a complex vector. It is not the square root of the sum of the squares of its components but rather the square root of the sum of the squares of the magnitudes of its components. That is, recalling that the magnitude of a complex quantity \(z\) is \(\sqrt{z\overline{z}}\),

    \[\begin{array}{ccc} {(||e_{1}||)^2 \ne e_{1}^{T}e_{1}}&{rather}&{(||e_{1}||)^2 = \overline{e_{1}}^{T}e_{1}} \end{array} \nonumber\]
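    The failure is easy to witness numerically. In this NumPy sketch (the array mirrors \(e_{1}\) above), the unconjugated product vanishes while the conjugated product returns the true squared length.

```python
# A sketch of why the unconjugated transpose misleads for complex vectors.
import numpy as np

e1 = np.array([1.0, 1j])
print(e1 @ e1)                   # 0j: sum of squares of the components
print(e1.conj() @ e1)            # (2+0j): sum of squared magnitudes, i.e. ||e1||^2
print(np.linalg.norm(e1) ** 2)   # approximately 2.0; norm conjugates, as it should
```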

    Yes, we have had this discussion before; recall the section on complex numbers, vectors, and matrices. The upshot is that, when dealing with complex vectors and matrices, one should conjugate before every transpose. Matlab (of course) does this automatically, i.e., the ' symbol conjugates and transposes simultaneously. We use \(x^H\) to denote the conjugate transpose, i.e.,

    \[x^H \equiv \overline{x}^{T} \nonumber\]

    All this suggests that the desired projections are more likely

    \[\begin{array}{ccc} {P_{1} = e_{1}(e_{1}^{H}e_{1})^{-1}e_{1}^{H}}&{and}&{P_{2} = e_{2}(e_{2}^{H}e_{2})^{-1}e_{2}^{H}} \end{array} \nonumber\]
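    A final sketch confirms this. Here NumPy's .conj().T plays the role of Matlab's ', and the helper proj is ours (not a library routine); the projections built from the eigenvectors agree with the \(P_{1}\) and \(P_{2}\) read off from \(R(s)\) at the start of this section.

```python
# A sketch building the projections from the eigenvectors via the conjugate transpose.
import numpy as np

e1 = np.array([[1.0], [1j]])     # column vectors
e2 = np.array([[1.0], [-1j]])

def proj(e):
    """Return e (e^H e)^{-1} e^H, the orthogonal projection onto span{e}."""
    eH = e.conj().T              # NumPy analogue of Matlab's e'
    return e @ np.linalg.inv(eH @ e) @ eH

# projections previously read off from the partial fraction expansion of R(s)
P1 = np.array([[0.5, -0.5j],
               [0.5j, 0.5]])
P2 = np.array([[0.5, 0.5j],
               [-0.5j, 0.5]])

print(np.allclose(proj(e1), P1))  # True
print(np.allclose(proj(e2), P2))  # True
```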


    This page titled 8.5: The Eigenvalue Problem- Examples is shared under a CC BY 1.0 license and was authored, remixed, and/or curated by Steve Cox via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.