8.5: The Eigenvalue Problem - Examples
We take a look back at our previous examples in light of the results of the two previous sections, The Spectral Representation and The Partial Fraction Expansion of the Transfer Function. With respect to the rotation matrix
\[B = \begin{pmatrix} {0}&{1}\\ {-1}&{0} \end{pmatrix} \nonumber\]
we recall (see Cauchy's Theorem) that
\[R(s) = \frac{1}{s^2+1} \begin{pmatrix} {s}&{1}\\ {-1}&{s} \end{pmatrix} \nonumber\]
\[R(s) = \frac{1}{s-i} \begin{pmatrix} {1/2}&{-i/2}\\ {i/2}&{1/2} \end{pmatrix}+\frac{1}{s+i} \begin{pmatrix} {1/2}&{i/2}\\ {-i/2}&{1/2} \end{pmatrix} \nonumber\]
\[R(s) = \frac{1}{s-\lambda_{1}} P_{1}+\frac{1}{s-\lambda_{2}} P_{2} \nonumber\]
and so
\[B = \lambda_{1}P_{1}+\lambda_{2}P_{2} = i \begin{pmatrix} {1/2}&{-i/2}\\ {i/2}&{1/2} \end{pmatrix}-i \begin{pmatrix} {1/2}&{i/2}\\ {-i/2}&{1/2} \end{pmatrix} \nonumber\]
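These identities are easy to check numerically. Here is a minimal sketch in Python with NumPy (the text itself works in Matlab, so the language choice is ours), verifying the partial fraction expansion of \(R(s)\) at an arbitrary test point, assuming the convention \(R(s) = (sI-B)^{-1}\), as well as the spectral representation of \(B\):

```python
import numpy as np

B = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Projections read off from the partial fraction expansion
P1 = np.array([[0.5, -0.5j], [0.5j, 0.5]])
P2 = np.array([[0.5, 0.5j], [-0.5j, 0.5]])

# R(s) = (sI - B)^{-1} agrees with P1/(s - i) + P2/(s + i) at a test point
s = 2.0 + 3.0j
R_direct = np.linalg.inv(s * np.eye(2) - B)
R_partial = P1 / (s - 1j) + P2 / (s + 1j)
assert np.allclose(R_direct, R_partial)

# Spectral representation: B = i*P1 - i*P2
assert np.allclose(B, 1j * P1 - 1j * P2)

# The P_j are complementary projections
assert np.allclose(P1 @ P1, P1)
assert np.allclose(P1 @ P2, np.zeros((2, 2)))
assert np.allclose(P1 + P2, np.eye(2))
```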
From \(m_{1} = m_{2} = 1\) it follows that \(\mathscr{R}(P_{1})\) and \(\mathscr{R}(P_{2})\) are actual (as opposed to generalized) eigenspaces. These column spaces are easily determined. In particular, \(\mathscr{R}(P_{1})\) is the span of
\[e_{1} = \begin{pmatrix} {1}\\ {i} \end{pmatrix} \nonumber\]
while \(\mathscr{R}(P_{2})\) is the span of
\[e_{2} = \begin{pmatrix} {1}\\ {-i} \end{pmatrix} \nonumber\]
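A quick numerical confirmation, again sketched with NumPy (an assumption of this example), that \(e_1\) and \(e_2\) are genuine eigenvectors and that the columns of \(P_1\) are indeed multiples of \(e_1\):

```python
import numpy as np

B = np.array([[0.0, 1.0], [-1.0, 0.0]])
e1 = np.array([1.0, 1.0j])
e2 = np.array([1.0, -1.0j])

# e1 and e2 are actual eigenvectors: B e = lambda e
assert np.allclose(B @ e1, 1j * e1)
assert np.allclose(B @ e2, -1j * e2)

# Each column of P1 is a multiple of e1, so R(P1) = span{e1}
P1 = np.array([[0.5, -0.5j], [0.5j, 0.5]])
assert np.allclose(P1[:, 0], 0.5 * e1)
assert np.allclose(P1[:, 1], -0.5j * e1)
```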
To recapitulate: from the partial fraction expansion one can read off the projections, and from the projections one can read off the eigenvectors. The reverse direction, producing projections from eigenvectors, is equally worthwhile. We laid the groundwork for this step in the discussion of Least Squares. In particular, the Least Squares projection formula stipulates that
\[\begin{array}{ccc} {P_{1} = e_{1}(e_{1}^{T}e_{1})^{-1}e_{1}^{T}}&{and}&{P_{2} = e_{2}(e_{2}^{T}e_{2})^{-1}e_{2}^{T}} \end{array} \nonumber\]
As \(e_{1}^{T}e_{1} = e_{2}^{T}e_{2} = 0\), these formulas cannot possibly be correct. Returning to the Least Squares discussion, we realize that it was, perhaps implicitly, assumed that all quantities were real. At root is the notion of the length of a complex vector. It is not the square root of the sum of the squares of its components but rather the square root of the sum of the squares of the magnitudes of its components. That is, recalling that the magnitude of a complex quantity \(z\) is \(\sqrt{z\overline{z}}\),
\[\begin{array}{ccc} {(||e_{1}||)^2 \ne e_{1}^{T}e_{1}}&{rather}&{(||e_{1}||)^2 = \overline{e_{1}}^{T}e_{1}} \end{array} \nonumber\]
Yes, we have had this discussion before; recall the section on complex numbers, vectors, and matrices. The upshot is that, when dealing with complex vectors and matrices, one should conjugate before every transpose. Matlab (of course) does this automatically: the ' symbol conjugates and transposes simultaneously. We use \(x^H\) to denote the conjugate transpose, i.e.,
\[x^H \equiv \overline{x}^{T} \nonumber\]
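A small illustration of the distinction, sketched in NumPy (where `.conj().T` plays the role of Matlab's ' on complex arrays):

```python
import numpy as np

e1 = np.array([[1.0], [1.0j]])  # e1 as a column vector

# The naive sum of squares of the components vanishes although e1 != 0
assert abs((e1.T @ e1).item()) < 1e-12

# The conjugate-transpose inner product gives the true squared length, 2
assert np.isclose((e1.conj().T @ e1).item(), 2.0)
```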
All this suggests that the desired projections are more likely
\[\begin{array}{ccc} {P_{1} = e_{1}(e_{1}^{H}e_{1})^{-1}e_{1}^{H}}&{and}&{P_{2} = e_{2}(e_{2}^{H}e_{2})^{-1}e_{2}^{H}} \end{array} \nonumber\]
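These corrected formulas do reproduce the projections read off from the partial fraction expansion, as the following NumPy sketch confirms (the helper name `projector` is ours, not the text's):

```python
import numpy as np

e1 = np.array([[1.0], [1.0j]])
e2 = np.array([[1.0], [-1.0j]])

def projector(e):
    # P = e (e^H e)^{-1} e^H, with H denoting the conjugate transpose
    return (e @ e.conj().T) / (e.conj().T @ e).item()

P1 = projector(e1)
P2 = projector(e2)

# Agreement with the projections from the partial fraction expansion
assert np.allclose(P1, np.array([[0.5, -0.5j], [0.5j, 0.5]]))
assert np.allclose(P2, np.array([[0.5, 0.5j], [-0.5j, 0.5]]))
```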