
5: Eigenvalues and Eigenvectors


    Note \(\PageIndex{1}\)

    Solve the matrix equation \(Ax=\lambda x.\)

    This chapter constitutes the core of any first course on linear algebra: eigenvalues and eigenvectors play a crucial role in most real-world applications of the subject.
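    The defining equation \(Ax=\lambda x\) can also be checked numerically. The following sketch (not part of the text; the matrix is an arbitrary choice for illustration) uses NumPy to compute the eigenvalues and eigenvectors of a small matrix and verify that each pair satisfies the equation:

```python
import numpy as np

# An arbitrary symmetric 2x2 matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenpair satisfies the defining equation A x = lambda x.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)

# For this particular matrix the eigenvalues are 1 and 3.
print(np.sort(eigenvalues))
```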

    Example \(\PageIndex{1}\)

    In a population of rabbits,

    1. half of the newborn rabbits survive their first year;
    2. of those, half survive their second year;
    3. the maximum life span is three years;
    4. rabbits produce 0, 6, and 8 baby rabbits in their first, second, and third years, respectively.

    What is the asymptotic behavior of this system? What will the rabbit population look like in 100 years?


    Figure \(\PageIndex{1}\): Left: the population of rabbits in a given year. Right: the proportions of rabbits in that year. In the interactive version of this figure, you can choose any starting population you like and advance the simulation one year at a time; whatever values you start with, the ratios settle down to the same long-term proportions. This phenomenon turns out to be due to eigenvectors.
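    The rules of the example can be packaged into a single matrix equation \(x_{n+1}=Ax_n\), where the first row of \(A\) records the birth rates and the subdiagonal records the survival rates. A short numerical sketch (using NumPy; the starting population is an arbitrary choice, and this code is not part of the text) shows the ratios stabilizing:

```python
import numpy as np

# Leslie matrix for the rabbit population: the first row holds the
# birth rates (0, 6, 8 babies per rabbit in years 1, 2, 3), and the
# subdiagonal holds the survival rates (1/2 survive each transition).
A = np.array([[0.0, 6.0, 8.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0]])

# An arbitrary starting population (first-, second-, third-year rabbits).
v = np.array([10.0, 10.0, 10.0])

# Advance 100 years, renormalizing so we track proportions only.
for _ in range(100):
    v = A @ v
    v /= v.sum()

# The proportions converge to 16 : 4 : 1, which is an eigenvector of A
# with eigenvalue 2 (so the total population doubles each year).
print(v * 21)
```

One can check directly that \(A(16,4,1)^T = (32,8,2)^T = 2\,(16,4,1)^T\), which is why these particular ratios appear.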

    In Section 5.1, we will define eigenvalues and eigenvectors, and show how to find the eigenvectors associated with a given eigenvalue; in Section 5.2 we will learn to compute the eigenvalues themselves. In Section 5.3 we introduce the notion of similar matrices, and demonstrate that similar matrices do indeed behave similarly. In Section 5.4 we study matrices that are similar to diagonal matrices, and in Section 5.5 we study matrices that are similar to rotation-scaling matrices, thus gaining a solid geometric understanding of large classes of matrices. Finally, in Section 5.6 we present a common kind of application of eigenvalues and eigenvectors to real-world problems, including searching the Internet using Google’s PageRank algorithm.

    • 5.1: Eigenvalues and Eigenvectors
      In this section, we define eigenvalues and eigenvectors. These form the most important facet of the structure theory of square matrices. As such, eigenvalues and eigenvectors tend to play a key role in the real-life applications of linear algebra.
    • 5.2: The Characteristic Polynomial
      In Section 5.1 we discussed how to decide whether a given number λ is an eigenvalue of a matrix, and if so, how to find all of the associated eigenvectors. In this section, we will give a method for computing all of the eigenvalues of a matrix. This does not reduce to solving a system of linear equations: indeed, it requires solving a nonlinear equation in one variable, namely, finding the roots of the characteristic polynomial.
    • 5.3: Similarity
      In this section, we study in detail the situation when two matrices behave similarly with respect to different coordinate systems. In Section 5.4 and Section 5.5, we will show how to use eigenvalues and eigenvectors to find a simpler matrix that behaves like a given matrix.
    • 5.4: Diagonalization
      Diagonal matrices are the easiest kind of matrices to understand: they just scale the coordinate directions by their diagonal entries. This section is devoted to the question: “When is a matrix similar to a diagonal matrix?” We will see that the algebra and geometry of such a matrix are relatively easy to understand.
    • 5.5: Complex Eigenvalues
      An n×n matrix whose characteristic polynomial has n distinct real roots is diagonalizable: it is similar to a diagonal matrix, which is much simpler to analyze. The other possibility is that the characteristic polynomial has complex roots, and that is the focus of this section. It turns out that such a matrix is similar (in the 2×2 case) to a rotation-scaling matrix, which is also relatively easy to understand.
    • 5.6: Stochastic Matrices
      This section is devoted to one common kind of application of eigenvalues: the study of difference equations, in particular Markov chains. We will introduce stochastic matrices, which encode this type of difference equation, and will cover in detail the most famous example of a stochastic matrix: the Google Matrix.
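    As a small preview of the PageRank idea from Section 5.6, the steady state of a stochastic matrix can be found by repeated multiplication. The sketch below (using NumPy; the three-page "internet" and its link structure are invented purely for illustration) ranks pages by the steady-state eigenvector, the eigenvector with eigenvalue 1:

```python
import numpy as np

# A hypothetical three-page internet: entry (i, j) is the probability
# that a surfer on page j clicks through to page i. Each column sums
# to 1, so P is a (column-)stochastic matrix.
P = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

# Start the surfer on page 1 and let the chain run. The result
# converges to the steady state: the eigenvector of P with
# eigenvalue 1, scaled so that its entries sum to 1.
v = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    v = P @ v

# For this link structure the steady state is (4/9, 2/9, 3/9),
# so page 1 receives the highest rank.
print(v)
```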


    This page titled 5: Eigenvalues and Eigenvectors is shared under a GNU Free Documentation License 1.3 license and was authored, remixed, and/or curated by Dan Margalit & Joseph Rabinoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
