
11.1: A- Jacobians, Inverses of Matrices, and Eigenvalues


    In this appendix we collect some results on Jacobians, and on the inverses and eigenvalues of \(2 \times 2\) matrices, that are used repeatedly in the material.

    First, we consider the Taylor expansion of a vector-valued function of two variables, denoted as follows:

    \[H(x,y) = \begin{pmatrix} {f(x, y)}\\ {g(x, y)} \end{pmatrix}, \quad (x, y) \in \mathbb{R}^2. \label{A.1}\]

    More precisely, we will need to Taylor expand such functions through first order, with a remainder of second order:

    \[H(x_{0} +h, y_{0} +k) = H(x_{0}, y_{0})+DH(x_{0}, y_{0}) \begin{pmatrix} {h}\\ {k} \end{pmatrix}+\mathcal{O}(2). \label{A.2}\]

    The Taylor expansion of a scalar-valued function of one variable should be familiar to most students at this level. There may be less familiarity with the Taylor expansion of a vector-valued function of a vector variable. However, to compute it we just Taylor expand each component of the function (each of which is a scalar-valued function of a vector variable) in each variable, holding the other variable fixed for the expansion in that particular variable, and then gather the results for each component into matrix form.

    Carrying this procedure out for the \(f (x, y)\) component of Equation \ref{A.1} gives:

    \[ \begin{align} f(x_{0}+h, y_{0}+k) &= f(x_{0}, y_{0}+k)+ \frac{\partial f }{\partial x} (x_{0}, y_{0}+k)h+\mathcal{O}(h^2) \\[4pt] &= f(x_{0}, y_{0})+\frac{\partial f}{\partial y}(x_{0}, y_{0})k+\mathcal{O}(k^2)+\frac{\partial f}{\partial x}(x_{0}, y_{0})h + \mathcal{O}(hk) + \mathcal{O}(h^2). \label{A.3} \end{align}\]
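
    As a quick concrete check (with a function chosen here purely for illustration), take \(f(x, y) = x^{2}y\). Expanding exactly gives

    \[f(x_{0}+h, y_{0}+k) = (x_{0}+h)^{2}(y_{0}+k) = x_{0}^{2}y_{0} + 2x_{0}y_{0}h + x_{0}^{2}k + \underbrace{y_{0}h^{2} + 2x_{0}hk + h^{2}k}_{\mathcal{O}(2)},\]

    and the coefficients of \(h\) and \(k\) are indeed \(\frac{\partial f}{\partial x}(x_{0}, y_{0}) = 2x_{0}y_{0}\) and \(\frac{\partial f}{\partial y}(x_{0}, y_{0}) = x_{0}^{2}\), in agreement with Equation \ref{A.3}.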

    The \(\mathcal{O}(hk)\) term in Equation \ref{A.3} arises from expanding \(\frac{\partial f}{\partial x}(x_{0}, y_{0}+k)\) in \(k\) about \(y_{0}\). The same procedure can be applied to \(g(x, y)\). Recombining the terms back into the vector expression for Equation \ref{A.1} gives:

    \[H(x_{0}+h, y_{0}+k) = \begin{pmatrix} {f(x_{0}, y_{0})}\\ {g(x_{0}, y_{0})} \end{pmatrix} +\begin{pmatrix} {\frac{\partial f}{\partial x} (x_{0}, y_{0})}&{\frac{\partial f}{\partial y} (x_{0}, y_{0})}\\ {\frac{\partial g}{\partial x} (x_{0}, y_{0})}&{\frac{\partial g}{\partial y} (x_{0}, y_{0})} \end{pmatrix} \begin{pmatrix} {h}\\ {k} \end{pmatrix} + \mathcal{O}(2). \label{A.4}\]

    Hence, the Jacobian of \(H\) at \((x_{0}, y_{0})\), denoted \(DH(x_{0}, y_{0})\) in Equation \ref{A.2}, is:

    \[\begin{pmatrix} {\frac{\partial f}{\partial x} (x_{0}, y_{0})}&{\frac{\partial f}{\partial y} (x_{0}, y_{0})}\\ {\frac{\partial g}{\partial x} (x_{0}, y_{0})}&{\frac{\partial g}{\partial y} (x_{0}, y_{0})} \end{pmatrix}, \label{A.5}\]

    which is a \(2 \times 2\) matrix of real numbers.
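
    As a concrete illustration (with a function chosen here purely for the example), take

    \[H(x, y) = \begin{pmatrix} {x^{2}y}\\ {x+y^{2}} \end{pmatrix}.\]

    Then

    \[DH(x, y) = \begin{pmatrix} {2xy}&{x^{2}}\\ {1}&{2y} \end{pmatrix}, \quad \text{so that, for instance,} \quad DH(1, 1) = \begin{pmatrix} {2}&{1}\\ {1}&{2} \end{pmatrix}.\]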

    We will need to compute the inverses of such matrices, as well as their eigenvalues.

    We denote a general \(2 \times 2\) matrix of real numbers by:

    \[A = \begin{pmatrix} {a}&{b}\\ {c}&{d} \end{pmatrix}, \quad a, b, c, d \in \mathbb{R}. \label{A.6}\]

    It is easy to verify that, provided \(ad - bc \neq 0\), the inverse of \(A\) is given by:

    \[A^{-1} = \frac{1}{ad-bc} \begin{pmatrix} {d}&{-b}\\ {-c}&{a} \end{pmatrix}. \label{A.7}\]
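
    For instance, taking \(A = \begin{pmatrix} {2}&{1}\\ {1}&{2} \end{pmatrix}\) (a matrix chosen here purely for illustration), we have \(ad - bc = (2)(2) - (1)(1) = 3\), so Equation \ref{A.7} gives

    \[A^{-1} = \frac{1}{3} \begin{pmatrix} {2}&{-1}\\ {-1}&{2} \end{pmatrix},\]

    and a direct multiplication confirms that

    \[AA^{-1} = \frac{1}{3} \begin{pmatrix} {4-1}&{-2+2}\\ {2-2}&{-1+4} \end{pmatrix} = \begin{pmatrix} {1}&{0}\\ {0}&{1} \end{pmatrix}.\]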

    Let \(\mathbb{I}\) denote the \(2 \times 2\) identity matrix. Then the eigenvalues of \(A\) are the solutions of the characteristic equation:

    \[\det (A - \lambda \mathbb{I}) = 0, \label{A.8}\]

    where \(\det\) denotes the determinant of the matrix. This is a quadratic equation in \(\lambda\), which has two solutions:

    \[\lambda_{1,2} = \frac{\operatorname{tr} A}{2} \pm \frac{1}{2} \sqrt{(\operatorname{tr} A)^2 - 4\det A}, \label{A.9}\]

    where we have used the notation:

    \(\operatorname{tr} A \equiv \text{trace of } A = a + d\), \(\det A \equiv \text{determinant of } A = ad - bc\).
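
    For instance, for \(A = \begin{pmatrix} {2}&{1}\\ {1}&{2} \end{pmatrix}\) (again chosen purely for illustration), \(\operatorname{tr} A = 4\) and \(\det A = 3\), so Equation \ref{A.9} gives

    \[\lambda_{1,2} = \frac{4}{2} \pm \frac{1}{2} \sqrt{16 - 12} = 2 \pm 1,\]

    that is, \(\lambda_{1} = 3\) and \(\lambda_{2} = 1\). When \((\operatorname{tr} A)^{2} - 4\det A < 0\), the same formula yields a complex conjugate pair of eigenvalues.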


    This page titled 11.1: A- Jacobians, Inverses of Matrices, and Eigenvalues is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Stephen Wiggins via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.