
6.3: Matrix Formulation


    We have investigated several linear systems in the plane and in the next chapter we will use some of these ideas to investigate nonlinear systems. We need a deeper insight into the solutions of planar systems. So, in this section we will recast the first order linear systems into matrix form. This will lead to a better understanding of first order systems and allow for extensions to higher dimensions and the solution of nonhomogeneous equations later in this chapter.

    We start with the usual homogeneous system in Equation 6.1.9. Let the unknowns be represented by the vector

    \[\mathbf{x}(t)=\left(\begin{array}{l} x(t) \\ y(t) \end{array}\right)\nonumber \]

    Then we have that

    \[\mathbf{x}^{\prime}=\left(\begin{array}{l} x^{\prime} \\ y^{\prime} \end{array}\right)=\left(\begin{array}{l} a x+b y \\ c x+d y \end{array}\right)=\left(\begin{array}{ll} a & b \\ c & d \end{array}\right)\left(\begin{array}{l} x \\ y \end{array}\right) \equiv A \mathbf{x}\nonumber \]

    Here we have introduced the coefficient matrix \(A\). This is a first order vector differential equation,

    \[\mathbf{x}^{\prime}=A \mathbf{x}\nonumber \]

    Formally, we can write the solution as

    \[\mathbf{x}=e^{A t} \mathbf{x}_{0}.\nonumber \]

    You can verify that this is a solution by simply differentiating,

    \[\dfrac{d \mathbf{x}}{d t}=\dfrac{d}{d t}\left(e^{A t}\right) \mathbf{x}_{0}=A e^{A t} \mathbf{x}_{0}=A \mathbf{x}\nonumber \]
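    This differentiation step can also be checked numerically. The sketch below (the matrix \(A\) and initial vector are arbitrary illustrative choices, and `scipy.linalg.expm` stands in for the matrix exponential discussed next) compares a centered finite difference of \(\mathbf{x}(t)=e^{At}\mathbf{x}_0\) against \(A\mathbf{x}(t)\):

    ```python
    # Sanity check that x(t) = e^{At} x0 satisfies x' = A x, by comparing
    # a centered finite difference of x(t) with A x(t).  The matrix A and
    # initial vector x0 below are arbitrary example values.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # any constant coefficient matrix
    x0 = np.array([1.0, -1.0])     # initial condition x(0)

    def x(t):
        """Candidate solution x(t) = e^{At} x0."""
        return expm(A * t) @ x0

    t, h = 0.7, 1e-6
    dxdt = (x(t + h) - x(t - h)) / (2 * h)   # centered finite difference
    print(np.allclose(dxdt, A @ x(t), atol=1e-5))  # True
    ```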

    However, there remains the question, "What does it mean to exponentiate a matrix?" The exponential of a matrix is defined using the Maclaurin series expansion

    \[e^{x}=\sum_{k=0}^{\infty} \dfrac{x^{k}}{k !}=1+x+\dfrac{x^{2}}{2 !}+\dfrac{x^{3}}{3 !}+\cdots \nonumber \]

    So, we define

    \[e^{A}=\sum_{k=0}^{\infty} \dfrac{1}{k !} A^{k}=I+A+\dfrac{A^{2}}{2 !}+\dfrac{A^{3}}{3 !}+\cdots \nonumber \]

    In general, it is difficult to sum this series, but it is doable for some simple examples, such as diagonal matrices.
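    The series definition can be tried directly on a computer. A minimal sketch, using only the partial sums \(I + A + A^2/2! + \cdots\) up to \(N\) terms (the function name and truncation level are illustrative choices):

    ```python
    # Approximate e^A by the first N terms of its Maclaurin series.
    # For small matrices the factorials make the partial sums converge fast.
    import numpy as np

    def expm_series(A, N=30):
        """Partial sum I + A + A^2/2! + ... + A^(N-1)/(N-1)!."""
        term = np.eye(len(A))    # k = 0 term: the identity I
        total = term.copy()
        for k in range(1, N):
            term = term @ A / k  # builds A^k / k! incrementally
            total += term
        return total

    A = np.diag([1.0, 2.0])
    print(np.allclose(expm_series(A), np.diag([np.e, np.e**2])))  # True
    ```

    For a diagonal matrix every power is diagonal, so the partial sums converge to the scalar exponentials of the diagonal entries, matching the example that follows.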

    Example \(\PageIndex{1}\)

    Evaluate \(e^{tA}\) for \(A=\left(\begin{array}{ll}
    1 & 0 \\
    0 & 2
    \end{array}\right)\)

    \[ \begin{aligned} e^{t A} &=I+t A+\dfrac{t^{2}}{2 !} A^{2}+\dfrac{t^{3}}{3 !} A^{3}+\cdots \\ &=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)+t\left(\begin{array}{ll} 1 & 0 \\ 0 & 2 \end{array}\right)+\dfrac{t^{2}}{2 !}\left(\begin{array}{ll} 1 & 0 \\ 0 & 2 \end{array}\right)^{2}+\dfrac{t^{3}}{3 !}\left(\begin{array}{ll} 1 & 0 \\ 0 & 2 \end{array}\right)^{3}+\cdots \\ &=\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)+t\left(\begin{array}{ll} 1 & 0 \\ 0 & 2 \end{array}\right)+\dfrac{t^{2}}{2 !}\left(\begin{array}{cc} 1 & 0 \\ 0 & 4 \end{array}\right)+\dfrac{t^{3}}{3 !}\left(\begin{array}{ll} 1 & 0 \\ 0 & 8 \end{array}\right)+\cdots \\ &=\left(\begin{array}{cc} 1+t+\dfrac{t^{2}}{2 !}+\dfrac{t^{3}}{3 !}+\cdots & 0 \\ 0 & 1+2 t+\dfrac{(2 t)^{2}}{2 !}+\dfrac{(2 t)^{3}}{3 !}+\cdots \end{array}\right) \\ &=\left(\begin{array}{cc} e^{t} & 0 \\ 0 & e^{2 t} \end{array}\right) \end{aligned}\label{6.61} \]
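    This diagonal result is easy to confirm numerically (a quick check using `scipy.linalg.expm`; the value of \(t\) is arbitrary):

    ```python
    # Check Example 1: for the diagonal matrix A = diag(1, 2),
    # e^{tA} should equal diag(e^t, e^{2t}).
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    t = 0.5
    print(np.allclose(expm(t * A),
                      np.diag([np.exp(t), np.exp(2 * t)])))  # True
    ```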

    Example \(\PageIndex{2}\)

    Evaluate \(e^{t A}\) for \(A=\left(\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right)\).

    We first note that

    \[A^{2}=\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right)\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right)=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)=I \nonumber \]

    Therefore,

    \[A^{n}=\left\{\begin{array}{cc} A, & n \text { odd } \\ I, & n \text { even } \end{array}\right.\nonumber \]

    Then, we have

    \[\begin{aligned} e^{t A} &=I+t A+\dfrac{t^{2}}{2 !} A^{2}+\dfrac{t^{3}}{3 !} A^{3}+\cdots \\ &=I+t A+\dfrac{t^{2}}{2 !} I+\dfrac{t^{3}}{3 !} A+\cdots \\ &=\left(\begin{array}{cc} 1+\dfrac{t^{2}}{2 !}+\dfrac{t^{4}}{4 !}+\cdots & t+\dfrac{t^{3}}{3 !}+\dfrac{t^{5}}{5 !}+\cdots \\ t+\dfrac{t^{3}}{3 !}+\dfrac{t^{5}}{5 !}+\cdots & 1+\dfrac{t^{2}}{2 !}+\dfrac{t^{4}}{4 !}+\cdots \end{array}\right) \\ &=\left(\begin{array}{cc} \cosh t & \sinh t \\ \sinh t & \cosh t \end{array}\right) \end{aligned} \label{6.62} \]
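    Again, a short numerical check of the hyperbolic result (a sketch using `scipy.linalg.expm`; the value of \(t\) is arbitrary):

    ```python
    # Check Example 2: for A = [[0, 1], [1, 0]], e^{tA} should be the
    # matrix [[cosh t, sinh t], [sinh t, cosh t]].
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    t = 0.8
    expected = np.array([[np.cosh(t), np.sinh(t)],
                         [np.sinh(t), np.cosh(t)]])
    print(np.allclose(expm(t * A), expected))  # True
    ```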

    Since summing these infinite series might be difficult, we will now investigate the solutions of planar systems to see if we can find other approaches for solving linear systems using matrix methods. We begin by recalling the solution to the problem in Example 6.2.3.2. We obtained the solution to this system as

    \[\begin{array}{r} x(t)=c_{1} e^{t}+c_{2} e^{-4 t} \\ y(t)=\dfrac{1}{3} c_{1} e^{t}-\dfrac{1}{2} c_{2} e^{-4 t} \end{array} \label{6.63} \]

    This can be rewritten using matrix operations. Namely, we first write the solution in vector form.

    \[ \begin{aligned} \mathbf{x} &=\left(\begin{array}{c} x(t) \\ y(t) \end{array}\right) \\ &=\left(\begin{array}{c} c_{1} e^{t}+c_{2} e^{-4 t} \\ \dfrac{1}{3} c_{1} e^{t}-\dfrac{1}{2} c_{2} e^{-4 t} \end{array}\right) \\ &=\left(\begin{array}{c} c_{1} e^{t} \\ \dfrac{1}{3} c_{1} e^{t} \end{array}\right)+\left(\begin{array}{c} c_{2} e^{-4 t} \\ -\dfrac{1}{2} c_{2} e^{-4 t} \end{array}\right) \\ &=c_{1}\left(\begin{array}{c} 1 \\ \dfrac{1}{3} \end{array}\right) e^{t}+c_{2}\left(\begin{array}{c} 1 \\ -\dfrac{1}{2} \end{array}\right) e^{-4 t} \end{aligned}\label{6.64} \]

    We see that our solution is in the form of a linear combination of vectors of the form

    \[\mathbf{x}=\mathbf{v} e^{\lambda t} \nonumber \]

    with \(\mathbf{v}\) a constant vector and \(\lambda\) a constant number. This is similar to how we began to find solutions to second order constant coefficient equations. So, for the general problem \(\mathbf{x}^{\prime}=A \mathbf{x}\) we insert this guess. Thus,

    \[ \begin{aligned} \mathbf{x}^{\prime} &=A \mathbf{x} \Rightarrow \\ \lambda \mathbf{v} e^{\lambda t} &=A \mathbf{v} e^{\lambda t} \end{aligned} \label{6.65} \]

    For this to be true for all \(t\), we have that

    \[A \mathbf{v}=\lambda \mathbf{v} \nonumber \]

    This is an eigenvalue problem. \(A\) is a \(2 \times 2\) matrix for our problem, but could easily be generalized to a system of \(n\) first order differential equations. We will confine our remarks for now to planar systems. However, we need to recall how to solve eigenvalue problems and then see how solutions of eigenvalue problems can be used to obtain solutions to our systems of differential equations.
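    The eigenvalue approach can be previewed numerically. In the sketch below, the matrix \(A\) is an assumption: it is reconstructed so that its eigenpairs match the quoted solution (\(\lambda=1\) with \(\mathbf{v}=(1, \tfrac{1}{3})\), and \(\lambda=-4\) with \(\mathbf{v}=(1, -\tfrac{1}{2})\)), and may differ from the system actually used in Example 6.2.3.2:

    ```python
    # Sketch of the eigenvalue approach.  A is reconstructed (an assumption)
    # so its eigenpairs match the quoted solution: lambda = 1 with
    # v = (1, 1/3), and lambda = -4 with v = (1, -1/2).
    import numpy as np

    A = np.array([[-1.0, 6.0],
                  [1.0, -2.0]])

    vals, vecs = np.linalg.eig(A)
    print(sorted(vals))          # ≈ [-4.0, 1.0]

    # Each eigenpair gives a solution x = v e^{lambda t}; check A v = lambda v.
    for lam, v in zip(vals, vecs.T):
        print(np.allclose(A @ v, lam * v))  # True
    ```

    Each eigenpair contributes one term \(\mathbf{v} e^{\lambda t}\) to the general solution, exactly as in the linear combination displayed above.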


    This page titled 6.3: Matrix Formulation is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
