
2.3: Matrix Formulation


    We have investigated several linear systems in the plane, and in the next chapter we will use some of these ideas to investigate nonlinear systems. To do so, we need a deeper insight into the solutions of planar systems. So, in this section we will recast first order linear systems in matrix form. This will lead to a better understanding of first order systems and allow for extensions to higher dimensions and to the solution of nonhomogeneous equations later in this chapter.

    We start with the usual homogeneous system in Equation (2.5). Let the unknowns be represented by the vector

    \(\mathbf{x}(t)=\left(\begin{array}{l}
    x(t) \\
    y(t)
    \end{array}\right)\)

    Then we have that

    \(\mathbf{x}^{\prime}=\left(\begin{array}{l}
    x^{\prime} \\
    y^{\prime}
    \end{array}\right)=\left(\begin{array}{l}
    a x+b y \\
    c x+d y
    \end{array}\right)=\left(\begin{array}{ll}
    a & b \\
    c & d
    \end{array}\right)\left(\begin{array}{l}
    x \\
    y
    \end{array}\right) \equiv A \mathbf{x}\)

    Here we have introduced the coefficient matrix \(A\). This is a first order vector differential equation,

    \[\mathbf{x}^{\prime}=A \mathbf{x}. \nonumber \]

    Formally, we can write the solution as

    \[\mathbf{x}=e^{A t} \mathbf{x}_{0}. \nonumber \]
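
    This formal solution can be explored numerically. The sketch below (not part of the original text) is a minimal check, using an arbitrary \(2 \times 2\) coefficient matrix and initial condition, that \(e^{At}\mathbf{x}_{0}\), computed with SciPy's matrix exponential, agrees with a direct numerical integration of \(\mathbf{x}^{\prime}=A \mathbf{x}\).

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    # Illustrative 2x2 coefficient matrix and initial condition (arbitrary choices).
    A = np.array([[-1.0, 6.0],
                  [1.0, -2.0]])
    x0 = np.array([1.0, 0.0])

    # Formal solution x(t) = e^{At} x0 evaluated at t = 0.5.
    t = 0.5
    x_exp = expm(A * t) @ x0

    # Direct numerical integration of x' = A x for comparison.
    sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)

    print(x_exp)         # matrix-exponential solution
    print(sol.y[:, -1])  # numerical solution; the two should agree closely
    ```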

    We would like to investigate the solution of our system. Our investigations will lead to new techniques for solving linear systems using matrix methods.
    We begin by recalling the solution of the specific problem (2.12), which we found earlier to be

    \[\begin{gathered}
    x(t)=c_{1} e^{t}+c_{2} e^{-4 t}, \\
    y(t)=\dfrac{1}{3} c_{1} e^{t}-\dfrac{1}{2} c_{2} e^{-4 t}
    \end{gathered} \label{2.35} \]

    This can be rewritten using matrix operations. Namely, we first write the solution in vector form.

    \[\begin{aligned}
    \mathbf{x} &=\left(\begin{array}{c}
    x(t) \\
    y(t)
    \end{array}\right) \\
    &=\left(\begin{array}{c}
    c_{1} e^{t}+c_{2} e^{-4 t} \\
    \dfrac{1}{3} c_{1} e^{t}-\dfrac{1}{2} c_{2} e^{-4 t}
    \end{array}\right) \\
    &=\left(\begin{array}{c}
    c_{1} e^{t} \\
    \dfrac{1}{3} c_{1} e^{t}
    \end{array}\right)+\left(\begin{array}{c}
    c_{2} e^{-4 t} \\
    -\dfrac{1}{2} c_{2} e^{-4 t}
    \end{array}\right) \\
    &=c_{1}\left(\begin{array}{c}
    1 \\
    \dfrac{1}{3}
    \end{array}\right) e^{t}+c_{2}\left(\begin{array}{c}
    1 \\
    -\dfrac{1}{2}
    \end{array}\right) e^{-4 t}
    \end{aligned} \label{2.36} \]
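
    As a quick check (not part of the original text), the sketch below verifies numerically that the linear combination in (2.36) satisfies \(\mathbf{x}^{\prime}=A \mathbf{x}\). Since system (2.12) is not restated here, the matrix \(A\) is reconstructed from the eigenvalues and vectors visible in (2.36); treat it as an inference rather than a quotation of (2.12).

    ```python
    import numpy as np

    # Eigenpairs read off from (2.36): lambda1 = 1 with v1 = (1, 1/3),
    # lambda2 = -4 with v2 = (1, -1/2).  A is reconstructed from them as
    # A = V diag(lambda) V^{-1} (an inference, not quoted from (2.12)).
    V = np.array([[1.0, 1.0],
                  [1.0 / 3.0, -1.0 / 2.0]])
    lam = np.array([1.0, -4.0])
    A = V @ np.diag(lam) @ np.linalg.inv(V)

    c1, c2 = 2.0, -3.0  # arbitrary constants in the general solution

    def x(t):
        return c1 * V[:, 0] * np.exp(lam[0] * t) + c2 * V[:, 1] * np.exp(lam[1] * t)

    def xprime(t):
        return c1 * lam[0] * V[:, 0] * np.exp(lam[0] * t) + c2 * lam[1] * V[:, 1] * np.exp(lam[1] * t)

    t = 0.7
    print(np.allclose(xprime(t), A @ x(t)))  # True: (2.36) solves x' = A x
    ```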

    We see that our solution is a linear combination of vectors of the form

    \[\mathbf{x}=\mathbf{v} e^{\lambda t} \nonumber \]

    with \(\mathbf{v}\) a constant vector and \(\lambda\) a constant. This is similar to how we began to find solutions to second order constant coefficient equations. So, for the general problem (2.3) we insert this guess. Thus,

    \[\begin{aligned}
    \mathbf{x}^{\prime} &=A \mathbf{x} \Rightarrow \\
    \lambda \mathbf{v} e^{\lambda t} &=A \mathbf{v} e^{\lambda t}
    \end{aligned} \label{2.37} \]

    For this to be true for all \(t\), we have that

    \[A \mathbf{v}=\lambda \mathbf{v} \label{2.38} \]

    This is an eigenvalue problem. For our problem \(A\) is a \(2 \times 2\) matrix, but the formulation is easily generalized to systems of \(n\) first order differential equations. We will confine our remarks for now to planar systems. First, however, we need to recall how to solve eigenvalue problems and then see how solutions of eigenvalue problems can be used to obtain solutions to our systems of differential equations.
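
    For a concrete illustration (not part of the original text), the sketch below solves the eigenvalue problem numerically with NumPy for an illustrative \(2 \times 2\) matrix and checks \(A \mathbf{v}=\lambda \mathbf{v}\) for each eigenpair; each pair then supplies a solution \(\mathbf{v} e^{\lambda t}\) of the system.

    ```python
    import numpy as np

    # Illustrative 2x2 coefficient matrix (any planar system's matrix could be used).
    A = np.array([[-1.0, 6.0],
                  [1.0, -2.0]])

    # numpy.linalg.eig returns the eigenvalues and, as columns, the eigenvectors.
    lams, V = np.linalg.eig(A)

    for lam, v in zip(lams, V.T):
        print(lam, np.allclose(A @ v, lam * v))  # A v = lambda v holds for each pair

    # Each eigenpair yields a solution v e^{lambda t}; the general solution is the
    # linear combination x(t) = c1 v1 e^{lambda1 t} + c2 v2 e^{lambda2 t}.
    ```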

    _________________________

    Recall the Maclaurin series expansion of the exponential function,

    \[e^{x}=\sum_{k=0}^{\infty} \dfrac{x^{k}}{k !}=1+x+\dfrac{x^{2}}{2 !}+\dfrac{x^{3}}{3 !}+\cdots \nonumber \]

    So, we define

    \[e^{A}=\sum_{k=0}^{\infty} \dfrac{A^{k}}{k !}=I+A+\dfrac{A^{2}}{2 !}+\dfrac{A^{3}}{3 !}+\cdots \label{2.34} \]

    In general, it is difficult to compute \(e^{A}\) unless \(A\) is diagonal.
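
    As a numerical aside (not part of the original text), the sketch below sums the truncated series (2.34) for an illustrative matrix and compares it with SciPy's expm, and also shows the easy diagonal case, where \(e^{A}\) is obtained by exponentiating the diagonal entries.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def exp_series(A, nterms=30):
        """Truncated series I + A + A^2/2! + ... from (2.34)."""
        result = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for k in range(1, nterms):
            term = term @ A / k          # builds A^k / k! incrementally
            result = result + term
        return result

    A = np.array([[-1.0, 6.0],
                  [1.0, -2.0]])          # illustrative matrix
    print(np.allclose(exp_series(A), expm(A)))   # truncated series agrees with expm

    # The diagonal case is easy: e^D just exponentiates the diagonal entries.
    D = np.diag([1.0, -4.0])
    print(np.allclose(expm(D), np.diag(np.exp(np.diag(D)))))
    ```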


    This page titled 2.3: Matrix Formulation is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.