
6.5: Solving Constant Coefficient Systems in 2D


    Before proceeding to examples, we first indicate the types of solutions that can arise when solving a homogeneous, constant coefficient system of first order differential equations.

    We begin with the linear system of differential equations in matrix form.

    \[\dfrac{d \mathbf{x}}{d t}=\left(\begin{array}{ll} a & b \\ c & d \end{array}\right) \mathbf{x}=A \mathbf{x} \nonumber \]

    The type of behavior depends upon the eigenvalues of matrix \(A\). The procedure is to determine the eigenvalues and eigenvectors and use them to construct the general solution.

    If we have an initial condition, \(\mathbf{x}\left(t_{0}\right)=\mathbf{x}_{0}\), we can determine the two arbitrary constants in the general solution in order to obtain the particular solution. Thus, if \(\mathbf{x}_{1}(t)\) and \(\mathbf{x}_{2}(t)\) are two linearly independent solutions \(^{2}\), then the general solution is given as

    2

    Recall that linear independence means \(c_{1} \mathbf{x}_{1}(t)+c_{2} \mathbf{x}_{2}(t)=0\) for all \(t\) if and only if \(c_{1}=c_{2}=0\). The reader should derive the condition on the \(\mathbf{x}_{i}\) for linear independence.

    \[\mathbf{x}(t)=c_{1} \mathbf{x}_{1}(t)+c_{2} \mathbf{x}_{2}(t) \nonumber \]

    Then, setting \(t=t_{0}\), we get two linear equations for \(c_{1}\) and \(c_{2}\):

    \[c_{1} \mathbf{x}_{1}\left(t_{0}\right)+c_{2} \mathbf{x}_{2}\left(t_{0}\right)=\mathbf{x}_{0} \nonumber \]
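    This linear system for the constants can also be solved numerically; a minimal sketch using NumPy, with hypothetical values for the two independent solutions evaluated at the initial time:

    ```python
    import numpy as np

    # Hypothetical values of the two independent solutions at the initial time.
    x1_0 = np.array([1.0, 2.0])   # x_1 at the initial time
    x2_0 = np.array([1.0, -1.0])  # x_2 at the initial time
    x0 = np.array([3.0, 0.0])     # initial condition x_0

    # Solve c1*x1_0 + c2*x2_0 = x0 as a 2x2 linear system,
    # stacking x1_0 and x2_0 as the columns of the coefficient matrix.
    c = np.linalg.solve(np.column_stack([x1_0, x2_0]), x0)
    c1, c2 = c                    # here c1 = 1, c2 = 2
    ```

    The columns of the coefficient matrix are the solution vectors, so a unique pair \((c_{1}, c_{2})\) exists exactly when the two solutions are linearly independent at the initial time.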

    The major work is in finding the linearly independent solutions. This depends upon the different types of eigenvalues that one obtains from solving the eigenvalue equation, \(\operatorname{det}(A-\lambda I)=0\). The nature of these roots indicates the form of the general solution. In Table \(6.1\) we summarize the classification of solutions in terms of the eigenvalues of the coefficient matrix. We first make some general remarks about the plausibility of these solutions and then provide examples in the following section to clarify the matrix methods for our two-dimensional systems.
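    This classification can be carried out numerically; a minimal sketch using NumPy, with a hypothetical coefficient matrix:

    ```python
    import numpy as np

    A = np.array([[3.0, -2.0],
                  [4.0, -1.0]])   # a hypothetical coefficient matrix

    # Roots of the eigenvalue equation det(A - lambda*I) = 0.
    eigvals = np.linalg.eigvals(A)

    if np.iscomplexobj(eigvals) and eigvals.imag.any():
        case = "Case III: two complex conjugate roots"
    elif np.isclose(eigvals[0], eigvals[1]):
        case = "Case II: one repeated root"
    else:
        case = "Case I: two real, distinct roots"
    ```

    For this matrix the characteristic equation is \(\lambda^{2}-2\lambda+5=0\), so the roots are \(1 \pm 2i\) and the system falls under Case III.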

    The construction of the general solution in Case I is straightforward. However, the other two cases need a little explanation.

    Let’s consider Case III. Note that since the original system of equations does not contain any \(i\)’s, we would expect real solutions. So, we look at the real and imaginary parts of the complex solution. We have that the complex solution satisfies the equation

    \[\dfrac{d}{d t}[\operatorname{Re}(\mathbf{y}(t))+i \operatorname{Im}(\mathbf{y}(t))]=A[\operatorname{Re}(\mathbf{y}(t))+i \operatorname{Im}(\mathbf{y}(t))]\nonumber \]

    Differentiating the sum and splitting the equation into real and imaginary parts gives

    \[\dfrac{d}{d t} \operatorname{Re}(\mathbf{y}(t))+i \dfrac{d}{d t} \operatorname{Im}(\mathbf{y}(t))=A[\operatorname{Re}(\mathbf{y}(t))]+i A[\operatorname{Im}(\mathbf{y}(t))]\nonumber \]

    Setting the real and imaginary parts equal, we have

    \[\dfrac{d}{d t} \operatorname{Re}(\mathbf{y}(t))=A[\operatorname{Re}(\mathbf{y}(t))]\nonumber \]

    and

    \[\dfrac{d}{d t} \operatorname{Im}(\mathbf{y}(t))=A[\operatorname{Im}(\mathbf{y}(t))].\nonumber \]

    Therefore, the real and imaginary parts are each solutions of the system. Since they are linearly independent, the general solution can be written as a linear combination of these expressions.

    Table \(\PageIndex{1}\): Solutions Types for Planar Systems with Constant Coefficients
    Classification of the Solutions for Two Linear First Order Differential Equations

    1. Case I: Two real, distinct roots.

    Solve the eigenvalue problem \(A \mathbf{v}=\lambda \mathbf{v}\) for each eigenvalue, obtaining two eigenvectors \(\mathbf{v}_{1}, \mathbf{v}_{2}\). Then write the general solution as the linear combination \(\mathbf{x}(t)=c_{1} e^{\lambda_{1} t} \mathbf{v}_{1}+c_{2} e^{\lambda_{2} t} \mathbf{v}_{2}\).

    2. Case II: One repeated root.
    Solve the eigenvalue problem \(A \mathbf{v}=\lambda \mathbf{v}\) for the single eigenvalue \(\lambda\), obtaining the first eigenvector \(\mathbf{v}_{1}\). One then needs a second linearly independent solution. This is obtained by solving the nonhomogeneous problem \((A-\lambda I) \mathbf{v}_{2}=\mathbf{v}_{1}\) for \(\mathbf{v}_{2}\).

    The general solution is then given by \(\mathbf{x}(t)=c_{1} e^{\lambda t} \mathbf{v}_{1}+c_{2} e^{\lambda t}\left(\mathbf{v}_{2}+t \mathbf{v}_{1}\right)\).

    3. Case III: Two complex conjugate roots.
    Solve the eigenvalue problem \(A \mathbf{v}=\lambda \mathbf{v}\) for one eigenvalue, \(\lambda=\alpha+i \beta\), obtaining one eigenvector \(\mathbf{v}\). Note that this eigenvector may have complex entries. Thus, one can write the vector \(\mathbf{y}(t)=e^{\lambda t} \mathbf{v}=e^{\alpha t}(\cos \beta t+i \sin \beta t) \mathbf{v}\). Now, construct two linearly independent solutions to the problem using the real and imaginary parts of \(\mathbf{y}(t)\): \(\mathbf{y}_{1}(t)=\operatorname{Re}(\mathbf{y}(t))\) and \(\mathbf{y}_{2}(t)=\operatorname{Im}(\mathbf{y}(t))\). Then the general solution can be written as \(\mathbf{x}(t)=c_{1} \mathbf{y}_{1}(t)+c_{2} \mathbf{y}_{2}(t)\).
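    The Case III recipe can be checked numerically; a minimal sketch using NumPy, with a hypothetical matrix whose eigenvalues are \(1 \pm 2i\):

    ```python
    import numpy as np

    A = np.array([[1.0, -2.0],
                  [2.0,  1.0]])   # hypothetical matrix; eigenvalues are 1 +/- 2i

    lam, V = np.linalg.eig(A)
    lam0, v = lam[0], V[:, 0]     # one eigenvalue alpha + i*beta and its eigenvector

    def x(t, c1, c2):
        """General real solution c1*Re(y(t)) + c2*Im(y(t)), y(t) = exp(lam0*t) v."""
        y = np.exp(lam0 * t) * v
        return c1 * y.real + c2 * y.imag

    # Verify x' = A x at one time using a centered finite difference.
    h, t0 = 1e-6, 0.3
    lhs = (x(t0 + h, 1.0, 2.0) - x(t0 - h, 1.0, 2.0)) / (2 * h)
    rhs = A @ x(t0, 1.0, 2.0)     # agrees with lhs to finite-difference accuracy
    ```

    Since the real and imaginary parts each solve the system, any linear combination of them does as well, which is what the finite-difference check confirms.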

    We now turn to Case II. Writing the system of first order equations as a second order equation for \(x(t)\), whose characteristic equation has the single repeated root \(\lambda=\dfrac{1}{2}(a+d)\), we have that the general solution takes the form

    \[x(t)=\left(c_{1}+c_{2} t\right) e^{\lambda t} \nonumber \]

    This suggests that the second linearly independent solution involves a term of the form \(\mathbf{v} t e^{\lambda t}\). It turns out that the guess that works is

    \[\mathbf{x}=t e^{\lambda t} \mathbf{v}_{1}+e^{\lambda t} \mathbf{v}_{2}\nonumber \]

    Inserting this guess into the system \(\mathbf{x}^{\prime}=A \mathbf{x}\) yields

    \[\begin{aligned} \left(t e^{\lambda t} \mathbf{v}_{1}+e^{\lambda t} \mathbf{v}_{2}\right)^{\prime} &=A\left[t e^{\lambda t} \mathbf{v}_{1}+e^{\lambda t} \mathbf{v}_{2}\right] . \\ e^{\lambda t} \mathbf{v}_{1}+\lambda t e^{\lambda t} \mathbf{v}_{1}+\lambda e^{\lambda t} \mathbf{v}_{2} &=\lambda t e^{\lambda t} \mathbf{v}_{1}+e^{\lambda t} A \mathbf{v}_{2} . \\ e^{\lambda t}\left(\mathbf{v}_{1}+\lambda \mathbf{v}_{2}\right) &=e^{\lambda t} A \mathbf{v}_{2} . \end{aligned}\label{6.70} \]

    Here \(A \mathbf{v}_{1}=\lambda \mathbf{v}_{1}\) was used in the second line. Noting this is true for all \(t\), we find that

    \[\mathbf{v}_{1}+\lambda \mathbf{v}_{2}=A \mathbf{v}_{2} \nonumber \]

    Therefore,

    \[(A-\lambda I) \mathbf{v}_{2}=\mathbf{v}_{1}\nonumber \]

    We know everything except for \(\mathbf{v}_{2}\). So, we just solve for it and obtain the second linearly independent solution.
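    This last step can also be carried out numerically; a minimal sketch using NumPy, with a hypothetical defective matrix. Since \(A-\lambda I\) is singular, the sketch uses a least-squares solve to pick one particular \(\mathbf{v}_{2}\):

    ```python
    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])    # hypothetical defective matrix, single eigenvalue 1
    lam = 1.0
    v1 = np.array([1.0, 0.0])     # eigenvector: (A - lam*I) v1 = 0

    # (A - lam*I) is singular, so solve (A - lam*I) v2 = v1 in the
    # least-squares sense; any particular solution works.
    v2, *_ = np.linalg.lstsq(A - lam * np.eye(2), v1, rcond=None)

    def x(t, c1, c2):
        """General solution c1 e^{lam t} v1 + c2 e^{lam t} (v2 + t v1)."""
        return np.exp(lam * t) * (c1 * v1 + c2 * (v2 + t * v1))

    # Verify x' = A x at one time using a centered finite difference.
    h, t0 = 1e-6, 0.5
    lhs = (x(t0 + h, 1.0, 1.0) - x(t0 - h, 1.0, 1.0)) / (2 * h)
    rhs = A @ x(t0, 1.0, 1.0)
    ```

    Any particular \(\mathbf{v}_{2}\) works because adding a multiple of \(\mathbf{v}_{1}\) to it only shifts the \(c_{1}\) term in the general solution.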


    This page titled 6.5: Solving Constant Coefficient Systems in 2D is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.