
6.7: Theory of Homogeneous Constant Coefficient Systems


Figure \(\PageIndex{1}\): Solution Classification for Planar Systems.

There is a general theory for solving homogeneous, constant coefficient systems of first order differential equations. We begin by once again recalling the specific problem (6.16). We obtained the solution to this system as

    \[ \begin{gathered} x(t)=c_{1} e^{t}+c_{2} e^{-4 t} \\ y(t)=\dfrac{1}{3} c_{1} e^{t}-\dfrac{1}{2} c_{2} e^{-4 t} \end{gathered}\label{6.92} \]

    This time we rewrite the solution as

\[ \begin{aligned} \mathbf{x} &=\left(\begin{array}{c} c_{1} e^{t}+c_{2} e^{-4 t} \\ \dfrac{1}{3} c_{1} e^{t}-\dfrac{1}{2} c_{2} e^{-4 t} \end{array}\right) \\ &=\left(\begin{array}{cc} e^{t} & e^{-4 t} \\ \dfrac{1}{3} e^{t} & -\dfrac{1}{2} e^{-4 t} \end{array}\right)\left(\begin{array}{c} c_{1} \\ c_{2} \end{array}\right) \\ & \equiv \Phi(t) \mathbf{C} \end{aligned} \label{6.93} \]

Thus, we can write the general solution as a \(2 \times 2\) matrix \(\Phi\) times an arbitrary constant vector. The matrix \(\Phi\) consists of two columns that are linearly independent solutions of the original system. This matrix is an example of what we will define as the fundamental matrix of solutions of the system. So, determining the fundamental matrix will allow us to find the general solution of the system upon multiplication by a constant vector. In fact, we will see that it also leads to a simple representation of the solution of the initial value problem for our system. We now outline the general theory.
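As a quick check that the two columns of \(\Phi(t)\) in (6.93) are linearly independent, one can compute \(\operatorname{det} \Phi(t)\) symbolically. The following is a minimal sketch using sympy; the code is illustrative and not part of the original text.

```python
import sympy as sp

t = sp.symbols('t')

# Columns of Phi(t) taken from Equation (6.93)
Phi = sp.Matrix([[sp.exp(t),                   sp.exp(-4*t)],
                 [sp.Rational(1, 3)*sp.exp(t), -sp.Rational(1, 2)*sp.exp(-4*t)]])

W = sp.simplify(Phi.det())
print(W)   # -5*exp(-3*t)/6, which never vanishes
```

The determinant is a nonzero multiple of \(e^{-3t}\), so the columns are linearly independent for all \(t\); this is exactly the Wronskian test introduced below.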

    Consider the homogeneous, constant coefficient system of first order differential equations

\[\begin{aligned} \dfrac{d x_{1}}{d t} &=a_{11} x_{1}+a_{12} x_{2}+\ldots+a_{1 n} x_{n} \\ \dfrac{d x_{2}}{d t} &=a_{21} x_{1}+a_{22} x_{2}+\ldots+a_{2 n} x_{n} \\ &\ \ \vdots \\ \dfrac{d x_{n}}{d t} &=a_{n 1} x_{1}+a_{n 2} x_{2}+\ldots+a_{n n} x_{n} \end{aligned} \nonumber \]

    As we have seen, this can be written in the matrix form \(\mathbf{x}^{\prime}=A \mathbf{x}\), where

    \[\mathbf{x}=\left(\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array}\right)\nonumber \]

    and

    \[A=\left(\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 n} \\ a_{21} & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n n} \end{array}\right)\nonumber \]

Now, consider \(m\) vector solutions of this system: \(\phi_{1}(t), \phi_{2}(t), \ldots, \phi_{m}(t)\). These solutions are said to be linearly independent on some domain if

    \[c_{1} \phi_{1}(t)+c_{2} \phi_{2}(t)+\ldots+c_{m} \phi_{m}(t)=0\nonumber \]

    for all \(t\) in the domain implies that \(c_{1}=c_{2}=\ldots=c_{m}=0\).

Let \(\phi_{1}(t), \phi_{2}(t), \ldots, \phi_{n}(t)\) be a set of \(n\) linearly independent solutions of our system, called a fundamental set of solutions. We construct the matrix whose columns are these solutions and define it to be the fundamental matrix solution. This matrix takes the form

    \[\Phi=\left(\begin{array}{lll} \phi_{1} & \cdots & \phi_{n} \end{array}\right)=\left(\begin{array}{cccc} \phi_{11} & \phi_{12} & \cdots & \phi_{1 n} \\ \phi_{21} & \phi_{22} & \cdots & \phi_{2 n} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{n 1} & \phi_{n 2} & \cdots & \phi_{n n} \end{array}\right)\nonumber \]

What do we mean by a "matrix" solution? We have assumed that each \(\phi_{k}\) is a solution of our system; that is, \(\phi_{k}^{\prime}=A \phi_{k}\) for \(k=1, \ldots, n\). We say that \(\Phi\) is a matrix solution because it also satisfies the matrix formulation of the system of differential equations, \(\Phi^{\prime}=A \Phi\). This follows from the properties of matrix multiplication:

\[ \begin{aligned} \dfrac{d}{d t} \Phi &=\left(\begin{array}{ccc} \phi_{1}^{\prime} & \cdots & \phi_{n}^{\prime} \end{array}\right) \\ &=\left(\begin{array}{ccc} A \phi_{1} & \cdots & A \phi_{n} \end{array}\right) \\ &=A\left(\begin{array}{ccc} \phi_{1} & \cdots & \phi_{n} \end{array}\right) \\ &=A \Phi \end{aligned} \label{6.95} \]

Given a set of vector solutions of the system, when are they linearly independent? Consider a matrix solution \(\Omega(t)\) of the system whose \(n\) columns are vector solutions. Then, we define the Wronskian of \(\Omega(t)\) to be

    \[W=\operatorname{det} \Omega(t) \nonumber \]

    If \(W(t) \neq 0\), then \(\Omega(t)\) is a fundamental matrix solution.

    Before continuing, we list the fundamental matrix solutions for the set of examples in the last section. (Refer to the solutions from those examples.) Furthermore, note that the fundamental matrix solutions are not unique as one can multiply any column by a nonzero constant and still have a fundamental matrix solution.

    Example \(\PageIndex{1}\)

    \(A=\left(\begin{array}{ll}4 & 2 \\ 3 & 3\end{array}\right)\)

    \[\Phi(t)=\left(\begin{array}{cc} 2 e^{t} & e^{6 t} \\ -3 e^{t} & e^{6 t} \end{array}\right)\nonumber \]

    We should note in this case that the Wronskian is found as

    \[ \begin{aligned} W &=\operatorname{det} \Phi(t) \\ &=\left|\begin{array}{cc} 2 e^{t} & e^{6 t} \\ -3 e^{t} & e^{6 t} \end{array}\right| \\ &=5 e^{7 t} \neq 0 \end{aligned} \label{6.96} \]
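This kind of check can be automated. The short sympy sketch below (not from the original text) verifies both that \(\Phi^{\prime}=A \Phi\) and that \(W=5 e^{7 t}\) for this example.

```python
import sympy as sp

t = sp.symbols('t')

A = sp.Matrix([[4, 2],
               [3, 3]])

Phi = sp.Matrix([[2*sp.exp(t),  sp.exp(6*t)],
                 [-3*sp.exp(t), sp.exp(6*t)]])

# Phi is a matrix solution: each column solves x' = A x
assert (Phi.diff(t) - A*Phi).applyfunc(sp.simplify) == sp.zeros(2, 2)

# The Wronskian never vanishes, so Phi is a fundamental matrix solution
print(sp.simplify(Phi.det()))   # 5*exp(7*t)
```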

    Example \(\PageIndex{2}\)

    \(A=\left(\begin{array}{rr}3 & -5 \\ 1 & -1\end{array}\right)\)

    \[\Phi(t)=\left(\begin{array}{cc} e^{t}(2 \cos t-\sin t) & e^{t}(\cos t+2 \sin t) \\ e^{t} \cos t & e^{t} \sin t \end{array}\right)\nonumber \]

    Example \(\PageIndex{3}\)

    \(A=\left(\begin{array}{cc}7 & -1 \\ 9 & 1\end{array}\right)\)

    \[\Phi(t)=\left(\begin{array}{cc} e^{4 t} & e^{4 t}(1+t) \\ 3 e^{4 t} & e^{4 t}(2+3 t) \end{array}\right)\nonumber \]
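The fundamental matrix solutions quoted in Examples \(\PageIndex{2}\) and \(\PageIndex{3}\) can be verified the same way. A brief sympy sketch (again, not part of the original text) checks that each \(\Phi\) satisfies \(\Phi^{\prime}=A \Phi\) with a nonvanishing Wronskian.

```python
import sympy as sp

t = sp.symbols('t')

# (A, Phi) pairs from Examples 2 and 3
cases = [
    (sp.Matrix([[3, -5], [1, -1]]),
     sp.Matrix([[sp.exp(t)*(2*sp.cos(t) - sp.sin(t)), sp.exp(t)*(sp.cos(t) + 2*sp.sin(t))],
                [sp.exp(t)*sp.cos(t),                 sp.exp(t)*sp.sin(t)]])),
    (sp.Matrix([[7, -1], [9, 1]]),
     sp.Matrix([[sp.exp(4*t),   sp.exp(4*t)*(1 + t)],
                [3*sp.exp(4*t), sp.exp(4*t)*(2 + 3*t)]])),
]

for A, Phi in cases:
    # Each Phi is a matrix solution whose Wronskian never vanishes
    assert (Phi.diff(t) - A*Phi).applyfunc(sp.simplify) == sp.zeros(2, 2)
    print(sp.simplify(Phi.det()))   # -exp(2*t) and -exp(8*t), respectively
```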

So far we have only determined the general solution. This is done by the following steps (a short numerical sketch of the procedure appears after the list):

    Procedure for Determining the General Solution
    1. Solve the eigenvalue problem \((A-\lambda I) \mathbf{v}=0\).
2. Construct vector solutions from \(\mathbf{v} e^{\lambda t}\). The method depends on whether one has real or complex conjugate eigenvalues.
3. Form the fundamental matrix solution \(\Phi(t)\) from the vector solutions.
    4. The general solution is given by \(\mathbf{x}(t)=\Phi(t) \mathbf{C}\) for \(\mathbf{C}\) an arbitrary constant vector.
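A minimal numerical sketch of these four steps, assuming real, distinct eigenvalues, is given below; the function name fundamental_matrix, the sample matrix, and the sample constant vector are only illustrative.

```python
import numpy as np

def fundamental_matrix(A, t):
    """Fundamental matrix Phi(t) for x' = A x, assuming A has real,
    distinct eigenvalues (steps 1-3 of the procedure)."""
    # Step 1: solve the eigenvalue problem (A - lambda I) v = 0
    eigvals, eigvecs = np.linalg.eig(A)
    # Steps 2-3: column k of Phi(t) is v_k * exp(lambda_k * t)
    return eigvecs * np.exp(eigvals * t)

# Step 4: the general solution is x(t) = Phi(t) C for an arbitrary constant vector C
A = np.array([[5.0, 3.0],
              [-6.0, -4.0]])     # the coefficient matrix of Example 4 below
C = np.array([1.0, -1.0])        # an arbitrary choice of constants
print(fundamental_matrix(A, 0.5) @ C)
```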

    We are now ready to solve the initial value problem:

    \[\mathbf{x}^{\prime}=A \mathbf{x}, \quad \mathbf{x}\left(t_{0}\right)=\mathbf{x}_{0} \nonumber \]

    Starting with the general solution, we have that

    \[\mathbf{x}_{0}=\mathbf{x}\left(t_{0}\right)=\Phi\left(t_{0}\right) \mathbf{C}.\nonumber \]

As usual, we need to solve for the \(c_{k}\)'s. Using matrix methods, this is now easy. Since the Wronskian is not zero, we can invert \(\Phi\) at any value of \(t\). So, we have

    \[\mathbf{C}=\Phi^{-1}\left(t_{0}\right) \mathbf{x}_{0}\nonumber \]

Putting \(\mathbf{C}\) back into the general solution, we obtain the solution to the initial value problem:

    \[\mathbf{x}(t)=\Phi(t) \Phi^{-1}\left(t_{0}\right) \mathbf{x}_{0}\nonumber \]

    You can easily verify that this is a solution of the system and satisfies the initial condition at \(t=t_{0}\).

    The matrix combination \(\Phi(t) \Phi^{-1}\left(t_{0}\right)\) is useful. So, we will define the resulting product to be the principal matrix solution, denoting it by

    \[\Psi(t)=\Phi(t) \Phi^{-1}\left(t_{0}\right)\nonumber \]

    Thus, the solution of the initial value problem is \(\mathbf{x}(t)=\Psi(t) \mathbf{x}_{0} .\) Furthermore, we note that \(\Psi(t)\) is a solution to the matrix initial value problem

\[\Psi^{\prime}=A \Psi, \quad \Psi\left(t_{0}\right)=I\nonumber \]

    where \(I\) is the \(n \times n\) identity matrix.
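As a numerical check (not part of the original text), the sketch below reuses the eigenvector construction from the previous listing to form \(\Psi(t)=\Phi(t) \Phi^{-1}\left(t_{0}\right)\), confirms that \(\Psi\left(t_{0}\right)=I\), and compares \(\Psi(t)\) with the matrix exponential \(e^{A\left(t-t_{0}\right)}\), which it equals for constant \(A\).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[5.0, 3.0],
              [-6.0, -4.0]])
t0, t = 0.0, 0.7

def Phi(s):
    # Fundamental matrix from the eigenpairs of A (real, distinct eigenvalues)
    eigvals, eigvecs = np.linalg.eig(A)
    return eigvecs * np.exp(eigvals * s)

# Principal matrix solution Psi(t) = Phi(t) Phi(t0)^{-1}
Psi = Phi(t) @ np.linalg.inv(Phi(t0))

print(np.allclose(Phi(t0) @ np.linalg.inv(Phi(t0)), np.eye(2)))   # Psi(t0) = I
print(np.allclose(Psi, expm(A * (t - t0))))                       # Psi(t) = e^{A(t - t0)}
```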

    Matrix Solution of the Homogeneous Problem

    In summary, the matrix solution of

    \[\dfrac{d \mathbf{x}}{d t}=A \mathbf{x}, \quad \mathbf{x}\left(t_{0}\right)=\mathbf{x}_{0}\nonumber \]

    is given by

    \[\mathbf{x}(t)=\Psi(t) \mathbf{x}_{0}=\Phi(t) \Phi^{-1}\left(t_{0}\right) \mathbf{x}_{0}\nonumber \]

    where \(\Phi(t)\) is the fundamental matrix solution and \(\Psi(t)\) is the principal matrix solution.

    Example \(\PageIndex{4}\)

Let’s consider the initial value problem

    \[ \begin{aligned} &x^{\prime}=5 x+3 y \\ &y^{\prime}=-6 x-4 y \end{aligned} \label{6.97} \]

    satisfying \(x(0)=1, y(0)=2\). Find the solution of this problem.

    We first note that the coefficient matrix is

    \[A=\left(\begin{array}{cc} 5 & 3 \\ -6 & -4 \end{array}\right) \nonumber \]

    The eigenvalue equation is easily found from

    \[ \begin{aligned} 0 &=-(5-\lambda)(4+\lambda)+18 \\ &=\lambda^{2}-\lambda-2 \\ &=(\lambda-2)(\lambda+1) \end{aligned}\label{6.98} \]

    So, the eigenvalues are \(\lambda=-1,2\). The corresponding eigenvectors are found to be

    \[\mathbf{v}_{1}=\left(\begin{array}{c} 1 \\ -2 \end{array}\right), \quad \mathbf{v}_{2}=\left(\begin{array}{c} 1 \\ -1 \end{array}\right)\nonumber \]

    Now we construct the fundamental matrix solution. The columns are obtained using the eigenvectors and the exponentials, \(e^{\lambda t}:\)

\[\phi_{1}(t)=\left(\begin{array}{c} 1 \\ -2 \end{array}\right) e^{-t}, \quad \phi_{2}(t)=\left(\begin{array}{c} 1 \\ -1 \end{array}\right) e^{2 t}\nonumber \]

    So, the fundamental matrix solution is

    \[\Phi(t)=\left(\begin{array}{cc} e^{-t} & e^{2 t} \\ -2 e^{-t} & -e^{2 t} \end{array}\right)\nonumber \]

    The general solution to our problem is then

    \[\mathbf{x}(t)=\left(\begin{array}{cc} e^{-t} & e^{2 t} \\ -2 e^{-t} & -e^{2 t} \end{array}\right) \mathbf{C}\nonumber \]

where \(\mathbf{C}\) is an arbitrary constant vector.

    In order to find the particular solution of the initial value problem, we need the principal matrix solution. We first evaluate \(\Phi(0)\), then we invert it:

    \[\Phi(0)=\left(\begin{array}{cc} 1 & 1 \\ -2 & -1 \end{array}\right) \quad \Rightarrow \quad \Phi^{-1}(0)=\left(\begin{array}{cc} -1 & -1 \\ 2 & 1 \end{array}\right)\nonumber \]

    The particular solution is then

    \[ \begin{aligned} \mathbf{x}(t) &=\left(\begin{array}{cc} e^{-t} & e^{2 t} \\ -2 e^{-t} & -e^{2 t} \end{array}\right)\left(\begin{array}{cc} -1 & -1 \\ 2 & 1 \end{array}\right)\left(\begin{array}{l} 1 \\ 2 \end{array}\right) \\ &=\left(\begin{array}{cc} e^{-t} & e^{2 t} \\ -2 e^{-t} & -e^{2 t} \end{array}\right)\left(\begin{array}{c} -3 \\ 4 \end{array}\right) \\ &=\left(\begin{array}{c} -3 e^{-t}+4 e^{2 t} \\ 6 e^{-t}-4 e^{2 t} \end{array}\right) \end{aligned} \label{6.99} \]

Thus, \(x(t)=-3 e^{-t}+4 e^{2 t}\) and \(y(t)=6 e^{-t}-4 e^{2 t}\).
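One can confirm this result by direct substitution. A minimal sympy check (not part of the original text) verifies the system (6.97) and the initial conditions.

```python
import sympy as sp

t = sp.symbols('t')

x = -3*sp.exp(-t) + 4*sp.exp(2*t)
y = 6*sp.exp(-t) - 4*sp.exp(2*t)

# The pair satisfies the system x' = 5x + 3y, y' = -6x - 4y ...
assert sp.simplify(sp.diff(x, t) - (5*x + 3*y)) == 0
assert sp.simplify(sp.diff(y, t) - (-6*x - 4*y)) == 0

# ... and the initial conditions x(0) = 1, y(0) = 2
assert x.subs(t, 0) == 1 and y.subs(t, 0) == 2
```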


    This page titled 6.7: Theory of Homogeneous Constant Coefficient Systems is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.