
3.3: Linear systems of ODEs


    First let us talk about matrix or vector valued functions. Such a function is just a matrix whose entries depend on some variable. If \(t\) is the independent variable, we write a vector valued function \( \vec {x} (t) \) as

    \[ \vec {x} (t) = \begin {bmatrix} x_1(t) \\ x_2 (t) \\ \vdots \\ x_n (t) \end {bmatrix} \nonumber \]

    Similarly a matrix valued function \( A(t) \) is

    \[ A (t) = \begin {bmatrix} a_{11} (t) & a_{12} (t) & \cdots & a_{1n} (t) \\ a_{21} (t) & a_ {22} (t) & \cdots & a_{2n} (t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end {bmatrix} \nonumber \]

    We can talk about the derivative \(A'(t)\) or \( \frac {dA}{dt} \). This is just the matrix valued function whose \(ij^{th}\) entry is \(a'_{ij} (t) \).

    Rules of differentiation of matrix valued functions are similar to rules for normal functions. Let \(A(t)\) and \(B(t)\) be matrix valued functions. Let \(c\) be a scalar and let \(C\) be a constant matrix. Then

    \[\begin{align}\begin{aligned} {(A(t) + B(t))}' &= A' (t) + B' (t) \\ (A(t)B(t))' &= A'(t)B(t) + A(t)B'(t) \\ (cA(t))' &= cA' (t) \\ (CA(t))' &= CA'(t) \\ (A(t)C)' &= A' (t)C \end{aligned}\end{align} \nonumber \]

    Note the order of the multiplication in the last two expressions.
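    Since the order of multiplication matters, it is worth checking the product rule on a computer. The following is a minimal numerical sanity check in Python (assuming numpy is available); the matrices \(A(t)\) and \(B(t)\) are arbitrary choices made up for this illustration. A centered finite difference of \(A(t)B(t)\) should agree with \(A'(t)B(t) + A(t)B'(t)\):

    ```python
    import numpy as np

    # Arbitrary matrix valued functions, chosen only for illustration.
    def A(t):
        return np.array([[t, t**2],
                         [1.0, np.exp(t)]])

    def B(t):
        return np.array([[np.sin(t), 0.0],
                         [t, np.cos(t)]])

    def dA(t):  # entrywise derivative of A
        return np.array([[1.0, 2*t],
                         [0.0, np.exp(t)]])

    def dB(t):  # entrywise derivative of B
        return np.array([[np.cos(t), 0.0],
                         [1.0, -np.sin(t)]])

    t, h = 1.0, 1e-6
    # Centered finite difference of the product A(t)B(t).
    lhs = (A(t + h) @ B(t + h) - A(t - h) @ B(t - h)) / (2 * h)
    # The product rule, keeping the order of the factors.
    rhs = dA(t) @ B(t) + A(t) @ dB(t)
    print(np.allclose(lhs, rhs))  # True
    ```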

    A first order linear system of ODEs is a system that can be written as the vector equation

    \[ \vec {x}' (t) = P(t) \vec {x} (t) + \vec {f} (t) \nonumber \]

    where \( P(t) \) is a matrix valued function, and \( \vec {x} (t) \) and \( \vec {f} (t) \) are vector valued functions. We will often suppress the dependence on \(t\) and only write \( \vec {x}' = P \vec {x} + \vec {f} \). A solution of the system is a vector valued function \( \vec {x} \) satisfying the vector equation.

    For example, the equations

    \[\begin{align}\begin{aligned} x'_1 &= 2tx_1 + e^tx_2 + t^2 \\ x'_2 &= \frac {x_1}{t} - x_2 + e^t \end{aligned}\end{align} \nonumber \]

    can be written as

    \[ \vec {x}' = \begin {bmatrix} 2t & e^t \\ \frac {1}{t} & -1 \end {bmatrix} \vec {x} + \begin {bmatrix} t^2 \\ e^t \end {bmatrix} \nonumber \]

    We will mostly concentrate on equations that are not just linear, but are in fact constant coefficient equations. That is, the matrix \( P\) will be constant; it will not depend on \(t\).

    When \( \vec {f} = \vec {0} \) (the zero vector), we say the system is homogeneous. For homogeneous linear systems we have the principle of superposition, just like for single homogeneous equations.

    Theorem \(\PageIndex{1}\): Superposition

    Let \( \vec {x}' = P \vec {x} \) be a linear homogeneous system of ODEs. Suppose that \( \vec {x}_1, \dots, \vec {x}_n \) are \(n\) solutions of the equation. Then

    \[ \vec {x} = c_1 \vec {x}_1 + c_2 \vec {x}_2 + \dots + c_n \vec {x}_n \label{eq:12} \]

    is also a solution. Furthermore, if this is a system of \(n\) equations (\(P\) is \(n \times n\)), and \( \vec {x}_1, \dots , \vec {x}_n \) are linearly independent, then every solution can be written as \(\eqref{eq:12}\).

    Linear independence for vector valued functions is the same idea as for normal functions. The vector valued functions \( \vec {x}_1, \vec {x}_2, \dots, \vec {x}_n \) are linearly independent when

    \[ c_1 \vec {x}_1 + c_2 \vec {x}_2 + \dots + c_n \vec {x}_n = \vec {0} \nonumber \]

    has only the solution \( c_1 = c_2 = \dots = c_n = 0 \), where the equation must hold for all \(t\).

    Example 3.3.1

    \( \vec {x}_1 = \begin {bmatrix} t^2 \\ t \end {bmatrix}, \vec {x}_2 = \begin {bmatrix} 0 \\ {1 + t } \end {bmatrix}, \vec {x}_3 = \begin {bmatrix} -t^2 \\ 1 \end {bmatrix} \) are linearly dependent because \( \vec {x}_1 + \vec {x}_3 = \vec {x}_2 \), and this holds for all \(t\). So \(c_1 = 1\), \(c_2 = -1\), and \(c_3 = 1\) above will work.

    On the other hand if we change the example just slightly \( \vec {x}_1 = \begin {bmatrix} t^2 \\ t \end {bmatrix}, \vec {x}_2 = \begin {bmatrix} 0 \\ t \end {bmatrix}, \vec {x}_3 = \begin {bmatrix} -t^2 \\ 1 \end {bmatrix} \), then the functions are linearly independent. First write \( c_1 \vec {x}_1 + c_2 \vec {x}_2 + c_3 \vec {x}_3 = \vec {0} \) and note that it has to hold for all \(t\). We get that

    \( c_1 \vec {x}_1 + c_2 \vec {x}_2 + c_3 \vec {x}_3 = \begin {bmatrix} c_1t^2 - c_3t^2 \\ c_1t + c_2t + c_3 \end {bmatrix} = \begin {bmatrix} 0 \\ 0 \end {bmatrix} \)

    In other words, \( c_1t^2 - c_3t^2 = 0 \) and \( c_1t + c_2t + c_3 = 0 \). If we set \(t = 0\), then the second equation becomes \(c_3 = 0 \). The first equation then becomes \(c_1t^2 = 0\) for all \(t\), and so \(c_1 = 0 \). Thus the second equation is just \(c_2t = 0\), which means \(c_2 = 0\). So \(c_1 = c_2 = c_3 = 0 \) is the only solution, and \( \vec {x}_1, \vec {x}_2 \), and \(\vec {x}_3\) are linearly independent.
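    If you would rather automate this kind of check, here is a short sketch in Python (assuming sympy is available). We demand that the combination vanish identically in \(t\), so every coefficient of every power of \(t\) must be zero:

    ```python
    import sympy as sp

    t, c1, c2, c3 = sp.symbols('t c1 c2 c3')
    x1 = sp.Matrix([t**2, t])
    x2 = sp.Matrix([0, t])
    x3 = sp.Matrix([-t**2, 1])

    combo = c1*x1 + c2*x2 + c3*x3
    # Each component must be the zero polynomial in t, so collect the
    # coefficient of each power of t and require them all to be zero.
    eqs = []
    for comp in combo:
        eqs.extend(sp.Poly(comp, t).coeffs())

    print(sp.solve(eqs, [c1, c2, c3]))  # {c1: 0, c2: 0, c3: 0}
    ```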

    The linear combination \( c_1 \vec {x}_1 + c_2 \vec {x}_2 + \dots + c_n \vec {x}_n \) can always be written as

    \[ X (t) \vec {c} \nonumber \]

    where \( X (t) \) is the matrix with columns \(\vec {x}_1, \dots , \vec {x}_n \), and \( \vec {c} \) is the column vector with entries \( c_1, \dots , c_n \). The matrix valued function \( X (t) \) is called the fundamental matrix, or the fundamental matrix solution.
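    To see the notation in action, here is a small Python sketch (again assuming sympy) that packages the three independent functions from Example 3.3.1 above as the columns of \( X(t) \) and forms the product \( X(t) \vec{c} \):

    ```python
    import sympy as sp

    t, c1, c2, c3 = sp.symbols('t c1 c2 c3')
    # Columns of X(t) are the vector valued functions x1, x2, x3.
    X = sp.Matrix([[t**2, 0, -t**2],
                   [t,    t,  1   ]])
    c = sp.Matrix([c1, c2, c3])
    # X(t)*c is exactly the linear combination c1*x1 + c2*x2 + c3*x3:
    # a 2x1 vector with entries (c1 - c3)t^2 and (c1 + c2)t + c3.
    sp.pprint(X * c)
    ```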

    To solve nonhomogeneous first order linear systems, we use the same technique as we applied to solve single linear nonhomogeneous equations.

    Theorem \(\PageIndex{2}\)

    Let \( \vec {x}' = P \vec {x} + \vec {f} \) be a linear system of ODEs. Suppose \( \vec {x}_p\) is one particular solution. Then every solution can be written as

    \[ \vec {x} = \vec {x}_c + \vec {x}_p \nonumber \]

    where \( \vec {x}_c \) is a solution to the associated homogeneous equation \( (\vec {x}' = P \vec {x}) \).

    So the procedure will be the same as for single equations. We find a particular solution to the nonhomogeneous equation, then we find the general solution to the associated homogeneous equation, and finally we add the two together.

    Alright, suppose you have found the general solution of \( \vec {x}' = P \vec {x} + \vec {f} \). Now you are given an initial condition of the form \[ \vec {x} (t_0) = \vec {b} \nonumber \] for some constant vector \( \vec {b} \). Suppose that \( X (t) \) is the fundamental matrix solution of the associated homogeneous equation (that is, the columns of \( X (t) \) are linearly independent solutions). The general solution can be written as

    \[ \vec {x} (t) = X (t) \vec {c} + \vec {x}_p (t) \nonumber \]

    We are seeking a vector \(\vec {c} \) such that

    \[ \vec {b} = \vec {x} (t_0) = X (t_0) \vec {c} + \vec {x}_p (t_0) \nonumber \]

    In other words, we are solving for \( \vec {c} \) the nonhomogeneous system of linear equations

    \[ X(t_0) \vec {c} = \vec {b} - \vec {x}_p (t_0) \nonumber \]
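    Computationally, this last step is a plain linear solve. Here is a minimal Python sketch (assuming numpy; the names X0, b, and xp0 are placeholders introduced here for \( X(t_0) \), \( \vec{b} \), and \( \vec{x}_p(t_0) \)):

    ```python
    import numpy as np

    def fit_constants(X0, b, xp0):
        """Solve X(t_0) c = b - x_p(t_0) for the constant vector c."""
        return np.linalg.solve(X0, b - xp0)

    # The system in Example 3.3.2 below is homogeneous, so x_p = 0 there;
    # with X(0) and b from that example:
    X0 = np.array([[1.0, 0.0],
                   [0.5, 1.0]])
    b = np.array([1.0, 2.0])
    print(fit_constants(X0, b, np.zeros(2)))  # [1.  1.5]
    ```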

    Example 3.3.2

    In Section 3.1 we solved the system

    \[\begin{align}\begin{aligned} x'_1 &= x_1 \\ x'_2 &= x_1 - x_2 \end{aligned}\end{align} \nonumber \]

    with initial conditions \( x_1(0) = 1, x_2 (0) = 2\).

    Solution

    This is a homogeneous system, so \( \vec {f} (t) = \vec {0} \). We write the system and the initial conditions as

    \[ \vec {x} ' = \begin {bmatrix} 1 & 0 \\ 1 & -1 \end {bmatrix} \vec{x}, \quad \vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \nonumber \]

    We found that the general solution is \( x_1 = C_1 e^t \) and \( x_2 = \frac{C_1}{2} e^t + C_2 e^{-t} \). Letting \( C_1=1 \) and \(C_2=0\), we obtain the solution \( \begin{bmatrix} e^t \\ \frac{1}{2}e^t \end{bmatrix} \). Letting \( C_1=0 \) and \(C_2=1\), we obtain \( \begin{bmatrix} 0 \\ e^{-t} \end{bmatrix} \). These two solutions are linearly independent, as can be seen by setting \( t=0 \) and noting that the resulting constant vectors are linearly independent. In matrix notation, the fundamental matrix solution is, therefore,

    \[ X(t) = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}e^t & e^{-t} \end{bmatrix} \nonumber \]

    Hence to solve the initial value problem, we solve the equation

    \[ X(0)\vec{c} = \vec{b} \nonumber \]

    or in other words,

    \[ \begin{bmatrix} 1 & 0 \\ \frac{1}{2} & 1 \end{bmatrix} \vec{c} =\begin{bmatrix} 1 \\ 2 \end{bmatrix} \nonumber \]

    After a single elementary row operation (subtract \(\frac{1}{2}\) times the first row from the second), we find \( \vec{c} = \begin{bmatrix} 1 \\ \frac{3}{2} \end{bmatrix} \). Our solution is

    \[ \vec{x}(t) = X(t) \vec{c} = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}e^{t} & e^{-t} \end{bmatrix} \begin{bmatrix} 1 \\ \frac{3}{2} \end{bmatrix} = \begin{bmatrix} e^t \\ \frac{1}{2} e^t + \frac{3}{2} e^{-t} \end{bmatrix} \nonumber \]

    This agrees with our previous solution from Section 3.1.
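    As a final sanity check, one can integrate the system numerically and compare with the closed-form answer. A short sketch, assuming numpy and scipy are available:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Right-hand side of x1' = x1, x2' = x1 - x2.
    def rhs(t, x):
        return [x[0], x[0] - x[1]]

    # Integrate from t = 0 to t = 2 with the initial condition (1, 2).
    sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 2.0], rtol=1e-10, atol=1e-12)

    # Compare against x(t) = (e^t, (1/2)e^t + (3/2)e^(-t)) at the endpoint.
    T = sol.t[-1]
    exact = [np.exp(T), 0.5 * np.exp(T) + 1.5 * np.exp(-T)]
    print(np.allclose(sol.y[:, -1], exact, atol=1e-6))  # True
    ```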

