# 3.3: Linear systems of ODEs


First let us talk about matrix or vector valued functions. Such a function is just a matrix whose entries depend on some variable. If \(t\) is the independent variable, we write a vector valued function \( \vec {x} (t) \) as

\[ \vec {x} (t) = \begin {bmatrix} x_1(t) \\ x_2 (t) \\ \vdots \\ x_n (t) \end {bmatrix} \]

Similarly a matrix valued function \( A(t) \) is

\[ A (t) = \begin {bmatrix} a_{11} (t) & a_{12} (t) & \cdots & a_{1n} (t) \\ a_{21} (t) & a_ {22} (t) & \cdots & a_{2n} (t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end {bmatrix} \]

We can talk about the derivative \(A'(t)\) or \( \frac {dA}{dt} \). This is just the matrix valued function whose \(ij^{th}\) entry is \(a'_{ij} (t) \).

Rules of differentiation of matrix valued functions are similar to rules for normal functions. Let \(A(t)\) and \(B(t)\) be matrix valued functions. Let \(c\) be a scalar and let \(C\) be a constant matrix. Then

\[ {(A(t) + B(t))}' = A' (t) + B' (t) \]

\[ (A(t)B(t))' = A'(t)B(t) + A(t)B'(t) \]

\[ (cA(t))' = cA' (t) \]

\[ (CA(t))' = CA'(t) \]

\[ (A(t)C)' = A' (t)C \]
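These rules can be checked numerically. The sketch below (our own illustration, with hypothetical matrix valued functions `A` and `B` not taken from the text) verifies the product rule \( (A(t)B(t))' = A'(t)B(t) + A(t)B'(t) \) by central differences; note that the order of the factors matters, since matrix multiplication does not commute.

```python
import numpy as np

# Two hypothetical 2x2 matrix valued functions (illustrative, not from the text).
def A(t):
    return np.array([[np.sin(t), t**2],
                     [np.exp(t), 1.0]])

def B(t):
    return np.array([[t, np.cos(t)],
                     [1.0, t**3]])

def deriv(F, t, h=1e-6):
    """Central-difference derivative of a matrix valued function, entry by entry."""
    return (F(t + h) - F(t - h)) / (2 * h)

t = 0.7
lhs = deriv(lambda s: A(s) @ B(s), t)          # (AB)'
rhs = deriv(A, t) @ B(t) + A(t) @ deriv(B, t)  # A'B + AB'
print(np.allclose(lhs, rhs, atol=1e-6))        # True: the product rule holds
```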

A first order linear system of ODEs is a system that can be written as the vector equation

\[ \vec {x}\,' (t) = P(t) \vec {x} (t) + \vec {f} (t) \]

where \( P(t) \) is a matrix valued function, and \( \vec {x} (t) \) and \( \vec {f} (t) \) are vector valued functions. We will often suppress the dependence on \(t\) and only write \( \vec {x}\,' = P \vec {x} + \vec {f} \). A solution of the system is a vector valued function \( \vec {x} \) satisfying the vector equation.

For example, the equations

\[ x'_1 = 2tx_1 + e^tx_2 + t^2 \]

\[ x'_2 = \frac {x_1}{t} - x_2 + e^t \]

can be written as

\[ \vec {x}\,' = \begin {bmatrix} 2t & e^t \\ \frac {1}{t} & -1 \end {bmatrix} \vec {x} + \begin {bmatrix} t^2 \\ e^t \end {bmatrix} \]
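In code, the vector form makes the right-hand side of the system a single expression. The sketch below (the names `P`, `f`, and `rhs` are our own, not from the text) evaluates \( P(t)\vec{x} + \vec{f}(t) \) for this example system.

```python
import numpy as np

# The example system x' = P(t) x + f(t), written as one vector-valued function.
def P(t):
    return np.array([[2 * t, np.exp(t)],
                     [1 / t, -1.0]])

def f(t):
    return np.array([t**2, np.exp(t)])

def rhs(t, x):
    """Right-hand side of x' = P(t) x + f(t)."""
    return P(t) @ x + f(t)

# At t = 1 with x = (1, 0): x1' = 2*1*1 + e*0 + 1 = 3 and x2' = 1/1 - 0 + e.
print(rhs(1.0, np.array([1.0, 0.0])))  # [3.0, 3.71828...]
```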

We will mostly concentrate on equations that are not just linear, but are in fact constant coefficient equations. That is, the matrix \( P\) will be constant; it will not depend on \(t\).

**Theorem 3.3.1.** (Superposition) *Let* \( \vec {x}\,' = P \vec {x} \) *be a linear homogeneous system of ODEs. Suppose that* \( \vec {x}_1, \dots, \vec {x}_n \) *are* \(n\) *solutions of the equation, then*

\[ \vec {x} = c_1 \vec {x}_1 + c_2 \vec {x}_2 + \dots + c_n \vec {x}_n \]

*is also a solution. Furthermore, if this is a system of* \(n\) *equations* (\(P\) *is* \(n \times n\)), *and* \( \vec {x}_1, \dots , \vec {x}_n \) *are linearly independent, then every solution can be written as a linear combination of this form.*
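Superposition can be spot-checked numerically. The sketch below uses the two solutions of the homogeneous system solved in Example 3.3.2, \( \vec{x}_1 = \left(e^t, \frac{1}{2}e^t\right) \) and \( \vec{x}_2 = (0, e^{-t}) \), and verifies that a linear combination of them still satisfies \( \vec{x}\,' = P\vec{x} \) at one sample \(t\) (a spot check, not a proof).

```python
import numpy as np

# Coefficient matrix of the homogeneous system x1' = x1, x2' = x1 - x2.
P = np.array([[1.0, 0.0],
              [1.0, -1.0]])

def combo(t, c1=2.0, c2=-3.0):
    """An arbitrary linear combination c1 x1 + c2 x2 of two known solutions."""
    x1 = np.array([np.exp(t), 0.5 * np.exp(t)])
    x2 = np.array([0.0, np.exp(-t)])
    return c1 * x1 + c2 * x2

def deriv(F, t, h=1e-6):
    return (F(t + h) - F(t - h)) / (2 * h)

t = 0.4
# Superposition: the combination's derivative equals P times the combination.
print(np.allclose(deriv(combo, t), P @ combo(t), atol=1e-6))  # True
```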

Linear independence for vector valued functions is the same idea as for normal functions. The vector valued functions \( \vec {x}_1, \vec {x}_2, \dots, \vec {x}_n \) are linearly independent when

\[ c_1 \vec {x}_1 + c_2 \vec {x}_2 + \dots + c_n \vec {x}_n = \vec {0} \]

has only the solution \( c_1 = c_2 = \dots = c_n = 0 \), where the equation must hold for all \(t\).

Example 3.3.1

\( \vec {x}_1 = \begin {bmatrix} t^2 \\ t \end {bmatrix}, \vec {x}_2 = \begin {bmatrix} 0 \\ {1 + t } \end {bmatrix}, \vec {x}_3 = \begin {bmatrix} -t^2 \\ 1 \end {bmatrix} \) are linearly dependent because \( \vec {x}_1 + \vec {x}_3 = \vec {x}_2\), and this holds for all \(t\). So \(c_1 = 1, c_2 = -1\) and \(c_3 = 1\) above will work.

On the other hand if we change the example just slightly \( \vec {x}_1 = \begin {bmatrix} t^2 \\ t \end {bmatrix}, \vec {x}_2 = \begin {bmatrix} 0 \\ t \end {bmatrix}, \vec {x}_3 = \begin {bmatrix} -t^2 \\ 1 \end {bmatrix} \), then the functions are linearly independent. First write \( c_1 \vec {x}_1 + c_2 \vec {x}_2 + c_3 \vec {x}_3 = \vec {0} \) and note that it has to hold for all \(t\). We get that

\( c_1 \vec {x}_1 + c_2 \vec {x}_2 + c_3 \vec {x}_3 = \begin {bmatrix} c_1t^2 - c_3t^2 \\ c_1t + c_2t + c_3 \end {bmatrix} = \begin {bmatrix} 0 \\ 0 \end {bmatrix} \)

In other words \( c_1t^2 - c_3t^2 = 0 \) and \(c_1t + c_2t + c_3 = 0 \). If we set \(t = 0\), then the second equation becomes \(c_3 = 0 \). The first equation then becomes \(c_1t^2 = 0\) for all \(t\), and so \(c_1 = 0 \). Thus the second equation is just \(c_2t = 0\), which means \(c_2 = 0\). So \(c_1 = c_2 = c_3 = 0 \) is the only solution and \( \vec {x}_1, \vec {x}_2 \) and \(\vec {x}_3\) are linearly independent.
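The "holds for all \(t\)" condition can be probed numerically by sampling: each sample value of \(t\) contributes two linear equations in \( (c_1, c_2, c_3) \), and if the stacked coefficient matrix has full column rank, only the trivial solution remains. A sketch (the function name `columns` is our own):

```python
import numpy as np

def columns(t):
    """Columns are x1(t), x2(t), x3(t) from the second example above."""
    return np.array([[t**2, 0.0, -t**2],
                     [t,    t,    1.0]])

# Stack the equations from several sample values of t: 6 equations, 3 unknowns.
samples = [0.0, 1.0, 2.0]
M = np.vstack([columns(t) for t in samples])

# Full column rank (3) means c1 = c2 = c3 = 0 is the only solution.
print(np.linalg.matrix_rank(M))  # 3: the functions are linearly independent
```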

The linear combination \( c_1 \vec {x}_1 + c_2 \vec {x}_2 + \dots + c_n \vec {x}_n \) can always be written as

\[ X (t) \vec {c} \]

where \( X (t) \) is the matrix with columns \(\vec {x}_1, \dots , \vec {x}_n \), and \( \vec {c} \) is the column vector with entries \( c_1, \dots , c_n \). The matrix valued function \( X (t) \) is called the *fundamental matrix*, or the fundamental matrix solution.

To solve nonhomogeneous first order linear systems, we use the same technique as we applied to solve single linear nonhomogeneous equations.

**Theorem 3.3.2.** *Let* \( \vec {x}\,' = P \vec {x} + \vec {f} \) *be a linear system of ODEs. Suppose* \( \vec {x}_p \) *is one particular solution. Then every solution can be written as*

\[ \vec {x} = \vec {x}_c + \vec {x}_p \]

*where* \( \vec {x}_c \) *is a solution to the associated homogeneous equation* \( (\vec {x}\,' = P \vec {x}) \).

So the procedure will be the same as for single equations. We find a particular solution to the nonhomogeneous equation, then we find the general solution to the associated homogeneous equation, and finally we add the two together.

Alright, suppose you have found the general solution of \( \vec {x}\,' = P \vec {x} + \vec {f} \). Now you are given an initial condition of the form \( \vec {x} (t_0) = \vec {b} \) for some constant vector \( \vec {b} \). Suppose that \( X (t) \) is the fundamental matrix solution of the associated homogeneous equation (i.e. columns of \( X (t) \) are solutions). The general solution can be written as

\[ \vec {x} (t) = X (t) \vec {c} + \vec {x}_p (t) \]

We are seeking a vector \(\vec {c} \) such that

\[ \vec {b} = \vec {x} (t_0) = X (t_0) \vec {c} + \vec {x}_p (t_0) \]

In other words, we are solving for \( \vec {c} \) the nonhomogeneous system of linear equations

\[ X(t_0) \vec {c} = \vec {b} - \vec {x}_p (t_0) \]
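This is an ordinary linear system and can be handed to any linear solver. A minimal sketch (the values happen to be those of Example 3.3.2 below, where the system is homogeneous so \( \vec{x}_p(t_0) = \vec{0} \)):

```python
import numpy as np

X_t0 = np.array([[1.0, 0.0],
                 [0.5, 1.0]])    # X(t0): fundamental matrix evaluated at t0
b = np.array([1.0, 2.0])         # initial condition x(t0) = b
xp_t0 = np.array([0.0, 0.0])     # particular solution at t0 (zero: homogeneous)

# Solve X(t0) c = b - xp(t0) for the coefficient vector c.
c = np.linalg.solve(X_t0, b - xp_t0)
print(c)  # [1.  1.5]
```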

Example 3.3.2

In § 3.1 we solved the system

\[ x_1' = x_1 \]

\[ x'_2 = x_1 - x_2 \]

with initial conditions \( x_1(0) = 1, x_2 (0) = 2\).

**Solution**

This is a homogeneous system, so \( \vec {f} (t) = \vec {0} \). We write the system and the initial conditions as

\[ \vec {x} ' = \begin {bmatrix} 1 & 0 \\ 1 & -1 \end {bmatrix} \vec{x}, ~~~~ \vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \]

We found the general solution was \( x_1= C_1 e^t \) and \(x_2=\frac{C_1}{2} e^t + C_2 e^{-t} \). Letting \( C_1=1 \) and \(C_2=0\), we obtain the solution \( \begin{bmatrix} e^t \\ \frac{1}{2}e^t \end{bmatrix} \). Letting \( C_1=0 \) and \(C_2=1\), we obtain \( \begin{bmatrix} 0 \\ e^{-t} \end{bmatrix} \). These two solutions are linearly independent, as can be seen by setting \( t=0 \), and noting that the resulting constant vectors are linearly independent. In matrix notation, the fundamental matrix solution is, therefore,

\[ X(t) = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}e^t & e^{-t} \end{bmatrix} \]

Hence to solve the initial problem we solve the equation

\[ X(0)\vec{c} = \vec{b} \]

or in other words,

\[ \begin{bmatrix} 1 & 0 \\ \frac{1}{2} & 1 \end{bmatrix} \vec{c} =\begin{bmatrix} 1 \\ 2 \end{bmatrix} \]

After a single elementary row operation we find that \( \vec{c} = \begin{bmatrix} 1 \\ \frac{3}{2} \end{bmatrix} \). Hence our solution is

\[ \vec{x}(t) = X(t) \vec{c} = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}e^{t} & e^{-t} \end{bmatrix} \begin{bmatrix} 1 \\ \frac{3}{2} \end{bmatrix} = \begin{bmatrix} e^t \\ \frac{1}{2} e^t + \frac{3}{2} e^{-t} \end{bmatrix} \]

This agrees with our previous solution.
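The whole example can be reproduced numerically. The sketch below builds \( X(t) \), solves \( X(0)\vec{c} = \vec{b} \), and checks \( X(t)\vec{c} \) against the closed form \( \left(e^t,\ \frac{1}{2}e^t + \frac{3}{2}e^{-t}\right) \) at a sample \(t\).

```python
import numpy as np

def X(t):
    """Fundamental matrix solution from Example 3.3.2."""
    return np.array([[np.exp(t),       0.0],
                     [0.5 * np.exp(t), np.exp(-t)]])

b = np.array([1.0, 2.0])         # initial condition x(0) = (1, 2)
c = np.linalg.solve(X(0.0), b)   # c = (1, 3/2)

t = 0.9
x = X(t) @ c
expected = np.array([np.exp(t), 0.5 * np.exp(t) + 1.5 * np.exp(-t)])
print(np.allclose(x, expected))  # True: matches the closed-form solution
```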

## Contributors

- Jiří Lebl (Oklahoma State University). These pages were supported by NSF grants DMS-0900885 and DMS-1362337.