3.5: An Application to Systems of Differential Equations
A function \(f\) of a real variable is said to be differentiable if its derivative exists, and in this case we let \(f^{\prime}\) denote the derivative. If \(f\) and \(g\) are differentiable functions, a system
\[\begin{aligned} f^{\prime} &= 3f + 5g \\ g^{\prime} &= -f + 2g \end{aligned} \nonumber \]
is called a system of first order differential equations, or a differential system for short. Solving many practical problems often comes down to finding sets of functions that satisfy such a system (often involving more than two functions). In this section we show how diagonalization can help. Of course an acquaintance with calculus is required.
The Exponential Function
The simplest differential system is the following single equation:
\[\label{eq:diffeq} f^{\prime} = af \mbox{ where } a \mbox{ is constant} \]
It is easily verified that \(f(x) = e ^{ax}\) is one solution; in fact, Equation [eq:diffeq] is simple enough for us to find all solutions. Suppose that \(f\) is any solution, so that \(f^{\prime}(x) = af(x)\) for all \(x\). Consider the new function \(g\) given by \(g(x) = f(x)e^{-ax}\). Then the product rule of differentiation gives
\[\begin{aligned} g^{\prime}(x) & = f(x) \left[ -ae^{-ax} \right] + f^{\prime}(x) e^{-ax} \\ &= -af(x)e^{-ax} + \left[ af(x)\right] e^{-ax} \\ &= 0\end{aligned} \nonumber \]
for all \(x\). Hence the function \(g(x)\) has zero derivative and so must be a constant, say \(g(x) = c\). Thus \(c = g(x) = f(x)e^{-ax}\), that is
\[f(x) = ce^{ax} \nonumber \]
In other words, every solution \(f(x)\) of Equation [eq:diffeq] is just a scalar multiple of \(e^{ax}\). Since every such scalar multiple is easily seen to be a solution of Equation [eq:diffeq], we have proved
Theorem [thm:010427]. The set of solutions to \(f^{\prime}= af\) is \(\{ce^{ax} \mid c \mbox{ any constant}\} = \mathbb{R} e^{ax}\).
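Theorem [thm:010427] is easy to test numerically. The Python sketch below (with arbitrary made-up values for \(a\) and \(c\)) checks that a finite-difference estimate of the derivative of \(f(x) = ce^{ax}\) agrees with \(af(x)\):

```python
import numpy as np

# Sanity check of the theorem: f(x) = c*e^(a*x) should satisfy f' = a*f.
# The constants a and c below are arbitrary illustrative values.
a, c = 1.7, 2.5
x = np.linspace(0.0, 1.0, 1001)
f = c * np.exp(a * x)

# Central-difference estimate of f' at the interior grid points.
h = x[1] - x[0]
f_prime = (f[2:] - f[:-2]) / (2 * h)

# f' and a*f agree up to the O(h^2) discretization error.
max_err = np.max(np.abs(f_prime - a * f[1:-1]))
print(max_err)
```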
Remarkably, this result together with diagonalization enables us to solve a wide variety of differential systems.
Example [exa:010433]. Assume that the number \(n(t)\) of bacteria in a culture at time \(t\) has the property that the rate of change of \(n\) is proportional to \(n\) itself. If there are \(n_{0}\) bacteria present when \(t = 0\), find the number at time \(t\).
Solution. Let \(k\) denote the proportionality constant. The rate of change of \(n(t)\) is its time-derivative \(n^{\prime}(t)\), so the given relationship is \(n^{\prime}(t) = kn(t)\). Thus Theorem [thm:010427] shows that all solutions \(n\) are given by \(n(t) = ce^{kt}\), where \(c\) is a constant. In this case, the constant \(c\) is determined by the requirement that there be \(n_{0}\) bacteria present when \(t = 0\). Hence \(n_{0} = n(0) = ce^{0} = c\), so
\[n(t) = n_0 e^{kt} \nonumber \]
gives the number at time \(t\). Of course the constant \(k\) depends on the strain of bacteria.
The condition that \(n(0) = n_{0}\) in Example [exa:010433] is called an initial condition or a boundary condition and serves to select one solution from the available solutions.
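In code, Example [exa:010433] amounts to one line; the values of \(n_0\) and \(k\) below are invented for illustration, and the initial condition \(n(0) = n_0\) is what selects this particular solution:

```python
import numpy as np

# Growth law n(t) = n0 * e^(k*t); n0 and k are made-up illustrative values.
n0, k = 100.0, 0.3

def n(t):
    return n0 * np.exp(k * t)

# The initial condition selects this solution: n(0) = n0.
print(n(0.0))

# The defining property n'(t) = k*n(t), checked by a central difference.
t, h = 2.0, 1e-6
rate = (n(t + h) - n(t - h)) / (2 * h)
print(rate - k * n(t))  # close to zero
```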
General Differential Systems
Solving a variety of problems, particularly in science and engineering, comes down to solving a system of linear differential equations. Diagonalization enters into this as follows. The general problem is to find differentiable functions \(f_{1}, f_{2}, \dots , f_{n}\) that satisfy a system of equations of the form
\[ \begin{array}{ccccccc} f_1^{\prime} & = & a_{11}f_1 &+& a_{12}f_2 & + \cdots + & a_{1n}f_n \\ f_2^{\prime} & = & a_{21}f_1 &+& a_{22}f_2 & + \cdots + & a_{2n}f_n \\ \vdots & & \vdots & & \vdots & & \vdots \\ f_n^{\prime} & = & a_{n1}f_1 &+& a_{n2}f_2 & + \cdots + & a_{nn}f_n \\ \end{array} \nonumber \]
where the \(a_{ij}\) are constants. This is called a linear system of differential equations or simply a differential system. The first step is to put it in matrix form. Write
\[\mathbf{f} = \left[ \begin{array}{c} f_1 \\ f_2 \\ \vdots \\ f_n \end{array}\right] \quad \mathbf{f}^{\prime} = \left[ \begin{array}{c} f_1^{\prime} \\ f_2^{\prime} \\ \vdots \\ f_n ^{\prime} \end{array}\right] \quad A = \left[ \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{array}\right] \nonumber \]
Then the system can be written compactly using matrix multiplication:
\[\mathbf{f}^{\prime} = A \mathbf{f} \nonumber \]
Hence, given the matrix \(A\), the problem is to find a column \(\mathbf{f}\) of differentiable functions that satisfies this condition. This can be done if \(A\) is diagonalizable. Here is an example.
Example [exa:010460]. Find a solution to the system
\[\begin{array}{ccr} f_1^{\prime} &= &f_1 + 3f_2 \\ f_2^{\prime} &= &2f_1 + 2f_2 \end{array} \nonumber \]
that satisfies \(f_{1}(0) = 0\), \(f_{2}(0) = 5\).
Solution. This is \(\mathbf{f}^{\prime} = A\mathbf{f}\), where \(\mathbf{f} = \left[ \begin{array}{c} f_1\\ f_2\end{array}\right]\) and \(A = \left[ \begin{array}{rr} 1 & 3 \\ 2 & 2 \end{array}\right]\). The reader can verify that \(c_{A}(x) = (x - 4)(x + 1)\), and that \(\mathbf{x}_1 = \left[ \begin{array}{c} 1\\ 1\end{array}\right]\) and \(\mathbf{x}_2 = \left[ \begin{array}{r} 3\\ -2\end{array}\right]\) are eigenvectors corresponding to the eigenvalues \(4\) and \(-1\), respectively. Hence the diagonalization algorithm gives \(P^{-1}AP = \left[ \begin{array}{rr} 4 & 0 \\ 0 & -1 \end{array}\right]\), where \(P = \left[ \begin{array}{cc} \mathbf{x}_1 & \mathbf{x}_2 \end{array}\right] = \left[ \begin{array}{rr} 1 & 3 \\ 1 & -2 \end{array}\right]\). Now consider new functions \(g_{1}\) and \(g_{2}\) given by \(\mathbf{f} = P\mathbf{g}\) (equivalently, \(\mathbf{g} = P^{-1} \mathbf{f}\)), where \(\mathbf{g} = \left[ \begin{array}{c} g_1\\ g_2\end{array}\right]\). Then
\[\left[ \begin{array}{c} f_1\\ f_2\end{array}\right] = \left[ \begin{array}{rr} 1 & 3 \\ 1 & -2 \end{array}\right] \left[ \begin{array}{c} g_1\\ g_2 \end{array}\right] \quad \mbox{ that is, } \begin{array}{l} f_1 = g_1 + 3g_2 \\ f_2 = g_1 - 2g_2 \end{array} \nonumber \]
Hence \(f_1^{\prime} = g_1^{\prime} + 3g_2^{\prime}\) and \(f_2^{\prime} = g_1^{\prime}- 2g_2^{\prime}\) so that
\[\mathbf{f}^{\prime} = \left[ \begin{array}{c} f_1^{\prime} \\ f_2^{\prime} \end{array} \right] = \left[ \begin{array}{rr} 1 & 3 \\ 1 & -2 \end{array}\right] \left[ \begin{array}{c} g_1^{\prime} \\ g_2^{\prime} \end{array} \right] = P \mathbf{g}^{\prime} \nonumber \]
If this is substituted in \(\mathbf{f}^{\prime} = A\mathbf{f}\), the result is \(P\mathbf{g}^{\prime} = AP\mathbf{g}\), whence
\[\mathbf{g}^{\prime} = P^{-1}AP \mathbf{g} \nonumber \]
But this means that
\[\left[ \begin{array}{c} g_1^{\prime} \\ g_2^{\prime} \end{array} \right] = \left[ \begin{array}{rr} 4 & 0 \\ 0 & -1 \end{array}\right] \left[ \begin{array}{c} g_1 \\ g_2 \end{array} \right], \quad \mbox{ so } \begin{array}{l} g_1^{\prime} = 4g_1 \\ g_2^{\prime} = -g_2 \end{array} \nonumber \]
Hence Theorem [thm:010427] gives \(g_{1}(x) = ce^{4x}\), \(g_{2}(x) = de^{-x}\), where \(c\) and \(d\) are constants. Finally, then,
\[\left[ \begin{array}{c} f_1(x) \\ f_2(x) \end{array}\right] = P \left[ \begin{array}{c} g_1(x) \\ g_2(x) \end{array}\right] = \left[ \begin{array}{rr} 1 & 3 \\ 1 & -2 \end{array}\right] \left[ \begin{array}{r} ce^{4x} \\ de^{-x} \end{array}\right] = \left[ \begin{array}{r} ce^{4x} + 3de^{-x}\\ ce^{4x} -2de^{-x} \end{array}\right] \nonumber \]
so the general solution is
\[\begin{array}{lrr} f_1(x) & = &ce^{4x} + 3de^{-x} \\ f_2(x) & = &ce^{4x} - 2de^{-x} \end{array} \quad c \mbox{ and } d \mbox{ constants} \nonumber \]
It is worth observing that this can be written in matrix form as
\[\left[ \begin{array}{c} f_1(x) \\ f_2(x) \end{array}\right] = c \left[ \begin{array}{c} 1 \\ 1 \end{array}\right] e^{4x} + d \left[ \begin{array}{r} 3 \\ -2 \end{array}\right] e^{-x} \nonumber \]
That is,
\[\mathbf{f}(x) = c\mathbf{x}_1 e^{4x} + d \mathbf{x}_2 e^{-x} \nonumber \]
This form of the solution works more generally, as will be shown.
Finally, the requirement that \(f_{1}(0) = 0\) and \(f_{2}(0) = 5\) in this example determines the constants \(c\) and \(d\):
\[\begin{aligned} 0 & = f_1(0) = ce^0 + 3de^0 = c+3d \\ 5 & = f_2(0) = ce^0 - 2de^0 = c-2d\end{aligned} \nonumber \]
These equations give \(c = 3\) and \(d = -1\), so
\[\begin{aligned} f_1(x) &= 3e^{4x}- 3e^{-x} \\ f_2(x) &= 3e^{4x} +2e^{-x}\end{aligned} \nonumber \]
satisfy all the requirements.
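The computations in Example [exa:010460] can be double-checked numerically. The sketch below verifies the diagonalization \(P^{-1}AP\) and confirms, by finite differences, that the specific solution satisfies \(\mathbf{f}^{\prime} = A\mathbf{f}\) together with the boundary conditions:

```python
import numpy as np

# Matrix and eigenvector matrix from the example.
A = np.array([[1.0, 3.0],
              [2.0, 2.0]])
P = np.array([[1.0, 3.0],
              [1.0, -2.0]])  # columns are the eigenvectors x1, x2

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # numerically diag(4, -1)

# Specific solution f1 = 3e^{4x} - 3e^{-x}, f2 = 3e^{4x} + 2e^{-x}.
def f(x):
    return np.array([3 * np.exp(4 * x) - 3 * np.exp(-x),
                     3 * np.exp(4 * x) + 2 * np.exp(-x)])

# f' = A f at sample points (central differences), plus f(0) = (0, 5).
h = 1e-6
for x in [0.0, 0.5, 1.0]:
    fp = (f(x + h) - f(x - h)) / (2 * h)
    assert np.allclose(fp, A @ f(x), rtol=1e-4)
print(f(0.0))
```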
The technique in this example works in general.
Theorem [thm:010514]. Consider a linear system
\[\mathbf{f}^{\prime} = A \mathbf{f} \nonumber \]
of differential equations, where \(A\) is an \(n \times n\) diagonalizable matrix. Let \(P^{-1}AP\) be diagonal, where \(P\) is given in terms of its columns
\[P = \left[ \mathbf{x}_1,\mathbf{x}_2, \cdots, \mathbf{x}_n \right] \nonumber \]
and \(\{\mathbf{x}_{1}, \mathbf{x}_{2}, \dots , \mathbf{x}_{n}\}\) are eigenvectors of \(A\). If \(\mathbf{x}_{i}\) corresponds to the eigenvalue \(\lambda_{i}\) for each \(i\), then every solution \(\mathbf{f}\) of \(\mathbf{f}^{\prime} = A\mathbf{f}\) has the form
\[\mathbf{f}(x) = c_1\mathbf{x}_1 e^{\lambda_1x} +c_2\mathbf{x}_2 e^{\lambda_2x} + \cdots + c_n\mathbf{x}_n e^{\lambda_nx} \nonumber \]
where \(c_{1}, c_{2}, \dots , c_{n}\) are arbitrary constants.
Proof. By Theorem [thm:009214], the matrix \(P = \left[ \begin{array}{cccc} \mathbf{x}_{1} & \mathbf{x}_{2} & \dots & \mathbf{x}_{n} \end{array} \right]\) is invertible and
\[P^{-1}AP = \left[ \begin{array}{cccc} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{array}\right] \nonumber \]
As in Example [exa:010460], write \(\mathbf{f} = \left[ \begin{array}{c} f_1\\ f_2\\ \vdots \\ f_n \end{array}\right]\) and define \(\mathbf{g} = \left[ \begin{array}{c} g_1\\ g_2\\ \vdots \\ g_n \end{array}\right]\) by \(\mathbf{g} = P^{-1}\mathbf{f}\); equivalently, \(\mathbf{f} = P\mathbf{g}\). If \(P = \left[ p_{ij}\right]\), this gives
\[f_i = p_{i1}g_1 + p_{i2}g_2 + \cdots + p_{in}g_n \nonumber \]
Since the \(p_{ij}\) are constants, differentiation preserves this relationship:
\[f_i^{\prime} = p_{i1}g_1^{\prime} + p_{i2}g_2^{\prime} + \cdots + p_{in}g_n^{\prime} \nonumber \]
so \(\mathbf{f}^{\prime} = P\mathbf{g}^{\prime}\). Substituting this into \(\mathbf{f}^{\prime} = A\mathbf{f}\) gives \(P\mathbf{g}^{\prime} = AP\mathbf{g}\). But then left multiplication by \(P^{-1}\) gives \(\mathbf{g}^{\prime} = P^{-1}AP\mathbf{g}\), so the original system of equations \(\mathbf{f}^{\prime} = A\mathbf{f}\) for \(\mathbf{f}\) becomes much simpler in terms of \(\mathbf{g}\):
\[\left[ \begin{array}{c} g_1^{\prime}\\ g_2^{\prime}\\ \vdots \\ g_n^{\prime} \end{array}\right] = \left[ \begin{array}{cccc} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{array}\right] \left[ \begin{array}{c} g_1\\ g_2\\ \vdots \\ g_n \end{array}\right] \nonumber \]
Hence \(g_{i}^{\prime} = \lambda_{i}g_{i}\) holds for each \(i\), and Theorem [thm:010427] implies that the only solutions are
\[g_i(x) = c_i e^{\lambda_i x} \quad c_i \mbox{ some constant} \nonumber \]
Then the relationship \(\mathbf{f} = P\mathbf{g}\) gives the functions \(f_{1}, f_{2}, \dots , f_{n}\) as follows:
\[\mathbf{f}(x) = \left[ \mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n \right] \left[ \begin{array}{c} c_1 e^{\lambda_1 x}\\ c_2 e^{\lambda_2 x}\\ \vdots \\ c_n e^{\lambda_n x}\\ \end{array}\right] = c_1 \mathbf{x}_1 e^{\lambda_1 x} + c_2 \mathbf{x}_2 e^{\lambda_2 x} +\cdots + c_n \mathbf{x}_n e^{\lambda_n x} \nonumber \]
This is what we wanted.
The theorem shows that every solution to \(\mathbf{f}^{\prime} = A\mathbf{f}\) is a linear combination
\[\mathbf{f}(x) = c_1 \mathbf{x}_1 e^{\lambda_1 x} + c_2 \mathbf{x}_2 e^{\lambda_2 x} +\cdots + c_n \mathbf{x}_n e^{\lambda_n x} \nonumber \]
where the coefficients \(c_{i}\) are arbitrary. Hence this is called the general solution to the system of differential equations. In most cases the solution functions \(f_{i}(x)\) are required to satisfy boundary conditions, often of the form \(f_{i}(a) = b_{i}\), where \(a, b_{1}, \dots , b_{n}\) are prescribed numbers. These conditions determine the constants \(c_{i}\). The following example illustrates this and displays a situation where one eigenvalue has multiplicity greater than 1.
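Theorem [thm:010514], together with boundary conditions at \(x = 0\), translates into a short algorithm: diagonalize \(A\), solve \(P\mathbf{c} = \mathbf{f}(0)\) for the constants, and assemble the linear combination. The Python sketch below illustrates this; the function name is ours, and it assumes \(A\) is diagonalizable with real eigenvalues.

```python
import numpy as np

# Sketch of the theorem as an algorithm (assumes A is diagonalizable with
# real eigenvalues; a robust version would verify this).
def solve_linear_system(A, b):
    """Return a function x -> f(x) with f' = A f and f(0) = b."""
    eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors of A
    c = np.linalg.solve(P, b)      # f(0) = P c  =>  c = P^{-1} b
    return lambda x: P @ (c * np.exp(eigvals * x))

# Reusing the system from the first worked example:
A = np.array([[1.0, 3.0], [2.0, 2.0]])
f = solve_linear_system(A, np.array([0.0, 5.0]))
print(f(0.0))  # approximately (0, 5)
```

Note that `np.linalg.eig` returns unit eigenvectors in an unspecified order; the constants \(c_i\) absorb the rescaling, so the assembled solution is unchanged.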
Example [exa:010568]. Find the general solution to the system
\[ \begin{array}{rrrrrrr} f_1^{\prime} & = & 5f_1 & + & 8f_2 & + & 16f_3 \\ f_2^{\prime} & = & 4f_1 & + & f_2 & + & 8f_3 \\ f_3^{\prime} & = & -4f_1 & - & 4f_2 & - & 11f_3 \end{array} \nonumber \]
Then find a solution satisfying the boundary conditions \(f_{1}(0) = f_{2}(0) = f_{3}(0) = 1\).
Solution. The system has the form \(\mathbf{f}^{\prime} = A\mathbf{f}\), where \(A = \left[ \begin{array}{rrr} 5 & 8 & 16 \\ 4 & 1 & 8 \\ -4 & -4 & -11 \end{array}\right]\). In this case \(c_{A}(x) = (x + 3)^2(x - 1)\) and eigenvectors corresponding to the eigenvalues \(-3\), \(-3\), and \(1\) are, respectively,
\[\mathbf{x}_1 = \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array}\right] \quad \mathbf{x}_2 = \left[ \begin{array}{r} -2 \\ 0 \\ 1 \end{array}\right] \quad \mathbf{x}_3 = \left[ \begin{array}{r} 2 \\ 1 \\ -1 \end{array}\right] \nonumber \]
Hence, by Theorem [thm:010514], the general solution is
\[\mathbf{f}(x) = c_1\left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array}\right] e^{-3x} + c_2\left[ \begin{array}{r} -2 \\ 0 \\ 1 \end{array}\right] e^{-3x} + c_3\left[ \begin{array}{r} 2 \\ 1 \\ -1 \end{array}\right] e^x, \quad c_i \mbox{ constants.} \nonumber \]
The boundary conditions \(f_{1}(0) = f_{2}(0) = f_{3}(0) = 1\) determine the constants \(c_{i}\):
\[\begin{aligned} \left[ \begin{array}{c} 1 \\ 1 \\ 1 \end{array}\right] = \mathbf{f}(0) &= c_1\left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array}\right] + c_2\left[ \begin{array}{r} -2 \\ 0 \\ 1 \end{array}\right] + c_3\left[ \begin{array}{r} 2 \\ 1 \\ -1 \end{array}\right] \\ &= \left[ \begin{array}{rrr} -1 & -2 & 2 \\ 1 & 0 & 1 \\ 0 & 1 & -1 \end{array}\right] \left[ \begin{array}{c} c_1 \\ c_2 \\ c_3 \end{array}\right]\end{aligned} \nonumber \]
The solution is \(c_{1} = -3\), \(c_{2} = 5\), \(c_{3} = 4\), so the required specific solution is
\[ \begin{array}{rrrr} f_1(x) = & -7e^{-3x} & + & 8e^x \\ f_2(x) = & -3e^{-3x} & + & 4e^x \\ f_3(x) = & 5e^{-3x} & - & 4e^x \end{array} \nonumber \]
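As a check on the arithmetic in Example [exa:010568], the claimed eigenpairs and the constants \(c_i\) can be verified numerically:

```python
import numpy as np

# Data from the example.
A = np.array([[ 5.0,  8.0,  16.0],
              [ 4.0,  1.0,   8.0],
              [-4.0, -4.0, -11.0]])
x1 = np.array([-1.0, 1.0, 0.0])   # eigenvalue -3
x2 = np.array([-2.0, 0.0, 1.0])   # eigenvalue -3
x3 = np.array([ 2.0, 1.0, -1.0])  # eigenvalue 1

# Check A x = lambda x for each claimed eigenpair.
print(np.allclose(A @ x1, -3 * x1),
      np.allclose(A @ x2, -3 * x2),
      np.allclose(A @ x3, x3))

# Solve P c = f(0) = (1, 1, 1) for the constants c1, c2, c3.
P = np.column_stack([x1, x2, x3])
c = np.linalg.solve(P, np.ones(3))
print(c)  # should match c1 = -3, c2 = 5, c3 = 4
```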


