
10.3: Solution by the Matrix Exponential

    Another interesting approach to this problem makes use of the matrix exponential. Let \(\mathrm{A}\) be a square matrix, \(t \mathrm{~A}\) the matrix \(\mathrm{A}\) multiplied by the scalar \(t\), and \(\mathrm{A}^{n}\) the matrix \(\mathrm{A}\) multiplied by itself \(n\) times. We define the matrix exponential function \(e^{t \mathrm{~A}}\) in the same way that the scalar exponential function is defined by its Taylor series. The corresponding definition is

    \[e^{t \mathrm{~A}}=\mathrm{I}+t \mathrm{~A}+\frac{t^{2} \mathrm{~A}^{2}}{2 !}+\frac{t^{3} \mathrm{~A}^{3}}{3 !}+\frac{t^{4} \mathrm{~A}^{4}}{4 !}+\ldots \nonumber \]
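
    As a quick illustration of this definition, the following sketch (assuming NumPy and SciPy are available) truncates the series after a fixed number of terms and compares the result against SciPy's `expm`; the matrix is the one used in the example later in this section, and the value of \(t\) and the number of terms are arbitrary choices.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expm_series(A, t, terms=30):
        """Approximate e^{tA} by truncating its Taylor series after `terms` terms."""
        n = A.shape[0]
        result = np.eye(n)
        term = np.eye(n)
        for k in range(1, terms):
            term = term @ (t * A) / k          # term now holds (tA)^k / k!
            result = result + term
        return result

    A = np.array([[1.0, 1.0], [4.0, 1.0]])     # the matrix used in the example below
    t = 0.5                                    # an arbitrary time
    print(np.allclose(expm_series(A, t), expm(t * A)))   # expected: True
    ```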

    We can differentiate \(e^{t \mathrm{~A}}\) with respect to \(t\) and obtain

    \[\begin{aligned} \frac{d}{d t} e^{t \mathrm{~A}} &=\mathrm{A}+t \mathrm{~A}^{2}+\frac{t^{2} \mathrm{~A}^{3}}{2 !}+\frac{t^{3} \mathrm{~A}^{4}}{3 !}+\ldots \\ &=\mathrm{A}\left(\mathrm{I}+t \mathrm{~A}+\frac{t^{2} \mathrm{~A}^{2}}{2 !}+\frac{t^{3} \mathrm{~A}^{3}}{3 !}+\ldots\right) \\ &=\mathrm{A} e^{t \mathrm{~A}} \end{aligned} \nonumber \]

    as one would expect from differentiating the exponential function. We can therefore formally write the solution of

    \[\dot{\mathrm{x}}=\mathrm{Ax} \nonumber \]

    as

    \[\mathrm{x}(t)=e^{t \mathrm{~A}} \mathrm{x}(0) \nonumber \]
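
    Both the differentiation identity and this formal solution lend themselves to a quick numerical sanity check. The sketch below is illustrative only (the time, step size, initial condition, and tolerances are arbitrary choices, and the matrix is the one from the example later in this section): it approximates \(\frac{d}{dt} e^{t \mathrm{~A}}\) by a central difference and compares \(e^{t \mathrm{~A}} \mathrm{x}(0)\) with the output of a standard ODE integrator.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[1.0, 1.0], [4.0, 1.0]])   # matrix from the example below
    t, h = 0.3, 1e-6                         # arbitrary time and finite-difference step

    # Check d/dt e^{tA} = A e^{tA} with a central difference.
    deriv_fd = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
    print(np.allclose(deriv_fd, A @ expm(t * A), atol=1e-5))        # expected: True

    # Check x(t) = e^{tA} x(0) against a numerical integration of x' = Ax.
    x0 = np.array([1.0, 0.0])                # an arbitrary initial condition x(0)
    sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
    print(np.allclose(expm(t * A) @ x0, sol.y[:, -1]))              # expected: True
    ```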

    If the matrix \(\mathrm{A}\) is diagonalizable, so that \(\mathrm{A}=\mathrm{S} \Lambda \mathrm{S}^{-1}\), then observe that

    \[\begin{aligned} e^{t \mathrm{~A}} &=e^{t \mathrm{~S} \Lambda \mathrm{S}^{-1}} \\ &=\mathrm{I}+t \mathrm{~S} \Lambda \mathrm{S}^{-1}+\frac{t^{2}\left(\mathrm{~S} \Lambda \mathrm{S}^{-1}\right)^{2}}{2 !}+\frac{t^{3}\left(\mathrm{~S} \Lambda \mathrm{S}^{-1}\right)^{3}}{3 !}+\ldots \\ &=\mathrm{I}+t \mathrm{~S} \Lambda \mathrm{S}^{-1}+\frac{t^{2} \mathrm{~S} \Lambda^{2} \mathrm{~S}^{-1}}{2 !}+\frac{t^{3} \mathrm{~S} \Lambda^{3} \mathrm{~S}^{-1}}{3 !}+\ldots \\ &=\mathrm{S}\left(\mathrm{I}+t \Lambda+\frac{t^{2} \Lambda^{2}}{2 !}+\frac{t^{3} \Lambda^{3}}{3 !}+\ldots\right) \mathrm{S}^{-1} \\ &=\mathrm{S} e^{t \Lambda} \mathrm{S}^{-1} \end{aligned} \nonumber \]

    If \(\Lambda\) is a diagonal matrix with diagonal elements \(\lambda_{1}, \lambda_{2}\), etc., then the matrix exponential \(e^{t \Lambda}\) is also a diagonal matrix with diagonal elements given by \(e^{\lambda_{1} t}, e^{\lambda_{2} t}\), etc. We can now use the matrix exponential to solve a system of linear differential equations.
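
    Before turning to the example, the identity \(e^{t \mathrm{~A}}=\mathrm{S} e^{t \Lambda} \mathrm{S}^{-1}\) can be checked numerically. The sketch below is an illustration (the value of \(t\) is arbitrary); `numpy.linalg.eig` returns eigenvectors that may be scaled differently from the hand-computed ones, but the product \(\mathrm{S} e^{t \Lambda} \mathrm{S}^{-1}\) is unaffected by that scaling.

    ```python
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0], [4.0, 1.0]])   # matrix from the example below
    t = 0.7                                  # an arbitrary time

    lam, S = np.linalg.eig(A)                # A = S Lambda S^{-1}; columns of S are eigenvectors
    e_tLambda = np.diag(np.exp(lam * t))     # e^{t Lambda} is diagonal with entries e^{lambda_i t}
    print(np.allclose(S @ e_tLambda @ np.linalg.inv(S), expm(t * A)))   # expected: True
    ```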

    Example: Solve the previous example

    \[\frac{d}{d t}\left(\begin{array}{l} x_{1} \\ x_{2} \end{array}\right)=\left(\begin{array}{ll} 1 & 1 \\ 4 & 1 \end{array}\right)\left(\begin{array}{l} x_{1} \\ x_{2} \end{array}\right) \nonumber \]

    by matrix exponentiation.

    We know that

    \[\Lambda=\left(\begin{array}{rr} 3 & 0 \\ 0 & -1 \end{array}\right), \quad S=\left(\begin{array}{rr} 1 & 1 \\ 2 & -2 \end{array}\right), \quad S^{-1}=-\frac{1}{4}\left(\begin{array}{rr} -2 & -1 \\ -2 & 1 \end{array}\right) \text {. } \nonumber \]
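
    These hand-computed factors are easy to verify directly; the following minimal check assumes only NumPy.

    ```python
    import numpy as np

    A = np.array([[1.0, 1.0], [4.0, 1.0]])
    Lam = np.diag([3.0, -1.0])
    S = np.array([[1.0, 1.0], [2.0, -2.0]])
    S_inv = -0.25 * np.array([[-2.0, -1.0], [-2.0, 1.0]])

    print(np.allclose(S @ S_inv, np.eye(2)))   # expected: True
    print(np.allclose(S @ Lam @ S_inv, A))     # expected: True
    ```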

    The solution to the system of differential equations is then given by

    \[\begin{aligned} \mathrm{x}(t) &=e^{t \mathrm{~A}} \mathrm{x}(0) \\ &=\mathrm{S} e^{t \Lambda} \mathrm{S}^{-1} \mathrm{x}(0) \\ &=-\frac{1}{4}\left(\begin{array}{rr} 1 & 1 \\ 2 & -2 \end{array}\right)\left(\begin{array}{rr} e^{3 t} & 0 \\ 0 & e^{-t} \end{array}\right)\left(\begin{array}{rr} -2 & -1 \\ -2 & 1 \end{array}\right) \mathrm{x}(0) . \end{aligned} \nonumber \]

    Successive matrix multiplication results in

    \[\begin{aligned} &x_{1}(t)=\frac{1}{4}\left(2 x_{1}(0)+x_{2}(0)\right) e^{3 t}+\frac{1}{4}\left(2 x_{1}(0)-x_{2}(0)\right) e^{-t}, \\ &x_{2}(t)=\frac{1}{2}\left(2 x_{1}(0)+x_{2}(0)\right) e^{3 t}-\frac{1}{2}\left(2 x_{1}(0)-x_{2}(0)\right) e^{-t}, \end{aligned} \nonumber \]

    which is the same solution as previously found, except that the free coefficients \(c_{1}\) and \(c_{2}\) are now expressed in terms of the initial values \(x_{1}(0)\) and \(x_{2}(0)\).
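
    As a final check, the closed-form solution above can be compared with a direct evaluation of \(e^{t \mathrm{~A}} \mathrm{x}(0)\); in the sketch below the initial values and the time are arbitrary choices made for illustration.

    ```python
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0], [4.0, 1.0]])
    x0 = np.array([2.0, -1.0])               # arbitrary initial values x1(0), x2(0)
    t = 0.25                                 # an arbitrary time

    # closed-form solution derived above
    x1 = 0.25 * (2 * x0[0] + x0[1]) * np.exp(3 * t) + 0.25 * (2 * x0[0] - x0[1]) * np.exp(-t)
    x2 = 0.50 * (2 * x0[0] + x0[1]) * np.exp(3 * t) - 0.50 * (2 * x0[0] - x0[1]) * np.exp(-t)

    print(np.allclose([x1, x2], expm(t * A) @ x0))   # expected: True
    ```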


    This page titled 10.3: Solution by the Matrix Exponential is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Jeffrey R. Chasnov via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.