10.1: Overview
The matrix exponential is a powerful means of representing the solution to a system of \(n\) linear, constant-coefficient differential equations. The initial value problem for such a system may be written
\[x′(t) = Ax(t) \nonumber\]
\[x(0) = x_{0} \nonumber\]
where \(A\) is the \(n\)-by-\(n\) matrix of coefficients. By analogy to the 1-by-1 case we might expect
\[x(t) = e^{At}x_{0} \nonumber\]
to hold. Our expectation is granted if we properly define \(e^{At}\). Do you see why simply exponentiating each element of \(At\) will not suffice?
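To see the difficulty concretely, here is a minimal numerical sketch in Python with NumPy; the helper name `expm_series`, the sample matrix, and the number of series terms are illustrative choices, not from the text. It compares entrywise exponentiation of \(At\) against the matrix exponential computed from the power series \(\sum_{k} (At)^{k}/k!\):

```python
import numpy as np

def expm_series(M, terms=30):
    """Truncated power series sum_k M^k / k! (the 'sum of powers' definition)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k      # term is now M^k / k!
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 1.0

elementwise = np.exp(A * t)      # exponentiates each entry separately
true_exp = expm_series(A * t)    # matches [[cos t, sin t], [-sin t, cos t]]

print(elementwise)
print(true_exp)
```

Note that at \(t = 0\) entrywise exponentiation gives the all-ones matrix rather than the identity, which already rules it out as a candidate for \(e^{At}\).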
There are at least 4 distinct (but of course equivalent) approaches to properly defining \(e^{At}\). The first two are natural analogs of the single variable case while the latter two make use of heavier matrix algebra machinery.
- The Matrix Exponential as a Limit of Powers
- The Matrix Exponential as a sum of Powers
- The Matrix Exponential via the Laplace Transform
- The Matrix Exponential via Eigenvalues and Eigenvectors
Please visit each of these modules to see the definition and a number of examples.
For a concrete application of these methods to a real dynamical system, please visit the Mass-Spring-Damper module.
Regardless of the approach, the matrix exponential may be shown to obey the 3 lovely properties:
- \(\frac{d}{dt}(e^{At}) = Ae^{At} = e^{At}A\)
- \(e^{A(t_{1}+t_{2})} = e^{At_{1}}e^{At_{2}}\)
- \(e^{At}\) is nonsingular and \((e^{At})^{-1} = e^{-(At)}\)
Let us confirm each of these on the suite of examples used in the submodules.
If
\[A = \begin{pmatrix} {1}&{0}\\ {0}&{2} \end{pmatrix} \nonumber\]
then
\[e^{At} = \begin{pmatrix} {e^t}&{0}\\ {0}&{e^{2t}} \end{pmatrix} \nonumber\]
- \(\frac{d}{dt}(e^{At}) = \begin{pmatrix} {e^t}&{0}\\ {0}&{2e^{2t}} \end{pmatrix} = \begin{pmatrix} {1}&{0}\\ {0}&{2} \end{pmatrix} \begin{pmatrix} {e^t}&{0}\\ {0}&{e^{2t}} \end{pmatrix}\)
- \(\begin{pmatrix} {e^{t_{1}+t_{2}}}&{0}\\ {0}&{e^{2t_{1}+2t_{2}}} \end{pmatrix} = \begin{pmatrix} {e^{t_{1}}e^{t_{2}}}&{0}\\ {0}&{e^{2t_{1}}e^{2t_{2}}} \end{pmatrix} = \begin{pmatrix} {e^{t_{1}}}&{0}\\ {0}&{e^{2t_{1}}} \end{pmatrix} \begin{pmatrix} {e^{t_{2}}}&{0}\\ {0}&{e^{2t_{2}}} \end{pmatrix}\)
- \((e^{At})^{-1} = \begin{pmatrix} {e^{-t}}&{0}\\ {0}&{e^{-(2t)}} \end{pmatrix} = e^{-(At)}\)
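These three identities can also be spot-checked numerically. Here is a minimal sketch in Python with NumPy; the sample times \(t_1, t_2\) and the finite-difference step are illustrative choices, not from the text.

```python
import numpy as np

A = np.diag([1.0, 2.0])

# Closed form from above: e^{At} = diag(e^t, e^{2t})
def eAt(t):
    return np.diag([np.exp(t), np.exp(2 * t)])

t1, t2 = 0.3, 0.7   # illustrative sample times
h = 1e-6            # finite-difference step for the derivative check

# Property 1: d/dt e^{At} = A e^{At}, checked by a central difference
deriv = (eAt(t1 + h) - eAt(t1 - h)) / (2 * h)
print(np.allclose(deriv, A @ eAt(t1)))                # True

# Property 2: e^{A(t1+t2)} = e^{At1} e^{At2}
print(np.allclose(eAt(t1 + t2), eAt(t1) @ eAt(t2)))   # True

# Property 3: (e^{At})^{-1} = e^{-At}
print(np.allclose(np.linalg.inv(eAt(t1)), eAt(-t1)))  # True
```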
If
\[A = \begin{pmatrix} {0}&{1}\\ {-1}&{0} \end{pmatrix} \nonumber\]
then
\[e^{At} = \begin{pmatrix} {\cos(t)}&{\sin(t)}\\ {-\sin(t)}&{\cos(t)} \end{pmatrix} \nonumber\]
- \(\frac{d}{dt}(e^{At}) = \begin{pmatrix} {-\sin(t)}&{\cos(t)}\\ {-\cos(t)}&{-\sin(t)} \end{pmatrix}\) and \(Ae^{At} = \begin{pmatrix} {-\sin(t)}&{\cos(t)}\\ {-\cos(t)}&{-\sin(t)} \end{pmatrix}\)
- You will recognize this statement as the angle-addition identities from trigonometry: \(\begin{pmatrix} {\cos(t_{1}+t_{2})}&{\sin(t_{1}+t_{2})}\\ {-\sin(t_{1}+t_{2})}&{\cos(t_{1}+t_{2})} \end{pmatrix} = \begin{pmatrix} {\cos(t_{1})}&{\sin(t_{1})}\\ {-\sin(t_{1})}&{\cos(t_{1})} \end{pmatrix} \begin{pmatrix} {\cos(t_{2})}&{\sin(t_{2})}\\ {-\sin(t_{2})}&{\cos(t_{2})} \end{pmatrix}\)
- \((e^{At})^{-1} = \begin{pmatrix} {\cos(t)}&{-\sin(t)}\\ {\sin(t)}&{\cos(t)} \end{pmatrix} = \begin{pmatrix} {\cos(-t)}&{\sin(-t)}\\ {-\sin(-t)}&{\cos(-t)} \end{pmatrix} = e^{-(At)}\)
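This example also ties back to the initial value problem: with this \(A\), the solution \(x(t) = e^{At}x(0)\) traces a circle, and a crude forward-Euler integration of \(x'(t) = Ax(t)\) approaches it as the step shrinks. A sketch in Python with NumPy; the initial condition, time horizon, and step count are illustrative choices, not from the text.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Closed form from above: e^{At} is a rotation matrix
def eAt(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

x0 = np.array([1.0, 0.0])   # illustrative initial condition
T, n = 2.0, 20_000          # horizon and step count (illustrative)
dt = T / n

x = x0.copy()
for _ in range(n):          # crude forward-Euler integration of x' = Ax
    x = x + dt * (A @ x)

exact = eAt(T) @ x0         # the matrix-exponential solution
print(np.max(np.abs(x - exact)))   # small, and shrinks as n grows
```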
If
\[A = \begin{pmatrix} {0}&{1}\\ {0}&{0} \end{pmatrix} \nonumber\]
then
\[e^{At} = \begin{pmatrix} {1}&{t}\\ {0}&{1} \end{pmatrix} \nonumber\]
- \(\frac{d}{dt}(e^{At}) = \begin{pmatrix} {0}&{1}\\ {0}&{0} \end{pmatrix} = Ae^{At}\)
- \(\begin{pmatrix} {1}&{t_{1}+t_{2}}\\ {0}&{1} \end{pmatrix} = \begin{pmatrix} {1}&{t_{1}}\\ {0}&{1} \end{pmatrix} \begin{pmatrix} {1}&{t_{2}}\\ {0}&{1} \end{pmatrix}\)
- \(\begin{pmatrix} {1}&{t}\\ {0}&{1} \end{pmatrix}^{-1} = \begin{pmatrix} {1}&{-t}\\ {0}&{1} \end{pmatrix} = e^{-At}\)
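Because this \(A\) is nilpotent (\(A^2 = 0\)), the power series for \(e^{At}\) terminates after two terms, which makes the closed form and its inverse easy to verify. A brief sketch in Python with NumPy; the sample time is an illustrative choice.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
t = 3.0   # illustrative sample time

print(A @ A)   # the zero matrix, so (At)^k = 0 for all k >= 2

# Hence the series e^{At} = I + At + (At)^2/2! + ... stops after two terms:
exact = np.eye(2) + A * t      # equals [[1, t], [0, 1]]

# Property 3: the inverse is e^{-At} = I - At
print(np.allclose(np.linalg.inv(exact), np.eye(2) - A * t))   # True
```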