
# 5.2: The Laplace Transform


The Laplace Transform is typically credited with transforming dynamical problems into static problems. Recall that the Laplace Transform of the function $$h$$ is

$\mathscr{L}(h)(s) \equiv \int_{0}^{\infty} e^{-st} h(t) dt \nonumber$

MATLAB is very adept at such things. For example:

The Laplace Transform in MATLAB

	>> syms t
	>> laplace(exp(t))

	ans = 1/(s-1)

	>> laplace(t*exp(-t))

	ans = 1/(s+1)^2
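These two transforms can also be spot-checked without a symbolic toolbox by truncating the defining integral. The following is an illustrative Python sketch (the interval length `T` and step count `n` are arbitrary accuracy choices, not part of the text's MATLAB session):

```python
import math

def laplace_numeric(h, s, T=60.0, n=20000):
    """Approximate (L h)(s) = integral of e^{-st} h(t) over [0, infinity)
    by Simpson's rule on the truncated interval [0, T]."""
    dt = T / n
    total = h(0.0) + math.exp(-s * T) * h(T)
    for k in range(1, n):
        t = k * dt
        total += (4 if k % 2 else 2) * math.exp(-s * t) * h(t)
    return total * dt / 3.0

# L{e^t}(s) = 1/(s-1): at s = 2 the value should be 1
print(laplace_numeric(math.exp, 2.0))
# L{t e^{-t}}(s) = 1/(s+1)^2: at s = 1 the value should be 1/4
print(laplace_numeric(lambda t: t * math.exp(-t), 1.0))
```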


The Laplace Transform of a matrix of functions is simply the matrix of Laplace transforms of the individual elements.

Example: Laplace Transform of a matrix of functions

$\mathscr{L} (\begin{pmatrix} {e^{t}}\\ {te^{-t}} \end{pmatrix}) = \begin{pmatrix} {\frac{1}{s-1}}\\ {\frac{1}{(s+1)^2}} \end{pmatrix} \nonumber$

Now, in preparing to apply the Laplace transform to our equation from the dynamic Strang quartet module:

$\textbf{x}' = B \textbf{x}+\textbf{g}$

we write it as

$\mathscr{L} \left(\frac{d \textbf{x}}{dt}\right) = \mathscr{L}(B \textbf{x}+\textbf{g})$

and so must determine how $$\mathscr{L}$$ acts on derivatives and sums. With respect to the latter it follows directly from the definition that

\begin{align*} \mathscr{L}(B \textbf{x}+\textbf{g}) &= \mathscr{L}(B \textbf{x})+\mathscr{L}(\textbf{g}) \\[4pt] &= B \mathscr{L}(\textbf{x})+\mathscr{L}(\textbf{g}) \end{align*}

Regarding its effect on the derivative we find, on integrating by parts, that

\begin{align} \mathscr{L} \left(\frac{d \textbf{x}}{dt}\right) &= \int_{0}^{\infty} e^{-st} \frac{d \textbf{x}(t)}{dt} dt \\[4pt] &= \left. \textbf{x}(t) e^{-st} \right|_{0}^{\infty}+s\int_{0}^{\infty} e^{-st} \textbf{x}(t) dt \end{align}

Supposing that $$\textbf{x}$$ and $$s$$ are such that $$\textbf{x}(t) e^{-st} \rightarrow 0$$ as $$t \rightarrow \infty$$, we arrive at

$\mathscr{L} \left(\frac{d \textbf{x}}{dt}\right) = s\mathscr{L} (\textbf{x})-\textbf{x}(0) \nonumber$
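This derivative rule is easy to test numerically. The Python sketch below (with an arbitrary truncation of the integral, our own choice) compares $$\mathscr{L}(h')$$ against $$s \mathscr{L}(h) - h(0)$$ for the scalar function $$h(t) = te^{-t}$$:

```python
import math

def laplace_numeric(h, s, T=60.0, n=20000):
    # Simpson's rule for the truncated integral of e^{-st} h(t) over [0, T]
    dt = T / n
    total = h(0.0) + math.exp(-s * T) * h(T)
    for k in range(1, n):
        t = k * dt
        total += (4 if k % 2 else 2) * math.exp(-s * t) * h(t)
    return total * dt / 3.0

def h(t):        # h(t) = t e^{-t}, so h(0) = 0
    return t * math.exp(-t)

def dh(t):       # h'(t) = (1 - t) e^{-t}
    return (1.0 - t) * math.exp(-t)

s = 2.0
lhs = laplace_numeric(dh, s)             # L{h'}(s)
rhs = s * laplace_numeric(h, s) - h(0)   # s L{h}(s) - h(0)
print(lhs, rhs)  # both close to 2/9
```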

Now, upon substituting the linearity rule and the derivative rule into the transformed equation we find

$s \mathscr{L} (\textbf{x})- \textbf{x}(0) = B \mathscr{L}(\textbf{x})+\mathscr{L}(\textbf{g}) \nonumber$

which is easily recognized to be a linear system for $$\mathscr{L}(\textbf{x})$$

$(sI-B) \mathscr{L}(\textbf{x}) = \mathscr{L}(\textbf{g})+\textbf{x}(0) \nonumber$

The only thing that distinguishes this system from those encountered since our first brush with such systems is the presence of the complex variable $$s$$. This complicates the mechanical steps of Gaussian Elimination or the Gauss-Jordan Method, but the methods indeed apply without change. Taking up the latter method, we write

$\mathscr{L}(\textbf{x}) = (sI-B)^{-1} (\mathscr{L}(\textbf{g})+\textbf{x}(0)) \nonumber$
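In the scalar case $$x' = bx$$ with $$x(0) = x_{0}$$ and $$g = 0$$, this formula reads $$\mathscr{L}(x)(s) = x_{0}/(s-b)$$. The Python sketch below checks this against the known solution for one (arbitrarily chosen) set of parameters:

```python
import math

# Scalar instance of the resolvent formula: for x' = b x, x(0) = x0, g = 0,
# it gives L(x)(s) = x0 / (s - b).  With b = -1 and x0 = 1 the solution is
# x(t) = e^{-t}, whose transform is 1/(s + 1); check the two agree at s = 2.
b, x0, s = -1.0, 1.0, 2.0

formula = x0 / (s - b)  # (s - b)^{-1} (L(g) + x(0)) with L(g) = 0

# Numerically transform the known solution x(t) = e^{-t}: Simpson's rule on
# a truncated interval (T and n are arbitrary accuracy choices).
T, n = 60.0, 20000
dt = T / n
total = 1.0 + math.exp(-(s + 1.0) * T)
for k in range(1, n):
    t = k * dt
    total += (4 if k % 2 else 2) * math.exp(-(s + 1.0) * t)
numeric = total * dt / 3.0

print(formula, numeric)  # both close to 1/3
```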

The matrix $$(sI-B)^{-1}$$ is typically called the transfer function, or resolvent, associated with $$B$$ at $$s$$. We turn to MATLAB for its symbolic calculation (for more information, see the tutorial on MATLAB's symbolic toolbox). For example,

	>> syms s
	>> B = [2 -1; -1 2]
	>> R = inv(s*eye(2)-B)

	R =

	[ (s-2)/(s*s-4*s+3), -1/(s*s-4*s+3)]
	[ -1/(s*s-4*s+3), (s-2)/(s*s-4*s+3)]


We note that $$(sI-B)^{-1}$$ is well defined except at the roots of the quadratic $$s^{2}-4s+3$$. This quadratic is the determinant of $$sI-B$$ and is often referred to as the characteristic polynomial of $$B$$. Its roots are called the eigenvalues of $$B$$.
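Both claims can be confirmed with a few lines of plain Python, no symbolic toolbox required (the sample points and eigenvectors below are our own illustrative choices):

```python
# B from the MATLAB session above
B = [[2.0, -1.0], [-1.0, 2.0]]

def char_poly(s):
    # det(sI - B), expanded by hand for the 2x2 case
    return (s - B[0][0]) * (s - B[1][1]) - B[0][1] * B[1][0]

# det(sI - B) should equal s^2 - 4s + 3 ...
assert all(abs(char_poly(s) - (s * s - 4 * s + 3)) < 1e-12 for s in (0.0, 1.5, 5.0))

# ... whose roots, 1 and 3, should be eigenvalues of B (B v = lambda v)
for lam, v in ((1.0, (1.0, 1.0)), (3.0, (1.0, -1.0))):
    Bv = (B[0][0] * v[0] + B[0][1] * v[1], B[1][0] * v[0] + B[1][1] * v[1])
    assert abs(Bv[0] - lam * v[0]) < 1e-12 and abs(Bv[1] - lam * v[1]) < 1e-12

# Finally, the resolvent entries reported by MATLAB, sampled at s = 5,
# should invert sI - B: the product P below should be the identity.
s = 5.0
d = char_poly(s)
R = [[(s - 2) / d, -1 / d], [-1 / d, (s - 2) / d]]
M = [[(s if i == j else 0.0) - B[i][j] for j in range(2)] for i in range(2)]
P = [[sum(M[i][k] * R[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(P)  # close to [[1, 0], [0, 1]]
```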

Example $$\PageIndex{1}$$

Let us take the $$B$$ matrix of the dynamic Strang quartet module with the parameter choices specified in fib3.m, namely

$B = \begin{pmatrix} {-0.135}&{0.125}&{0}\\ {0.5}&{-1.01}&{0.5}\\ {0}&{0.5}&{-0.51} \end{pmatrix} \nonumber$

The associated $$(sI-B)^{-1}$$ is a bit bulky (please run fib3.m) so we display here only the denominator of each term, i.e.,

$s^3+1.655s^2+0.4078s+0.0039 \nonumber$
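As a sanity check, this cubic can be compared against $$\det(sI-B)$$ evaluated directly. The Python sketch below does so at a few arbitrary sample points, allowing a small tolerance for the rounding in the displayed coefficients:

```python
# The 3x3 B from the example above
B = [[-0.135, 0.125, 0.0],
     [0.5, -1.01, 0.5],
     [0.0, 0.5, -0.51]]

def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly(s):
    # det(sI - B), evaluated numerically
    sIB = [[(s if i == j else 0.0) - B[i][j] for j in range(3)] for i in range(3)]
    return det3(sIB)

def cubic(s):
    # the displayed (rounded) characteristic polynomial
    return s**3 + 1.655 * s**2 + 0.4078 * s + 0.0039

for s in (0.0, 1.0, 2.0):
    print(char_poly(s), cubic(s))  # agree to within the displayed rounding
```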

Assuming a current stimulus of the form $$i_{0}(t) = \frac{t^{3}e^{-\frac{t}{6}}}{10000}$$ and $$E_{m} = 0$$ gives

$\mathscr{L}(\textbf{g})(s) = \begin{pmatrix} {\frac{0.191}{(s+\frac{1}{6})^{4}}}\\ {0}\\ {0} \end{pmatrix} \nonumber$

and so, as the system starts from rest, $$\textbf{x}(0) = 0$$, our formula for $$\mathscr{L}(\textbf{x})$$ yields

\begin{align*} \mathscr{L}(\textbf{x}) &= (sI-B)^{-1} \mathscr{L}(\textbf{g}) \\[4pt] &= \frac{0.191}{(s+\frac{1}{6})^{4}(s^3+1.655s^2+0.4078s+0.0039)} \begin{pmatrix} {s^2+1.5s+0.27}\\ {0.5s+0.26}\\ {0.2497} \end{pmatrix} \end{align*}

Now comes the rub. A simple linear solve (or inversion) has left us with the Laplace transform of $$\textbf{x}$$. The accursed No Free Lunch Theorem confronts us: we shall have to do some work in order to recover $$\textbf{x}$$ from $$\mathscr{L}(\textbf{x})$$. We shall face it down in the Inverse Laplace module.