8.6: Convolution
In this section we consider the problem of finding the inverse Laplace transform of a product H(s)=F(s)G(s), where F and G are the Laplace transforms of known functions f and g. To motivate our interest in this problem, consider the initial value problem
ay''+by'+cy=f(t),\quad y(0)=0,\quad y'(0)=0.\nonumber
Taking Laplace transforms yields
(as^2+bs+c)Y(s)=F(s),\nonumber
so
Y(s)=F(s)G(s),\nonumber
where
G(s)={1\over as^2+bs+c}.\nonumber
Until now we haven't been interested in this factorization, since we dealt only with differential equations with specific forcing functions. Hence, we could simply do the indicated multiplication and use the table of Laplace transforms to find y={\mathscr L}^{-1}(Y). However, this isn't possible if we want a formula for y in terms of f, which may be unspecified.
To motivate the formula for {\mathscr L}^{-1}(FG), consider the initial value problem
y'-ay=f(t),\quad y(0)=0,\nonumber
which we first solve without using the Laplace transform. The solution of this differential equation is of the form y=ue^{at}, where
u'=e^{-at}f(t).\nonumber
Integrating this from 0 to t and imposing the initial condition u(0)=y(0)=0 yields
u=\int_0^t e^{-a\tau}f(\tau)\,d\tau.\nonumber
Therefore
y(t)=e^{at}\int_0^t e^{-a\tau}f(\tau)\,d\tau=\int_0^t e^{a(t-\tau)}f(\tau)\,d\tau.\nonumber
Now we'll use the Laplace transform to solve the same initial value problem and compare the result to this formula. Taking Laplace transforms yields
(s-a)Y(s)=F(s),\nonumber
so
Y(s)=F(s){1\over s-a},\nonumber
which implies that
y(t)={\mathscr L}^{-1}\left(F(s){1\over s-a}\right).\nonumber
If we now let g(t)=e^{at}, so that
G(s)={1\over s-a},\nonumber
then the two expressions for y obtained above can be written as
y(t)=\int_0^t f(\tau)g(t-\tau)\,d\tau\nonumber
and
y={\mathscr L}^{-1}(FG),\nonumber
respectively. Therefore
{\mathscr L}^{-1}(FG)=\int_0^t f(\tau)g(t-\tau)\,d\tau\nonumber
in this case.
This motivates the next definition.
The convolution f*g of two functions f and g is defined by
(f*g)(t)=\int_0^t f(\tau)g(t-\tau)\,d\tau.\nonumber
It can be shown (Exercise 8.6.6) that f*g=g*f; that is,
\int_0^t f(t-\tau)g(\tau)\,d\tau=\int_0^t f(\tau)g(t-\tau)\,d\tau.\nonumber
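As a quick sanity check outside the text's development, the definition and the commutativity property can be verified numerically. The sketch below is plain Python; the helper `conv`, the test functions f(t)=t and g(t)=e^t, and the step count are all illustrative choices, not part of the text. For these functions, integration by parts gives (f*g)(t)=e^t-t-1.

```python
import math

def conv(f, g, t, n=4000):
    # Midpoint-rule approximation of (f*g)(t) = integral_0^t f(tau) g(t - tau) dtau
    h = t / n
    return h * sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n))

f = lambda u: u        # f(t) = t
g = math.exp           # g(t) = e^t

lhs = conv(f, g, 1.0)  # (f*g)(1)
rhs = conv(g, f, 1.0)  # (g*f)(1), should agree by commutativity
exact = math.e - 2.0   # e^t - t - 1 evaluated at t = 1
```

Because the midpoint rule samples \tau and t-\tau at mirrored points, the two orderings produce the same terms and agree to rounding error.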
We have just seen that {\mathscr L}^{-1}(FG)=f*g in the special case where g(t)=e^{at}. The next theorem states that this is true in general.
If {\mathscr L}(f)=F and {\mathscr L}(g)=G, then
{\mathscr L}(f*g)=FG.\nonumber
A complete proof of the convolution theorem is beyond the scope of this book. However, we'll assume that f*g has a Laplace transform and verify the conclusion of the theorem in a purely computational way. By the definition of the Laplace transform,
{\mathscr L}(f*g)=\int_0^\infty e^{-st}(f*g)(t)\,dt=\int_0^\infty e^{-st}\int_0^t f(\tau)g(t-\tau)\,d\tau\,dt.\nonumber
This iterated integral equals a double integral over the region shown in Figure 8.6.1. Reversing the order of integration yields
{\mathscr L}(f*g)=\int_0^\infty f(\tau)\int_\tau^\infty e^{-st}g(t-\tau)\,dt\,d\tau.\nonumber
However, the substitution x=t-\tau shows that
\int_\tau^\infty e^{-st}g(t-\tau)\,dt=\int_0^\infty e^{-s(x+\tau)}g(x)\,dx=e^{-s\tau}\int_0^\infty e^{-sx}g(x)\,dx=e^{-s\tau}G(s).\nonumber
Substituting this into the reversed-order integral and noting that G(s) is independent of \tau yields
{\mathscr L}(f*g)=\int_0^\infty e^{-s\tau}f(\tau)G(s)\,d\tau=G(s)\int_0^\infty e^{-s\tau}f(\tau)\,d\tau=F(s)G(s).\nonumber

Let
f(t)=e^{at}\quad\mbox{and}\quad g(t)=e^{bt}\quad (a\ne b).\nonumber
Verify that {\mathscr L}(f*g)={\mathscr L}(f){\mathscr L}(g), as implied by the convolution theorem.
Solution
We first compute
(f*g)(t)=\int_0^t e^{a\tau}e^{b(t-\tau)}\,d\tau=e^{bt}\int_0^t e^{(a-b)\tau}\,d\tau=e^{bt}\,{e^{(a-b)\tau}\over a-b}\bigg|_0^t={e^{bt}\left[e^{(a-b)t}-1\right]\over a-b}={e^{at}-e^{bt}\over a-b}.\nonumber
Since
e^{at}\leftrightarrow{1\over s-a}\quad\mbox{and}\quad e^{bt}\leftrightarrow{1\over s-b},\nonumber
it follows that
{\mathscr L}(f*g)={1\over a-b}\left[{1\over s-a}-{1\over s-b}\right]={1\over(s-a)(s-b)}={\mathscr L}(e^{at}){\mathscr L}(e^{bt})={\mathscr L}(f){\mathscr L}(g).\nonumber
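The closed form (e^{at}-e^{bt})/(a-b) obtained in this example can also be cross-checked numerically. The sketch below (plain Python; the helper name, the sample values a=1, b=-2, and the step count are illustrative) approximates the convolution integral by the midpoint rule and compares it with the closed form.

```python
import math

def conv(f, g, t, n=4000):
    # Midpoint-rule approximation of (f*g)(t)
    h = t / n
    return h * sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n))

a, b, t = 1.0, -2.0, 1.0   # sample values with a != b
f = lambda u: math.exp(a * u)
g = lambda u: math.exp(b * u)

approx = conv(f, g, t)
exact = (math.exp(a * t) - math.exp(b * t)) / (a - b)
```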
A Formula for the Solution of an Initial Value Problem
The convolution theorem provides a formula for the solution of an initial value problem for a linear constant coefficient second order equation with an unspecified forcing function. The next three examples illustrate this.
Find a formula for the solution of the initial value problem
y''-2y'+y=f(t),\quad y(0)=k_0,\quad y'(0)=k_1.\nonumber
Solution
Taking Laplace transforms yields
(s^2-2s+1)Y(s)=F(s)+(k_1+k_0s)-2k_0.\nonumber
Therefore
Y(s)={1\over(s-1)^2}F(s)+{k_1+k_0s-2k_0\over(s-1)^2}={1\over(s-1)^2}F(s)+{k_0\over s-1}+{k_1-k_0\over(s-1)^2}.\nonumber
From the table of Laplace transforms,
{\mathscr L}^{-1}\left({k_0\over s-1}+{k_1-k_0\over(s-1)^2}\right)=e^t\left(k_0+(k_1-k_0)t\right).\nonumber
Since
{1\over(s-1)^2}\leftrightarrow te^t\quad\mbox{and}\quad F(s)\leftrightarrow f(t),\nonumber
the convolution theorem implies that
{\mathscr L}^{-1}\left({1\over(s-1)^2}F(s)\right)=\int_0^t \tau e^\tau f(t-\tau)\,d\tau.\nonumber
Therefore the solution of the initial value problem is
y(t)=e^t\left(k_0+(k_1-k_0)t\right)+\int_0^t \tau e^\tau f(t-\tau)\,d\tau.\nonumber
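For a concrete check of this formula, one can take f(t)\equiv 1 with k_0=k_1=0; undetermined coefficients then gives y(t)=(t-1)e^t+1 for y''-2y'+y=1, and the convolution formula should reproduce it. The sketch below does this in plain Python (the helper, the test forcing, and the sample point t=1 are illustrative choices, not from the text).

```python
import math

def midpoint(phi, t, n=4000):
    # Midpoint rule for integral_0^t phi(tau) dtau
    h = t / n
    return h * sum(phi((k + 0.5) * h) for k in range(n))

k0, k1 = 0.0, 0.0
f = lambda u: 1.0   # constant test forcing

def y(t):
    # Formula from the example: y = e^t(k0 + (k1 - k0)t) + integral_0^t tau e^tau f(t - tau) dtau
    return math.exp(t) * (k0 + (k1 - k0) * t) + midpoint(
        lambda tau: tau * math.exp(tau) * f(t - tau), t)

t = 1.0
exact = (t - 1.0) * math.exp(t) + 1.0   # solution by undetermined coefficients
```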
Find a formula for the solution of the initial value problem
y''+4y=f(t),\quad y(0)=k_0,\quad y'(0)=k_1.\nonumber
Solution
Taking Laplace transforms yields
(s^2+4)Y(s)=F(s)+k_1+k_0s.\nonumber
Therefore
Y(s)={1\over s^2+4}F(s)+{k_1+k_0s\over s^2+4}.\nonumber
From the table of Laplace transforms,
{\mathscr L}^{-1}\left({k_1+k_0s\over s^2+4}\right)=k_0\cos 2t+{k_1\over 2}\sin 2t.\nonumber
Since
{1\over s^2+4}\leftrightarrow{1\over 2}\sin 2t\quad\mbox{and}\quad F(s)\leftrightarrow f(t),\nonumber
the convolution theorem implies that
{\mathscr L}^{-1}\left({1\over s^2+4}F(s)\right)={1\over 2}\int_0^t f(t-\tau)\sin 2\tau\,d\tau.\nonumber
Therefore the solution of the initial value problem is
y(t)=k_0\cos 2t+{k_1\over 2}\sin 2t+{1\over 2}\int_0^t f(t-\tau)\sin 2\tau\,d\tau.\nonumber
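This formula can also be spot-checked. With f(t)\equiv 1 and k_0=k_1=0, the solution of y''+4y=1 by undetermined coefficients is y(t)=(1-\cos 2t)/4. The plain-Python sketch below (helper, test forcing, and sample point are illustrative) evaluates the convolution formula and compares.

```python
import math

def midpoint(phi, t, n=4000):
    # Midpoint rule for integral_0^t phi(tau) dtau
    h = t / n
    return h * sum(phi((k + 0.5) * h) for k in range(n))

k0, k1 = 0.0, 0.0
f = lambda u: 1.0   # constant test forcing

def y(t):
    # y = k0 cos 2t + (k1/2) sin 2t + (1/2) integral_0^t f(t - tau) sin 2tau dtau
    return (k0 * math.cos(2 * t) + k1 / 2 * math.sin(2 * t)
            + 0.5 * midpoint(lambda tau: f(t - tau) * math.sin(2 * tau), t))

t = 1.0
exact = (1.0 - math.cos(2 * t)) / 4.0   # solution by undetermined coefficients
```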
Find a formula for the solution of the initial value problem
y''+2y'+2y=f(t),\quad y(0)=k_0,\quad y'(0)=k_1.\nonumber
Solution
Taking Laplace transforms yields
(s^2+2s+2)Y(s)=F(s)+k_1+k_0s+2k_0.\nonumber
Therefore
Y(s)={1\over(s+1)^2+1}F(s)+{k_1+k_0s+2k_0\over(s+1)^2+1}={1\over(s+1)^2+1}F(s)+{(k_1+k_0)+k_0(s+1)\over(s+1)^2+1}.\nonumber
From the table of Laplace transforms,
{\mathscr L}^{-1}\left({(k_1+k_0)+k_0(s+1)\over(s+1)^2+1}\right)=e^{-t}\left((k_1+k_0)\sin t+k_0\cos t\right).\nonumber
Since
{1\over(s+1)^2+1}\leftrightarrow e^{-t}\sin t\quad\mbox{and}\quad F(s)\leftrightarrow f(t),\nonumber
the convolution theorem implies that
{\mathscr L}^{-1}\left({1\over(s+1)^2+1}F(s)\right)=\int_0^t f(t-\tau)e^{-\tau}\sin\tau\,d\tau.\nonumber
Therefore the solution of the initial value problem is
\label{eq:8.6.10} y(t)=e^{-t}\left((k_1+k_0)\sin t+k_0\cos t\right)+\int_0^t f(t-\tau)e^{-\tau}\sin\tau\,d\tau.
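As with the previous examples, a numerical spot check is possible. For f(t)\equiv 1 and k_0=k_1=0, direct integration gives \int_0^t e^{-\tau}\sin\tau\,d\tau=\left(1-e^{-t}(\sin t+\cos t)\right)/2, which is the solution of y''+2y'+2y=1 with zero initial data. The plain-Python sketch below (illustrative helper and sample values) compares the convolution formula against this closed form.

```python
import math

def midpoint(phi, t, n=4000):
    # Midpoint rule for integral_0^t phi(tau) dtau
    h = t / n
    return h * sum(phi((k + 0.5) * h) for k in range(n))

k0, k1 = 0.0, 0.0
f = lambda u: 1.0   # constant test forcing

def y(t):
    # y = e^{-t}((k1 + k0) sin t + k0 cos t) + integral_0^t f(t - tau) e^{-tau} sin tau dtau
    return (math.exp(-t) * ((k1 + k0) * math.sin(t) + k0 * math.cos(t))
            + midpoint(lambda tau: f(t - tau) * math.exp(-tau) * math.sin(tau), t))

t = 1.0
exact = 0.5 * (1.0 - math.exp(-t) * (math.sin(t) + math.cos(t)))
```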
Evaluating Convolution Integrals
We'll say that an integral of the form \int_0^t u(\tau)v(t-\tau)\,d\tau is a convolution integral. The convolution theorem provides a convenient way to evaluate convolution integrals.
Evaluate the convolution integral
h(t)=\int_0^t(t-\tau)^5\tau^7\,d\tau.\nonumber
Solution
We could evaluate this integral by expanding (t-\tau)^5 in powers of \tau and then integrating. However, the convolution theorem provides an easier way. The integral is the convolution of f(t)=t^5 and g(t)=t^7. Since
t^5\leftrightarrow{5!\over s^6}\quad\mbox{and}\quad t^7\leftrightarrow{7!\over s^8},\nonumber
the convolution theorem implies that
h(t)\leftrightarrow{5!\,7!\over s^{14}}={5!\,7!\over 13!}\,{13!\over s^{14}},\nonumber
where we have written the second equality because
{13!\over s^{14}}\leftrightarrow t^{13}.\nonumber
Hence,
h(t)={5!\,7!\over 13!}\,t^{13}.\nonumber
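The result is easy to check numerically, since the integrand is a polynomial. The plain-Python sketch below (illustrative helper and sample point t=1) compares a midpoint-rule approximation of the integral with the closed form 5!\,7!\,t^{13}/13!.

```python
import math

def conv(f, g, t, n=4000):
    # Midpoint-rule approximation of (f*g)(t)
    h = t / n
    return h * sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n))

t = 1.0
approx = conv(lambda u: u**5, lambda u: u**7, t)
exact = math.factorial(5) * math.factorial(7) / math.factorial(13) * t**13
```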
Use the convolution theorem and a partial fraction expansion to evaluate the convolution integral
h(t)=\int_0^t\sin a(t-\tau)\cos b\tau\,d\tau\quad(|a|\ne|b|).\nonumber
Solution
Since
\sin at\leftrightarrow{a\over s^2+a^2}\quad\mbox{and}\quad\cos bt\leftrightarrow{s\over s^2+b^2},\nonumber
the convolution theorem implies that
H(s)={a\over s^2+a^2}\,{s\over s^2+b^2}.\nonumber
Expanding this in a partial fraction expansion yields
H(s)={a\over b^2-a^2}\left[{s\over s^2+a^2}-{s\over s^2+b^2}\right].\nonumber
Therefore
h(t)={a\over b^2-a^2}\left(\cos at-\cos bt\right).\nonumber
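This closed form can likewise be checked numerically. The sketch below (plain Python; the helper and the sample values a=1, b=2, t=1 are illustrative) evaluates the convolution integral directly, with f(\tau)=\cos b\tau and g(u)=\sin au so that the integrand matches \sin a(t-\tau)\cos b\tau.

```python
import math

def conv(f, g, t, n=4000):
    # Midpoint-rule approximation of (f*g)(t) = integral_0^t f(tau) g(t - tau) dtau
    h = t / n
    return h * sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n))

a, b, t = 1.0, 2.0, 1.0   # sample values with |a| != |b|
approx = conv(lambda u: math.cos(b * u), lambda u: math.sin(a * u), t)
exact = a / (b * b - a * a) * (math.cos(a * t) - math.cos(b * t))
```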
Volterra Integral Equations
An equation of the form
\label{eq:8.6.11} y(t)=f(t)+\int_0^t k(t-\tau)y(\tau)\,d\tau
is a Volterra integral equation. Here f and k are given functions and y is unknown. Since the integral on the right is a convolution integral, the convolution theorem provides a convenient formula for solving Equation \ref{eq:8.6.11}. Taking Laplace transforms in Equation \ref{eq:8.6.11} yields
Y(s)=F(s)+K(s)Y(s),\nonumber
and solving this for Y(s) yields
Y(s)={F(s)\over 1-K(s)}.\nonumber
We then obtain the solution of Equation \ref{eq:8.6.11} as y={\mathscr L}^{-1}(Y).
Solve the integral equation
\label{eq:8.6.12} y(t)=1+2\int_0^t e^{-2(t-\tau)}y(\tau)\,d\tau.
Solution
Taking Laplace transforms in Equation \ref{eq:8.6.12} yields
Y(s)={1\over s}+{2\over s+2} Y(s),\nonumber
and solving this for Y(s) yields
Y(s)={1\over s}+{2\over s^2}.\nonumber
Hence,
y(t)=1+2t.\nonumber
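The solution y(t)=1+2t can be verified by substituting it back into the integral equation; in fact \int_0^t e^{-2(t-\tau)}(1+2\tau)\,d\tau=t, so the right side is 1+2t. The plain-Python sketch below (illustrative helper and sample point) performs this residual check numerically.

```python
import math

def midpoint(phi, t, n=4000):
    # Midpoint rule for integral_0^t phi(tau) dtau
    h = t / n
    return h * sum(phi((k + 0.5) * h) for k in range(n))

y = lambda u: 1.0 + 2.0 * u   # candidate solution from the example

def rhs(t):
    # Right side of the integral equation: 1 + 2 integral_0^t e^{-2(t - tau)} y(tau) dtau
    return 1.0 + 2.0 * midpoint(lambda tau: math.exp(-2.0 * (t - tau)) * y(tau), t)

t = 1.0
```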
Transfer Functions
The next theorem presents a formula for the solution of the general initial value problem
ay''+by'+cy=f(t),\quad y(0)=k_0,\quad y'(0)=k_1,\nonumber
where we assume for simplicity that f is continuous on [0,\infty) and that {\mathscr L}(f) exists. In Exercises 8.6.11-8.6.14 it is shown that the formula is valid under much weaker conditions on f.
Suppose f is continuous on [0,\infty) and has a Laplace transform. Then the solution of the initial value problem
\label{eq:8.6.13} ay''+by'+cy=f(t),\quad y(0)=k_0,\quad y'(0)=k_1,
is
\label{eq:8.6.14} y(t)=k_0y_1(t)+k_1y_2(t)+\int_0^tw(\tau)f(t-\tau)\,d\tau,
where y_1 and y_2 satisfy
\label{eq:8.6.15} ay_1''+by_1'+cy_1=0,\quad y_1(0)=1,\quad y_1'(0)=0,
and
\label{eq:8.6.16} ay_2''+by_2'+cy_2=0,\quad y_2(0)=0,\quad y_2'(0)=1,
and
\label{eq:8.6.17} w(t)={1\over a}y_2(t).
Proof
Taking Laplace transforms in Equation \ref{eq:8.6.13} yields
p(s)Y(s)=F(s)+a(k_1+k_0s)+bk_0,\nonumber
where
p(s)=as^2+bs+c.\nonumber
Hence,
\label{eq:8.6.18} Y(s)=W(s)F(s)+V(s)
with
\label{eq:8.6.19} W(s)={1\over p(s)}
and
\label{eq:8.6.20} V(s)={a(k_1+k_0s)+bk_0\over p(s)}.
Taking Laplace transforms in Equation \ref{eq:8.6.15} and Equation \ref{eq:8.6.16} shows that
p(s)Y_1(s)=as+b\quad\mbox{and}\quad p(s)Y_2(s)=a.\nonumber
Therefore
Y_1(s)={as+b\over p(s)}\nonumber
and
\label{eq:8.6.21} Y_2(s)={a\over p(s)}.
Hence, Equation \ref{eq:8.6.20} can be rewritten as
V(s)=k_0Y_1(s)+k_1Y_2(s).\nonumber
Substituting this into Equation \ref{eq:8.6.18} yields
Y(s)=k_0Y_1(s)+k_1Y_2(s)+{1\over a}Y_2(s)F(s).\nonumber
Taking inverse transforms and invoking the convolution theorem yields Equation \ref{eq:8.6.14}. Finally, Equation \ref{eq:8.6.19} and Equation \ref{eq:8.6.21} imply Equation \ref{eq:8.6.17}.
It is useful to note from Equation \ref{eq:8.6.14} that y is of the form
y=v+h,\nonumber
where
v(t)=k_0y_1(t)+k_1y_2(t)\nonumber
depends on the initial conditions and is independent of the forcing function, while
h(t)=\int_0^tw(\tau)f(t-\tau)\, d\tau\nonumber
depends on the forcing function and is independent of the initial conditions. If the zeros of the characteristic polynomial
p(s)=as^2+bs+c\nonumber
of the complementary equation have negative real parts, then y_1 and y_2 both approach zero as t\to\infty, so \lim_{t\to\infty}v(t)=0 for any choice of initial conditions. Moreover, the value of h(t) is essentially independent of the values of f(t-\tau) for large \tau, since \lim_{\tau\to\infty}w(\tau)=0. In this case we say that v and h are transient and steady state components, respectively, of the solution y of Equation \ref{eq:8.6.13}. These definitions apply to the initial value problem of Example 8.6.4 , where the zeros of
p(s)=s^2+2s+2=(s+1)^2+1\nonumber
are -1\pm i. From Equation \ref{eq:8.6.10}, we see that the solution of the general initial value problem of Example 8.6.4 is y=v+h, where
v(t)=e^{-t}\left((k_1+k_0)\sin t+k_0\cos t\right)\nonumber
is the transient component of the solution and
h(t)=\int_0^t f(t-\tau)e^{-\tau}\sin\tau\,d\tau\nonumber
is the steady state component. The definitions don’t apply to the initial value problems considered in Examples 8.6.2 and 8.6.3 , since the zeros of the characteristic polynomials in these two examples don’t have negative real parts.
In physical applications where the input f and the output y of a device are related by Equation \ref{eq:8.6.13}, the zeros of the characteristic polynomial usually do have negative real parts. Then W={\mathscr L}(w) is called the transfer function of the device. Since
H(s)=W(s)F(s),\nonumber
we see that
W(s)={H(s)\over F(s)}\nonumber
is the ratio of the transform of the steady state output to the transform of the input.
Because of the form of
h(t)=\int_0^tw(\tau)f(t-\tau)\,d\tau,\nonumber
w is sometimes called the weighting function of the device, since it assigns weights to past values of the input f. It is also called the impulse response of the device, for reasons discussed in the next section.
Formula Equation \ref{eq:8.6.14} is given in more detail in Exercises 8.6.8-8.6.10 for the three possible cases where the zeros of p(s) are real and distinct, real and repeated, or complex conjugates, respectively.