8.6: Convolution


In this section we consider the problem of finding the inverse Laplace transform of a product H(s)=F(s)G(s), where F and G are the Laplace transforms of known functions f and g. To motivate our interest in this problem, consider the initial value problem

\label{eq:8.6.1} ay''+by'+cy=f(t),\quad y(0)=0,\quad y'(0)=0.

Taking Laplace transforms yields

(as^2+bs+c)Y(s)=F(s),\nonumber

so

\label{eq:8.6.2} Y(s)=F(s)G(s),

where

G(s)={1\over as^2+bs+c}.\nonumber

Until now we haven’t been interested in the factorization indicated in Equation \ref{eq:8.6.2}, since we dealt only with differential equations with specific forcing functions. Hence, we could simply do the indicated multiplication in Equation \ref{eq:8.6.2} and use the table of Laplace transforms to find y={\mathscr L}^{-1}(Y). However, this isn’t possible if we want a formula for y in terms of f, which may be unspecified.

To motivate the formula for {\mathscr L}^{-1}(FG), consider the initial value problem

\label{eq:8.6.3} y'-ay=f(t),\quad y(0)=0,

which we first solve without using the Laplace transform. The solution of the differential equation in Equation \ref{eq:8.6.3} is of the form y=ue^{at} where

u'=e^{-at}f(t).\nonumber

Integrating this from 0 to t and imposing the initial condition u(0)=y(0)=0 yields

u=\int_0^t e^{-a\tau}f(\tau)\,d\tau.\nonumber

Therefore

\label{eq:8.6.4} y(t)=e^{at}\int_0^t e^{-a\tau}f(\tau)\,d\tau=\int_0^t e^{a(t-\tau)}f(\tau)\,d\tau.
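Although the text works entirely by hand, this solution formula is easy to check numerically. The Python sketch below is our own illustration, not part of the text: the helper name `solve_first_order`, the choices a=2 and f(t)=1, and the tolerance are assumptions, and the integral is approximated with a simple trapezoidal rule.

```python
import math

def solve_first_order(a, f, t, n=4000):
    """Approximate y(t) = ∫_0^t e^{a(t-τ)} f(τ) dτ with the trapezoidal rule."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0  # endpoint weights of the trapezoidal rule
        total += wt * math.exp(a * (t - tau)) * f(tau)
    return total * h

# With a = 2 and f(t) = 1 the integral evaluates directly to (e^{2t} - 1)/2,
# so the numerical and exact values should agree closely.
t, a = 1.0, 2.0
approx = solve_first_order(a, lambda tau: 1.0, t)
exact = (math.exp(2 * t) - 1) / 2
```

One can also confirm the initial condition: the integral over an empty interval is zero, so y(0)=0 automatically.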

Now we’ll use the Laplace transform to solve Equation \ref{eq:8.6.3} and compare the result to Equation \ref{eq:8.6.4}. Taking Laplace transforms in Equation \ref{eq:8.6.3} yields

(s-a)Y(s)=F(s),\nonumber

so

Y(s)=F(s){1\over s-a},\nonumber

which implies that

\label{eq:8.6.5} y(t)={\mathscr L}^{-1}\left(F(s){1\over s-a}\right).

If we now let g(t)=e^{at}, so that

G(s)={1\over s-a},\nonumber

then Equation \ref{eq:8.6.4} and Equation \ref{eq:8.6.5} can be written as

y(t)=\int_0^t f(\tau)g(t-\tau)\,d\tau\nonumber

and

y={\mathscr L}^{-1}(FG),\nonumber

respectively. Therefore

\label{eq:8.6.6} {\mathscr L}^{-1}(FG)=\int_0^t f(\tau)g(t-\tau)\,d\tau

in this case.

This motivates the next definition.

Definition 8.6.1: Convolution

The convolution f*g of two functions f and g is defined by

(f*g)(t)=\int_0^t f(\tau)g(t-\tau)\,d\tau.\nonumber

It can be shown (Exercise 8.6.6) that f*g=g*f; that is,

\int_0^t f(t-\tau)g(\tau)\,d\tau=\int_0^t f(\tau)g(t-\tau)\,d\tau.\nonumber
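The commutativity claim of Exercise 8.6.6 can be sanity-checked numerically. The sketch below is our own illustration, not from the text: the helper `convolve`, the sample functions, and the tolerance are assumptions, and the integral is approximated with the trapezoidal rule.

```python
import math

def convolve(f, g, t, n=2000):
    """Trapezoidal approximation of (f*g)(t) = ∫_0^t f(τ) g(t-τ) dτ."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * f(tau) * g(t - tau)
    return total * h

# Two sample functions; substituting τ → t-τ maps one integrand to the other,
# so (f*g)(t) and (g*f)(t) should agree up to floating-point roundoff.
f = lambda t: math.exp(2 * t)
g = lambda t: math.sin(t)
t = 1.5
fg = convolve(f, g, t)
gf = convolve(g, f, t)
```

On a uniform grid the two trapezoidal sums contain exactly the same terms in reverse order, so the agreement here is essentially exact.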

Equation \ref{eq:8.6.6} shows that {\mathscr L}^{-1}(FG)=f*g in the special case where g(t)=e^{at}. The next theorem states that this is true in general.

Theorem 8.6.2: The Convolution Theorem

If {\mathscr L}(f)=F and {\mathscr L}(g)=G, then

{\mathscr L}(f*g)=FG.\nonumber

A complete proof of the convolution theorem is beyond the scope of this book. However, we’ll assume that f*g has a Laplace transform and verify the conclusion of the theorem in a purely computational way. By the definition of the Laplace transform,

{\mathscr L}(f*g)=\int_0^\infty e^{-st}(f*g)(t)\,dt=\int_0^\infty e^{-st}\int_0^t f(\tau)g(t-\tau)\,d\tau\,dt.\nonumber

This iterated integral equals a double integral over the region shown in Figure 8.6.1. Reversing the order of integration yields

\label{eq:8.6.7} {\mathscr L}(f*g)=\int_0^\infty f(\tau)\int_\tau^\infty e^{-st}g(t-\tau)\,dt\,d\tau.

However, the substitution x=tτ shows that

\int_\tau^\infty e^{-st}g(t-\tau)\,dt=\int_0^\infty e^{-s(x+\tau)}g(x)\,dx=e^{-s\tau}\int_0^\infty e^{-sx}g(x)\,dx=e^{-s\tau}G(s).\nonumber

Substituting this into Equation \ref{eq:8.6.7} and noting that G(s) is independent of \tau yields

{\mathscr L}(f*g)=\int_0^\infty e^{-s\tau}f(\tau)G(s)\,d\tau=G(s)\int_0^\infty e^{-s\tau}f(\tau)\,d\tau=F(s)G(s).\nonumber

Figure 8.6.1
Example 8.6.1

Let

f(t)=e^{at}\quad\mbox{and}\quad g(t)=e^{bt}\quad(a\ne b).\nonumber

Verify that {\mathscr L}(f*g)={\mathscr L}(f){\mathscr L}(g), as implied by the convolution theorem.

Solution

We first compute

(f*g)(t)=\int_0^t e^{a\tau}e^{b(t-\tau)}\,d\tau=e^{bt}\int_0^t e^{(a-b)\tau}\,d\tau=e^{bt}\left.{e^{(a-b)\tau}\over a-b}\right|_0^t={e^{bt}\left[e^{(a-b)t}-1\right]\over a-b}={e^{at}-e^{bt}\over a-b}.\nonumber

Since

e^{at}\leftrightarrow{1\over s-a}\quad\mbox{and}\quad e^{bt}\leftrightarrow{1\over s-b},\nonumber

it follows that

{\mathscr L}(f*g)={1\over a-b}\left[{1\over s-a}-{1\over s-b}\right]={1\over(s-a)(s-b)}={\mathscr L}\left(e^{at}\right){\mathscr L}\left(e^{bt}\right)={\mathscr L}(f){\mathscr L}(g).\nonumber
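As a numerical cross-check of this example (our own illustration, not part of the text: the helper `laplace`, the truncation length, and the parameter choices a=-1, b=-3, s=2 are assumptions), one can Laplace-transform the closed form of f*g by truncated numerical integration and compare with 1/((s-a)(s-b)):

```python
import math

def laplace(h, s, T=20.0, n=50000):
    """Truncated trapezoidal approximation of ∫_0^∞ e^{-st} h(t) dt on [0, T]."""
    dt = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * dt
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * math.exp(-s * t) * h(t)
    return total * dt

# f*g computed in the example: (e^{at} - e^{bt})/(a - b).  Negative a, b make
# the transform integral converge fast, so truncating at T = 20 is harmless.
a, b, s = -1.0, -3.0, 2.0
conv = lambda t: (math.exp(a * t) - math.exp(b * t)) / (a - b)
lhs = laplace(conv, s)              # numerical L(f*g)(s)
rhs = 1.0 / ((s - a) * (s - b))     # L(f)(s) L(g)(s) = 1/((s-a)(s-b)) = 1/15
```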

A Formula for the Solution of an Initial Value Problem

The convolution theorem provides a formula for the solution of an initial value problem for a linear constant coefficient second order equation with an unspecified forcing function. The next three examples illustrate this.

Example 8.6.2

Find a formula for the solution of the initial value problem

\label{eq:8.6.8} y''-2y'+y=f(t),\quad y(0)=k_0,\quad y'(0)=k_1.

Solution

Taking Laplace transforms in Equation \ref{eq:8.6.8} yields

(s^2-2s+1)Y(s)=F(s)+(k_1+k_0s)-2k_0.\nonumber

Therefore

Y(s)={1\over(s-1)^2}F(s)+{k_1+k_0s-2k_0\over(s-1)^2}={1\over(s-1)^2}F(s)+{k_0\over s-1}+{k_1-k_0\over(s-1)^2}.\nonumber

From the table of Laplace transforms,

{\mathscr L}^{-1}\left({k_0\over s-1}+{k_1-k_0\over(s-1)^2}\right)=e^t\left(k_0+(k_1-k_0)t\right).\nonumber

Since

{1\over(s-1)^2}\leftrightarrow te^t\quad\mbox{and}\quad F(s)\leftrightarrow f(t),\nonumber

the convolution theorem implies that

{\mathscr L}^{-1}\left({1\over(s-1)^2}F(s)\right)=\int_0^t\tau e^\tau f(t-\tau)\,d\tau.\nonumber

Therefore the solution of Equation \ref{eq:8.6.8} is

y(t)=e^t\left(k_0+(k_1-k_0)t\right)+\int_0^t\tau e^\tau f(t-\tau)\,d\tau.\nonumber

Example 8.6.3

Find a formula for the solution of the initial value problem

\label{eq:8.6.9} y''+4y=f(t),\quad y(0)=k_0,\quad y'(0)=k_1.

Solution

Taking Laplace transforms in Equation \ref{eq:8.6.9} yields

(s^2+4)Y(s)=F(s)+k_1+k_0s.\nonumber

Therefore

Y(s)={1\over s^2+4}F(s)+{k_1+k_0s\over s^2+4}.\nonumber

From the table of Laplace transforms,

{\mathscr L}^{-1}\left({k_1+k_0s\over s^2+4}\right)=k_0\cos 2t+{k_1\over 2}\sin 2t.\nonumber

Since

{1\over s^2+4}\leftrightarrow{1\over 2}\sin 2t\quad\mbox{and}\quad F(s)\leftrightarrow f(t),\nonumber

the convolution theorem implies that

{\mathscr L}^{-1}\left({1\over s^2+4}F(s)\right)={1\over 2}\int_0^t f(t-\tau)\sin 2\tau\,d\tau.\nonumber

Therefore the solution of Equation \ref{eq:8.6.9} is

y(t)=k_0\cos 2t+{k_1\over 2}\sin 2t+{1\over 2}\int_0^t f(t-\tau)\sin 2\tau\,d\tau.\nonumber
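This formula can be spot-checked numerically for a specific forcing function. The sketch below is our own illustration, not part of the text: the choices f(t)=t, k_0=1, k_1=0, the helper `convolve`, and the comparison value are assumptions; for this f the convolution integral can be done by hand, giving y(t)=\cos 2t+t/4-(\sin 2t)/8.

```python
import math

def convolve(f, g, t, n=4000):
    """Trapezoidal approximation of ∫_0^t f(t-τ) g(τ) dτ."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * f(t - tau) * g(tau)
    return total * h

k0, k1 = 1.0, 0.0
f = lambda t: t  # sample forcing function chosen for the check

def y(t):
    # The solution formula of Example 8.6.3:
    # y(t) = k0 cos 2t + (k1/2) sin 2t + (1/2) ∫_0^t f(t-τ) sin 2τ dτ
    return (k0 * math.cos(2 * t) + (k1 / 2) * math.sin(2 * t)
            + 0.5 * convolve(f, lambda tau: math.sin(2 * tau), t))

# Hand integration for f(t) = t, k0 = 1, k1 = 0 (our computation):
t = 2.0
exact = math.cos(2 * t) + t / 4 - math.sin(2 * t) / 8
```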

Example 8.6.4

Find a formula for the solution of the initial value problem

\label{eq:8.6.10} y''+2y'+2y=f(t),\quad y(0)=k_0,\quad y'(0)=k_1.

Solution

Taking Laplace transforms in Equation \ref{eq:8.6.10} yields

(s^2+2s+2)Y(s)=F(s)+k_1+k_0s+2k_0.\nonumber

Therefore

Y(s)={1\over(s+1)^2+1}F(s)+{k_1+k_0s+2k_0\over(s+1)^2+1}={1\over(s+1)^2+1}F(s)+{(k_1+k_0)+k_0(s+1)\over(s+1)^2+1}.\nonumber

From the table of Laplace transforms,

{\mathscr L}^{-1}\left({(k_1+k_0)+k_0(s+1)\over(s+1)^2+1}\right)=e^{-t}\left((k_1+k_0)\sin t+k_0\cos t\right).\nonumber

Since

{1\over(s+1)^2+1}\leftrightarrow e^{-t}\sin t\quad\mbox{and}\quad F(s)\leftrightarrow f(t),\nonumber

the convolution theorem implies that

{\mathscr L}^{-1}\left({1\over(s+1)^2+1}F(s)\right)=\int_0^t f(t-\tau)e^{-\tau}\sin\tau\,d\tau.\nonumber

Therefore the solution of Equation \ref{eq:8.6.10} is

y(t)=e^{-t}\left((k_1+k_0)\sin t+k_0\cos t\right)+\int_0^t f(t-\tau)e^{-\tau}\sin\tau\,d\tau.\nonumber

Evaluating Convolution Integrals

We’ll say that an integral of the form \int_0^t u(\tau)v(t-\tau)\,d\tau is a convolution integral. The convolution theorem provides a convenient way to evaluate convolution integrals.

Example 8.6.5

Evaluate the convolution integral

h(t)=\int_0^t(t-\tau)^5\tau^7\,d\tau.\nonumber

Solution

We could evaluate this integral by expanding (t-\tau)^5 in powers of \tau and then integrating. However, the convolution theorem provides an easier way. The integral is the convolution of f(t)=t^5 and g(t)=t^7. Since

t^5\leftrightarrow{5!\over s^6}\quad\mbox{and}\quad t^7\leftrightarrow{7!\over s^8},\nonumber

the convolution theorem implies that

h(t)\leftrightarrow{5!\,7!\over s^{14}}={5!\,7!\over 13!}\,{13!\over s^{14}},\nonumber

where we have written the second equality because

{13!\over s^{14}}\leftrightarrow t^{13}.\nonumber

Hence,

h(t)={5!\,7!\over 13!}t^{13}.\nonumber
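A direct numerical evaluation of the integral confirms this result. The sketch below is our own illustration (the helper `convolve`, the sample value t=1.3, and the tolerance are assumptions):

```python
import math

def convolve(u, v, t, n=4000):
    """Trapezoidal approximation of ∫_0^t u(τ) v(t-τ) dτ."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * u(tau) * v(t - tau)
    return total * h

# h(t) = ∫_0^t (t-τ)^5 τ^7 dτ versus the closed form (5! 7!/13!) t^13.
t = 1.3
h_num = convolve(lambda tau: tau ** 7, lambda x: x ** 5, t)
h_exact = (math.factorial(5) * math.factorial(7) / math.factorial(13)) * t ** 13
```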

Example 8.6.6

Use the convolution theorem and a partial fraction expansion to evaluate the convolution integral

h(t)=\int_0^t\sin a(t-\tau)\cos b\tau\,d\tau\quad(|a|\ne|b|).\nonumber

Solution

Since

\sin at\leftrightarrow{a\over s^2+a^2}\quad\mbox{and}\quad\cos bt\leftrightarrow{s\over s^2+b^2},\nonumber

the convolution theorem implies that

H(s)={a\over s^2+a^2}\,{s\over s^2+b^2}.\nonumber

Expanding this in a partial fraction expansion yields

H(s)={a\over b^2-a^2}\left[{s\over s^2+a^2}-{s\over s^2+b^2}\right].\nonumber

Therefore

h(t)={a\over b^2-a^2}(\cos at-\cos bt).\nonumber
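Comparing the integral and the closed form numerically makes a quick check on the partial fraction work. The sketch below is our own illustration, not part of the text (the helper `conv_integral` and the sample values a=1, b=2, t=2 are assumptions):

```python
import math

def conv_integral(a, b, t, n=4000):
    """Trapezoidal approximation of ∫_0^t sin a(t-τ) cos bτ dτ."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * math.sin(a * (t - tau)) * math.cos(b * tau)
    return total * h

# Closed form from the example: h(t) = a/(b^2 - a^2) (cos at - cos bt).
a, b, t = 1.0, 2.0, 2.0
h_num = conv_integral(a, b, t)
h_exact = a / (b ** 2 - a ** 2) * (math.cos(a * t) - math.cos(b * t))
```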

Volterra Integral Equations

An equation of the form

\label{eq:8.6.11} y(t)=f(t)+\int_0^t k(t-\tau)y(\tau)\,d\tau

is a Volterra integral equation. Here f and k are given functions and y is unknown. Since the integral on the right is a convolution integral, the convolution theorem provides a convenient formula for solving Equation \ref{eq:8.6.11}. Taking Laplace transforms in Equation \ref{eq:8.6.11} yields

Y(s)=F(s)+K(s)Y(s),\nonumber

and solving this for Y(s) yields

Y(s)={F(s)\over 1-K(s)}.\nonumber

We then obtain the solution of Equation \ref{eq:8.6.11} as y={\mathscr L}^{-1}(Y).

Example 8.6.7

Solve the integral equation

\label{eq:8.6.12} y(t)=1+2\int_0^t e^{-2(t-\tau)}y(\tau)\,d\tau.

Solution

Taking Laplace transforms in Equation \ref{eq:8.6.12} yields

Y(s)={1\over s}+{2\over s+2} Y(s),\nonumber

and solving this for Y(s) yields

Y(s)={1\over s}+{2\over s^2}.\nonumber

Hence,

y(t)=1+2t.\nonumber
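One can verify directly that y(t)=1+2t satisfies Equation \ref{eq:8.6.12} by substituting it back into the right side. The Python sketch below is our own illustration (the helper `rhs`, the sample value t=1.7, and the tolerance are assumptions); the residual between the two sides should be essentially zero.

```python
import math

def rhs(y, t, n=4000):
    """Right side of the integral equation: 1 + 2 ∫_0^t e^{-2(t-τ)} y(τ) dτ,
    with the integral approximated by the trapezoidal rule."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * math.exp(-2 * (t - tau)) * y(tau)
    return 1 + 2 * total * h

y = lambda t: 1 + 2 * t  # the solution found above
t = 1.7
residual = abs(rhs(y, t) - y(t))
```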

Transfer Functions

The next theorem presents a formula for the solution of the general initial value problem

ay''+by'+cy=f(t),\quad y(0)=k_0,\quad y'(0)=k_1,\nonumber

where we assume for simplicity that f is continuous on [0,\infty) and that {\mathscr L}(f) exists. In Exercises 8.6.11-8.6.14 it is shown that the formula is valid under much weaker conditions on f.

Theorem 8.6.3

Suppose f is continuous on [0,\infty) and has a Laplace transform. Then the solution of the initial value problem

\label{eq:8.6.13} ay''+by'+cy=f(t),\quad y(0)=k_0,\quad y'(0)=k_1,

is

\label{eq:8.6.14} y(t)=k_0y_1(t)+k_1y_2(t)+\int_0^tw(\tau)f(t-\tau)\,d\tau,

where y_1 and y_2 satisfy

\label{eq:8.6.15} ay_1''+by_1'+cy_1=0,\quad y_1(0)=1,\quad y_1'(0)=0,

and

\label{eq:8.6.16} ay_2''+by_2'+cy_2=0,\quad y_2(0)=0,\quad y_2'(0)=1,

and

\label{eq:8.6.17} w(t)={1\over a}y_2(t).

Proof

Taking Laplace transforms in Equation \ref{eq:8.6.13} yields

p(s)Y(s)=F(s)+a(k_1+k_0s)+bk_0,\nonumber

where

p(s)=as^2+bs+c.\nonumber

Hence,

\label{eq:8.6.18} Y(s)=W(s)F(s)+V(s)

with

\label{eq:8.6.19} W(s)={1\over p(s)}

and

\label{eq:8.6.20} V(s)={a(k_1+k_0s)+bk_0\over p(s)}.

Taking Laplace transforms in Equation \ref{eq:8.6.15} and Equation \ref{eq:8.6.16} shows that

p(s)Y_1(s)=as+b\quad\mbox{and}\quad p(s)Y_2(s)=a.\nonumber

Therefore

Y_1(s)={as+b\over p(s)}\nonumber

and

\label{eq:8.6.21} Y_2(s)={a\over p(s)}.

Hence, Equation \ref{eq:8.6.20} can be rewritten as

V(s)=k_0Y_1(s)+k_1Y_2(s).\nonumber

Substituting this into Equation \ref{eq:8.6.18} yields

Y(s)=k_0Y_1(s)+k_1Y_2(s)+{1\over a}Y_2(s)F(s).\nonumber

Taking inverse transforms and invoking the convolution theorem yields Equation \ref{eq:8.6.14}. Finally, Equation \ref{eq:8.6.19} and Equation \ref{eq:8.6.21} imply Equation \ref{eq:8.6.17}.
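As a concrete numerical check of Equation \ref{eq:8.6.14} (our own illustration, not part of the text), take the equation of Example 8.6.4, y''+2y'+2y=f(t), whose homogeneous solutions with the stated initial conditions are y_1(t)=e^{-t}(\cos t+\sin t) and y_2(t)=e^{-t}\sin t, so w=y_2 since a=1. The helper name, the choices f(t)=\cos t, k_0=1, k_1=0, and the comparison solution (found by undetermined coefficients, our computation) are assumptions:

```python
import math

y1 = lambda t: math.exp(-t) * (math.cos(t) + math.sin(t))  # y1(0)=1, y1'(0)=0
y2 = lambda t: math.exp(-t) * math.sin(t)                  # y2(0)=0, y2'(0)=1

def theorem_solution(f, k0, k1, t, n=4000):
    """k0 y1(t) + k1 y2(t) + ∫_0^t w(τ) f(t-τ) dτ with w = y2 (trapezoidal rule)."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        wt = 0.5 if k in (0, n) else 1.0
        total += wt * y2(tau) * f(t - tau)
    return k0 * y1(t) + k1 * y2(t) + total * h

# For f(t) = cos t, k0 = 1, k1 = 0, undetermined coefficients give
# y(t) = (cos t + 2 sin t)/5 + e^{-t}((4/5) cos t + (2/5) sin t).
f = math.cos
t = 2.0
y_formula = theorem_solution(f, 1.0, 0.0, t)
y_exact = (math.cos(t) + 2 * math.sin(t)) / 5 \
          + math.exp(-t) * (0.8 * math.cos(t) + 0.4 * math.sin(t))
```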

It is useful to note from Equation \ref{eq:8.6.14} that y is of the form

y=v+h,\nonumber

where

v(t)=k_0y_1(t)+k_1y_2(t)\nonumber

depends on the initial conditions and is independent of the forcing function, while

h(t)=\int_0^tw(\tau)f(t-\tau)\, d\tau\nonumber

depends on the forcing function and is independent of the initial conditions. If the zeros of the characteristic polynomial

p(s)=as^2+bs+c\nonumber

of the complementary equation have negative real parts, then y_1 and y_2 both approach zero as t\to\infty, so \lim_{t\to\infty}v(t)=0 for any choice of initial conditions. Moreover, the value of h(t) is essentially independent of the values of f(t-\tau) for large \tau, since \lim_{\tau\to\infty}w(\tau)=0. In this case we say that v and h are transient and steady state components, respectively, of the solution y of Equation \ref{eq:8.6.13}. These definitions apply to the initial value problem of Example 8.6.4, where the zeros of

p(s)=s^2+2s+2=(s+1)^2+1\nonumber

are -1\pm i. From Equation \ref{eq:8.6.10}, we see that the solution of the general initial value problem of Example 8.6.4 is y=v+h, where

v(t)=e^{-t}\left((k_1+k_0)\sin t+k_0\cos t\right)\nonumber

is the transient component of the solution and

h(t)=\int_0^t f(t-\tau)e^{-\tau}\sin\tau\,d\tau\nonumber

is the steady state component. The definitions don’t apply to the initial value problems considered in Examples 8.6.2 and 8.6.3, since the zeros of the characteristic polynomials in these two examples don’t have negative real parts.

In physical applications where the input f and the output y of a device are related by Equation \ref{eq:8.6.13}, the zeros of the characteristic polynomial usually do have negative real parts. Then W={\mathscr L}(w) is called the transfer function of the device. Since

H(s)=W(s)F(s),\nonumber

we see that

W(s)={H(s)\over F(s)}\nonumber

is the ratio of the transform of the steady state output to the transform of the input.

Because of the form of

h(t)=\int_0^tw(\tau)f(t-\tau)\,d\tau,\nonumber

w is sometimes called the weighting function of the device, since it assigns weights to past values of the input f. It is also called the impulse response of the device, for reasons discussed in the next section.

Formula Equation \ref{eq:8.6.14} is given in more detail in Exercises 8.6.8-8.6.10 for the three possible cases where the zeros of p(s) are real and distinct, real and repeated, or complex conjugates, respectively.


This page titled 8.6: Convolution is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench.
