
4.8: D’Alembert solution of the wave equation

We have solved the wave equation by using Fourier series. But it is often more convenient to use the so-called d’Alembert solution to the wave equation\(^{3}\). While this solution can be derived using Fourier series as well, it is really an awkward use of those concepts. It is easier and more instructive to derive this solution by making the right change of variables to get an equation that can be solved by simple integration.

Suppose we have the wave equation

\[y_{tt}=a^2 y_{xx}.\]

We wish to solve the equation (4.8.1) given the conditions

\[ \left. \begin{array}{ccc} y(0,t)=y(L,t)=0 & {\rm{for ~all~}} t, \\ y(x,0)=f(x) & 0<x<L, \\ y_t(x,0)=g(x) & 0<x<L. \end{array} \right.\]

4.8.1 Change of variables

We will transform the equation into a simpler form where it can be solved by simple integration. We change variables to \( \xi =x-at\), \( \eta =x+at\). The chain rule says:

\[ \frac{\partial}{\partial x} = \frac{\partial \xi}{\partial x} \frac{\partial}{\partial \xi}+\frac{\partial \eta}{\partial x}\frac{\partial}{\partial \eta}= \frac{\partial}{\partial \xi}+ \frac{\partial}{\partial \eta}, \\ \frac{\partial}{\partial t}= \frac{\partial \xi}{\partial t}\frac{\partial}{\partial \xi}+\frac{\partial \eta}{\partial t}\frac{\partial}{\partial \eta}= -a \frac{\partial}{\partial \xi}  + a\frac{\partial}{\partial \eta}.\]

We compute

\[ y_{xx}= \frac{\partial^2 y}{\partial x^2}= \left( \frac{\partial}{\partial \xi}+ \frac{\partial}{\partial \eta} \right) \left( \frac{\partial y}{\partial \xi}+ \frac{\partial y}{\partial \eta} \right)= \frac{\partial^2 y}{\partial \xi^2}+2 \frac{\partial^2 y}{\partial \xi \partial \eta}+ \frac{\partial^2 y}{\partial \eta^2}, \\ y_{tt}= \frac{\partial^2 y}{\partial t^2}= \left( -a \frac{\partial}{\partial \xi}+a \frac{\partial}{\partial \eta} \right) \left( -a \frac{\partial y}{\partial \xi}+ a \frac{\partial y}{\partial \eta} \right)= a^2 \frac{\partial^2 y}{\partial \xi^2}-2a^2 \frac{\partial^2 y}{\partial \xi \partial \eta}+a^2 \frac{\partial^2 y}{\partial \eta^2}. \]

In the above computations, we used the fact from calculus that \( \frac{\partial^2 y}{\partial \xi \partial \eta}=\frac{\partial^2 y}{\partial \eta \partial \xi}\). We plug what we got into the wave equation,

\[ 0=a^2 y_{xx}-y_{tt}=4a^2 \frac{\partial^2 y}{\partial \xi \partial \eta}= 4a^2 y_{ \xi \eta}.\]

Therefore, the wave equation (4.8.1) transforms into \( y_{\xi \eta} = 0\). It is easy to find the general solution to this equation by integrating twice. Keeping \( \xi\) constant, we integrate with respect to \( \eta\) first\(^{4}\) and notice that the constant of integration depends on \( \xi\); for each \( \xi\) we might get a different constant of integration. We get \( y_{\xi} = C(\xi)\). Next, we integrate with respect to \( \xi\) and notice that the constant of integration must depend on \( \eta\). Thus, \( y = \int C(\xi)\, d\xi + B(\eta)\). The solution must, therefore, be of the following form for some functions \( A(\xi)\) and \( B(\eta)\):

\[ y =A( \xi)+B( \eta)= A(x-at)+B(x+at).\]

The solution is a superposition of two functions (waves) travelling at speed \(a\) in opposite directions. The coordinates \(\xi\) and \(\eta\) are called the characteristic coordinates, and a similar technique can be applied to more complicated hyperbolic PDE.
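As a quick sanity check, this general solution can be verified symbolically. The sketch below (assuming the sympy library is available; it is not part of the text) differentiates \(y = A(x-at)+B(x+at)\) for arbitrary twice-differentiable \(A\) and \(B\) and confirms that \(y_{tt} - a^2 y_{xx}\) vanishes identically.

```python
# Symbolic sanity check: y = A(x - a t) + B(x + a t) satisfies
# y_tt = a^2 y_xx for arbitrary twice-differentiable A and B.
import sympy as sp

x, t, a = sp.symbols('x t a')
A, B = sp.Function('A'), sp.Function('B')

y = A(x - a*t) + B(x + a*t)
residual = sp.simplify(sp.diff(y, t, 2) - a**2 * sp.diff(y, x, 2))
print(residual)  # 0
```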

4.8.2 D’Alembert’s formula

We know what any solution must look like, but we need to solve for the given side conditions. We will just give the formula and see that it works. First let \( F(x)\) denote the odd extension of \( f(x)\), and let \( G(x)\) denote the odd extension of \( g(x)\). Define

\[ A(x)= \frac{1}{2} F(x)- \frac{1}{2a} \int^x_0 G(s) ds,~~~~~B(x)= \frac{1}{2} F(x)+ \frac{1}{2a} \int^x_0 G(s) ds. \]

We claim this \( A(x)\) and \( B(x)\) give the solution. Explicitly, the solution is \(y(x,t)= A(x-at)+B(x+at)\) or in other words:

\[ y(x,t)= \frac{1}{2}F(x-at)- \frac{1}{2a} \int_0^{x-at} G(s)ds+ \frac{1}{2}F(x+at)+ \frac{1}{2a} \int_0^{x+at} G(s)ds \\ = \frac{F(x-at)+F(x+at)}{2} + \frac{1}{2a} \int_{x-at}^{x+at} G(s)ds.\]

Let us check that the d’Alembert formula really works.

\[ y(x,0)= \frac{1}{2}F(x)- \frac{1}{2a} \int_0^{x} G(s)ds+ \frac{1}{2}F(x)+ \frac{1}{2a} \int_0^{x} G(s)ds =F(x).\]

So far so good. Assume for simplicity \(F\) is differentiable. By the fundamental theorem of calculus we have

\[ y_t(x,t)= \frac{-a}{2}F'(x-at)+ \frac{1}{2}G(x-at)+ \frac{a}{2} F'(x+at)+ \frac{1}{2}G(x+at).\]

So

\[ y_t(x,0)= \frac{-a}{2}F'(x)+ \frac{1}{2}G(x)+ \frac{a}{2} F'(x)+ \frac{1}{2}G(x)=G(x).\]

Yay! We’re smoking now. OK, now the boundary conditions. Note that \(F(x)\) and \(G(x)\) are odd. Also \( \int_0^x G(s)ds\) is an even function of \(x\) because \(G(x)\) is odd (to see this fact, do the substitution \(s=-v\)). So

\[ y(0,t)= \frac{1}{2}F(-at)- \frac{1}{2a} \int_0^{-at} G(s)ds+ \frac{1}{2}F(at)+ \frac{1}{2a} \int_0^{at} G(s)ds \\ = \frac{-1}{2}F(at)- \frac{1}{2a} \int_0^{at} G(s)ds+ \frac{1}{2}F(at)+ \frac{1}{2a} \int_0^{at} G(s)ds=0 .\]

Note that \(F(x)\) and \(G(x)\) are \(2L\) periodic. We compute

\[ y(L,t)= \frac{1}{2}F(L-at)- \frac{1}{2a} \int_0^{L-at} G(s)ds+ \frac{1}{2}F(L+at)+ \frac{1}{2a} \int_0^{L+at} G(s)ds \\ = \frac{1}{2}F(-L-at)- \frac{1}{2a} \int_0^{L} G(s)ds- \frac{1}{2a} \int_0^{-at} G(s)ds +\\ + \frac{1}{2}F(L+at)+ \frac{1}{2a} \int_0^{L} G(s)ds+ \frac{1}{2a} \int_0^{at} G(s)ds \\ = \frac{-1}{2}F(L+at)- \frac{1}{2a} \int_0^{at} G(s)ds+ \frac{1}{2}F(L+at)+ \frac{1}{2a} \int_0^{at} G(s)ds=0.\]

And voilà, it works.
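The boundary checks above can also be probed numerically. The sketch below uses an illustrative (hypothetical) choice of data, \(L=1\), \(a=1\), \(f(x)=\sin(\pi x)\) and \(g(x)=x(1-x)\): it builds the odd \(2L\)-periodic extension of \(g\), approximates \(H(x)=\int_0^x G(s)\,ds\) by a midpoint rule, and confirms that the d’Alembert solution stays pinned at zero at both ends, up to quadrature error.

```python
import math

L, a = 1.0, 1.0

# Illustrative (hypothetical) data on (0, L):
def f(x):
    return math.sin(math.pi * x)

def g(x):
    return x * (1 - x)

def G(x):
    """Odd 2L-periodic extension of g."""
    x = (x + L) % (2 * L) - L   # reduce to [-L, L)
    return g(x) if x >= 0 else -g(-x)

def H(x, n=4000):
    """Midpoint-rule approximation of the integral of G from 0 to x."""
    h = x / n
    return sum(G((k + 0.5) * h) for k in range(n)) * h

def y(x, t):
    # f is already odd and 2L-periodic when extended, so F = f here.
    return (f(x - a*t) + f(x + a*t)) / 2 + (H(x + a*t) - H(x - a*t)) / (2*a)

worst = max(abs(y(xb, t)) for xb in (0.0, L) for t in (0.13, 0.5, 0.87))
print(worst < 1e-4)  # True: y(0, t) and y(L, t) vanish up to quadrature error
```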

Example \(\PageIndex{1}\):

D’Alembert says that the solution is a superposition of two functions (waves) moving in opposite directions at “speed” \(a\). To get an idea of how it works, let us work out an example. Consider the simpler setup

\[ y_{tt}=y_{xx}, \\ y(0,t)=y(1,t)=0, \\ y(x,0)=f(x), \\ y_t(x,0)=0. \]

Here \(f(x)\) is an impulse of height 1 centered at \(x=0.5\):

\[ f(x) = \left\{ \begin{array}{ccc} 0 & {\rm{if}} & 0 \leq x < 0.45, \\ 20(x-0.45) & {\rm{if}} & 0.45 \leq x < 0.5, \\ 20(0.55-x) & {\rm{if}} & 0.5 \leq x < 0.55, \\ 0 & {\rm{if}} & 0.55 \leq x \leq 1. \end{array} \right.\]

The graph of this impulse is the top left plot in Figure 4.21.

Let \(F(x)\) be the odd periodic extension of \(f(x)\). Then from (4.8.8) we know that the solution is given as

\[ y(x,t)= \frac{F(x-t)+F(x+t)}{2}.\]

It is not hard to compute specific values of \( y(x,t)\). For example, to compute \(y(0.1,0.6)\) we notice \(x-t=-0.5\) and \(x+t=0.7\). Now \(F(-0.5)=-f(0.5)=-20(0.55-0.5)=-1\) and \(F(0.7)=f(0.7)=0\). Hence \(y(0.1,0.6)= \frac{-1+0}{2}=-0.5\). As you can see the d’Alembert solution is much easier to actually compute and to plot than the Fourier series solution. See Figure 4.21 for plots of the solution \(y\) for several different \(t\).
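The hand computation above is easy to script. The following sketch (plain Python; the helper names are my own) implements the odd \(2\)-periodic extension \(F\) and evaluates the solution at the point just computed.

```python
def f(x):
    """The triangular impulse of height 1 centered at x = 0.5."""
    if 0.45 <= x < 0.5:
        return 20 * (x - 0.45)
    if 0.5 <= x < 0.55:
        return 20 * (0.55 - x)
    return 0.0

def F(x):
    """Odd 2-periodic extension of f (here L = 1)."""
    x = (x + 1) % 2 - 1   # reduce to [-1, 1)
    return f(x) if x >= 0 else -f(-x)

def y(x, t):
    """d'Alembert solution with a = 1 and zero initial velocity."""
    return (F(x - t) + F(x + t)) / 2

print(f"{y(0.1, 0.6):.4f}")  # -0.5000, matching the hand computation
```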


Figure 4.21: Plot of the d’Alembert solution for \(t=0, t=0.2, t=0.4,\) and \(t=0.6\).

4.8.3 Another way to solve for the side conditions

It is perhaps easier and more useful to memorize the procedure rather than the formula itself. The important thing to remember is that a solution to the wave equation is a superposition of two waves traveling in opposite directions. That is,

\[y(x,t)=A(x-at)+B(x+at).\]

If you think about it, the exact formulas for \(A\) and \(B\) are not hard to guess once you realize what kind of side conditions \(y(x,t)\) is supposed to satisfy. Let us give the formula again, but slightly differently. The best approach is to do this in stages. When \(g(x)=0\) (and hence \(G(x)=0\)), we have the solution

\[ \frac{F(x-at)+F(x+at)}{2}.\]

On the other hand, when \(f(x)=0\) (and hence \(F(x)=0\)), we let

\[H(x)=\int_0^x G(s)ds.\]

The solution in this case is

\[ \frac{1}{2a} \int_{x-at}^{x+at} G(s)\,ds = \frac{-H(x-at)+H(x+at)}{2a}.\]

By superposition we get a solution for the general side conditions (4.8.2) (when neither \(f(x)\) nor \(g(x)\) is identically zero):

\[ y(x,t)= \frac{F(x-at)+F(x+at)}{2} + \frac{-H(x-at)+H(x+at)}{2a}.\]

Do note the minus sign before the \(H\), and the \(a\) in the second denominator.
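A computer algebra system can check the initial conditions for us. The sketch below (assuming sympy is available) treats \(F\) and \(H\) as arbitrary smooth functions with \(H' = G\), and confirms that the formula returns \(F(x)\) at \(t=0\) and that its time derivative at \(t=0\) is \(H'(x) = G(x)\).

```python
import sympy as sp

x, t, a = sp.symbols('x t a', positive=True)
F, H = sp.Function('F'), sp.Function('H')   # H plays the role of the integral of G

y = (F(x - a*t) + F(x + a*t)) / 2 + (-H(x - a*t) + H(x + a*t)) / (2*a)

# y(x, 0) should reduce to F(x):
ic_pos = sp.simplify(y.subs(t, 0) - F(x))
# y_t(x, 0) should reduce to H'(x), i.e. G(x):
ic_vel = sp.simplify(sp.diff(y, t).subs(t, 0).doit() - sp.diff(H(x), x))

print(ic_pos, ic_vel)  # 0 0
```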

Exercise \(\PageIndex{1}\):

Check that the new formula (4.8.21) satisfies the side conditions (4.8.2).

Warning: Make sure you use the odd extensions \(F(x)\) and \(G(x)\), when you have formulas for \(f(x)\) and \(g(x)\). The thing is, those formulas in general hold only for \(0<x<L\), and are not usually equal to \(F(x)\) and \(G(x)\) for other \(x\).


3Named after the French mathematician Jean le Rond d’Alembert (1717 – 1783).

4We can just as well integrate with respect to \( \xi\) first, if we wish.