# 6.2: New variables

In order to understand the solution in full mathematical detail, we make a change of variables

$w = x + ct,\quad z = x - ct .$

We write $$u(x,t)=\bar u(w,z)$$. We find

\begin{align} \dfrac{\partial}{\partial x} u & = \dfrac{\partial}{\partial w}{\bar u}\dfrac{\partial}{\partial x} w + \dfrac{\partial}{\partial z} {\bar u}\dfrac{\partial}{\partial x} z = \dfrac{\partial}{\partial w}{\bar u} + \dfrac{\partial}{\partial z} {\bar u}, \nonumber\\ \dfrac{\partial^2}{\partial x^2} u & = \dfrac{\partial^2}{\partial w^2}{\bar u}+2 \dfrac{\partial^2}{\partial w \partial z} {\bar u}+ \dfrac{\partial^2}{\partial z^2} {\bar u}, \nonumber\\ \dfrac{\partial}{\partial t} u & = \dfrac{\partial}{\partial w}{\bar u}\dfrac{\partial}{\partial t} w + \dfrac{\partial}{\partial z} {\bar u}\dfrac{\partial}{\partial t} z = c\left(\dfrac{\partial}{\partial w}{\bar u} - \dfrac{\partial}{\partial z} {\bar u}\right), \nonumber\\ \dfrac{\partial^2}{\partial t^2} u & = c^2\left(\dfrac{\partial^2}{\partial w^2}{\bar u}-2 \dfrac{\partial^2}{\partial w \partial z} {\bar u}+ \dfrac{\partial^2}{\partial z^2} {\bar u} \right).\end{align}

We thus conclude that

$\dfrac{\partial^2}{\partial x^2} u(x,t) - \dfrac{1}{c^2} \dfrac{\partial^2}{\partial t^2} u(x,t) = 4\dfrac{\partial^2}{\partial w \partial z} {\bar u} = 0.$
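This identity can be checked symbolically. The sketch below uses sympy with a generic polynomial standing in for $$\bar u(w,z)$$ (the degree-3 cutoff is an arbitrary choice for illustration; the identity holds for any smooth $$\bar u$$):

```python
import sympy as sp

x, t, c, w, z = sp.symbols('x t c w z')

# generic polynomial stand-in for \bar{u}(w, z); degree 3 is an arbitrary cutoff
ubar = sum(sp.Symbol(f'a{i}{j}') * w**i * z**j
           for i in range(4) for j in range(4))

# u(x, t) = \bar{u}(w, z) with w = x + c t, z = x - c t
u = ubar.subs({w: x + c*t, z: x - c*t})

# wave operator applied to u versus 4 times the mixed derivative of \bar{u}
lhs = sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2
rhs = (4 * sp.diff(ubar, w, z)).subs({w: x + c*t, z: x - c*t})

print(sp.expand(lhs - rhs))  # 0
```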

An equation of the type $$\dfrac{\partial^2}{\partial w \partial z} {\bar u} = 0$$ can easily be solved by successive integration with respect to $$z$$ and $$w$$. Integrating first with respect to $$z$$ gives

$\dfrac{\partial}{\partial w}{\bar u} = \Phi(w),$

where $$\Phi$$ is any function of $$w$$ only. Now integrate with respect to $$w$$, $\bar u(w,z) = \int \Phi(w)\, dw + G(z) = F(w) + G(z),$ with $$F$$ and $$G$$ arbitrary functions; the integration "constant" $$G$$ may still depend on $$z$$, since we only integrated over $$w$$.
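Conversely, any function of the form $$F(x+ct)+G(x-ct)$$ should solve the wave equation. A minimal sympy check, with two arbitrarily chosen smooth functions standing in for $$F$$ and $$G$$:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')

def wave_residual(u):
    """u_xx - u_tt / c^2; this vanishes exactly when u solves the wave equation."""
    return sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2)

# two arbitrary choices of F and G (any smooth functions would do)
u1 = sp.sin(x + c*t) + sp.exp(x - c*t)
u2 = (x + c*t)**3 + sp.cos(x - c*t)

print(wave_residual(u1), wave_residual(u2))  # 0 0
```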

### Infinite String

This general solution is quite useful in practical applications. Let us first look at how to use it for an infinite system (no limits on $$x$$). Assume that we are treating a problem with initial conditions

$u(x,0) = f(x),\;\;\dfrac{\partial}{\partial t} u(x,0) = g(x).$

Let me assume $$f(\pm\infty)=0$$. I shall assume this also holds for $$F$$ and $$G$$ (we don’t have to, but this removes some arbitrary constants that don’t play a rôle in $$u$$). We find

\begin{align} F(x)+G(x) &= f(x),\nonumber\\ c\left(F'(x)-G'(x)\right) &= g(x).\end{align}

The last equation can be massaged a bit to give

$F(x)-G(x) = \underbrace{\frac{1}{c}\int_0^x g(y)\, dy}_{=\Gamma(x)} + C,$

where $$C$$ is an integration constant.

Note that $$\Gamma$$ is the integral over $$g$$. So $$\Gamma$$ will always be a continuous function, even if $$g$$ is not!
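This smoothing effect of integration is easy to see numerically. The sketch below uses a hypothetical step-function $$g$$ (with $$c=1$$) and a simple midpoint-rule quadrature; the resulting $$\Gamma(x)=|x|$$ is continuous even though $$g$$ jumps at $$x=0$$:

```python
import numpy as np

c = 1.0

def g(y):
    # a deliberately discontinuous initial velocity: a step at y = 0
    return np.where(y >= 0, 1.0, -1.0)

def Gamma(x, n=10_000):
    # Gamma(x) = (1/c) * integral_0^x g(y) dy, midpoint rule with n panels
    y = np.linspace(0.0, x, n + 1)
    mid = 0.5 * (y[:-1] + y[1:])
    return float(np.sum(g(mid)) * (x / n)) / c

# Gamma (here |x|) is continuous across the jump of g
print(Gamma(-1e-3), Gamma(1e-3))  # both close to 0.001
```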

And in the end we have

\begin{align} F(x) &= \dfrac{1}{2} \left[f(x) +\Gamma(x)+C\right], \nonumber\\ G(x) &= \dfrac{1}{2} \left[f(x) -\Gamma(x)-C\right]. \end{align}

Substituting back into $$u(x,t)=F(x+ct)+G(x-ct)$$, the constant $$C$$ drops out and we obtain d'Alembert's solution, $u(x,t) = \dfrac{1}{2}\left[f(x+ct)+f(x-ct)\right]+\dfrac{1}{2}\left[\Gamma(x+ct)-\Gamma(x-ct)\right].$

Suppose we choose (for simplicity we take $$c=1~\text{m/s}$$)

$f(x) = \begin{cases} x+1 & \text{if } -1<x\le 0, \\ 1-x & \text{if } 0<x<1, \\ 0 & \text{elsewhere}, \end{cases}$

and $$g(x)=0$$. The solution is then simply given by $u(x,t) = \dfrac{1}{2} \left[f(x+t)+f(x-t)\right]. \label{eq:dal1}$ This can easily be evaluated graphically, as shown in Figure $$\PageIndex{1}$$.
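The same superposition is easy to evaluate numerically. A short sketch of the triangular-pulse example (with $$c=1$$, and taking $$f(0)=1$$ at the peak):

```python
import numpy as np

def f(x):
    # the triangular initial profile of the example (peak f(0) = 1)
    x = np.asarray(x, dtype=float)
    return np.where((x > -1) & (x <= 0), x + 1,
           np.where((x > 0) & (x < 1), 1 - x, 0.0))

def u(x, t, c=1.0):
    # d'Alembert solution for g = 0: average of left- and right-moving copies
    return 0.5 * (f(x + c*t) + f(x - c*t))

# at t = 0 we recover f; at t = 1 the two half-height pulses have separated
print(float(u(0.0, 0.0)))  # 1.0
print(float(u(0.0, 1.0)))  # 0.0  (f(1) = f(-1) = 0)
print(float(u(1.0, 1.0)))  # 0.5  (peak of the right-moving half-height pulse)
```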

Figure $$\PageIndex{1}$$: The graphical form of ([eq:dal1]), for (from left to right) $$t=0~\text{s}$$, $$t=0.5~\text{s}$$ and $$t=1~\text{s}$$. The dashed lines are $$\dfrac{1}{2} f(x+t)$$ (leftward moving wave) and $$\dfrac{1}{2} f(x-t)$$ (rightward moving wave). The solid line is the sum of these two, and thus the solution $$u$$.

### Finite String

The case of a finite string is more complex. There we encounter the problem that even though $$f$$ and $$g$$ are only known for $$0<x<a$$, $$x\pm ct$$ can take any value from $$-\infty$$ to $$\infty$$. So we have to figure out a way to continue these functions beyond the length of the string. How to do that depends on the kind of boundary conditions; here we shall only consider a string fixed at its ends,

\begin{align} u(0,t)&=u(a,t)=0,\nonumber\\ u(x,0)&=f(x),\quad \dfrac{\partial}{\partial t} u (x,0) = g(x).\end{align}

Initially we can follow the approach for the infinite string as sketched above, and we find that

\begin{align} F(x) &= \dfrac{1}{2} \left[f(x) +\Gamma(x)+C\right], \nonumber\\ G(x) &= \dfrac{1}{2} \left[f(x) -\Gamma(x)-C\right]. \end{align}

Now look at the boundary condition $$u(0,t)=0$$. It shows that

$\dfrac{1}{2} [f(ct)+f(-ct)]+\dfrac{1}{2} [\Gamma(ct)-\Gamma(-ct)]=0.$

Since $$f$$ and $$\Gamma$$ are completely arbitrary functions (we can pick any form for the initial conditions we want), this relation can only hold when both terms vanish separately,

\begin{align} f(x)&=-f(-x),\nonumber\\ \Gamma(x)&=\Gamma(-x).\label{eq:refl1}\end{align}

Now apply the other boundary condition, $$u(a,t)=0$$, and find

\begin{align} f(a+x)&=-f(a-x),\nonumber\\ \Gamma(a+x)&=\Gamma(a-x).\label{eq:refl2}\end{align}

Figure $$\PageIndex{2}$$: A schematic representation of the reflection conditions ([eq:refl1],[eq:refl2]). The dashed line represents $$f$$ and the dotted line $$\Gamma$$.

The reflection conditions for $$f$$ and $$\Gamma$$ are similar to those satisfied by sines and cosines, respectively. Combining the conditions at $$x=0$$ and $$x=a$$ shows that both $$f$$ and $$\Gamma$$ are periodic with period $$2a$$, as can also be seen from Figure $$\PageIndex{2}$$.
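A sketch of this continuation in code: extending $$f$$ to be odd about $$0$$ and periodic with period $$2a$$ automatically makes it odd about $$a$$ as well, so both fixed ends are enforced. The choice $$f(x)=\sin(\pi x/a)$$ with $$g=0$$ and $$a=c=1$$ below is a hypothetical example; any $$f$$ vanishing at both ends would do.

```python
import numpy as np

a, c = 1.0, 1.0

def f(x):
    # hypothetical initial shape on 0 < x < a, vanishing at both ends
    return np.sin(np.pi * x / a)

def f_ext(x):
    # odd about 0 and periodic with period 2a => also odd about a
    x = np.mod(x + a, 2*a) - a      # reduce to the interval [-a, a)
    return np.sign(x) * f(np.abs(x))

def u(x, t):
    # d'Alembert solution built from the extended profile (g = 0)
    return 0.5 * (f_ext(x + c*t) + f_ext(x - c*t))

# the ends stay fixed at all times (values are ~0 up to rounding)
for tt in (0.3, 0.7, 1.9):
    print(u(0.0, tt), u(a, tt))
```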