# 4.6: PDEs, separation of variables, and the heat equation

Let us recall that a *partial differential equation* or PDE is an equation containing the partial derivatives with respect to several independent variables. Solving PDEs will be our main application of Fourier series.

A PDE is said to be *linear* if the dependent variable and its derivatives appear at most to the first power and in no functions. We will only talk about linear PDEs. Together with a PDE, we usually have specified some *boundary conditions*, where the value of the solution or its derivatives is specified along the boundary of a region, and/or some *initial conditions* where the value of the solution or its derivatives is specified for some initial time. Sometimes such conditions are mixed together and we will refer to them simply as *side conditions*.

We will study three specific partial differential equations, each one representing a more general class of equations. First, we will study the *heat equation*, which is an example of a *parabolic PDE*. Next, we will study the *wave equation*, which is an example of a *hyperbolic PDE*. Finally, we will study the *Laplace equation*, which is an example of an *elliptic PDE*. Each of our examples will illustrate behavior that is typical for the whole class.

## Heat on an Insulated Wire

Let us first study the heat equation. Suppose that we have a wire (or a thin metal rod) of length \(L\) that is insulated except at the endpoints. Let \(x\) denote the position along the wire and let \(t\) denote time. See Figure \(\PageIndex{1}\).

Let \(u(x,t)\) denote the temperature at point \(x\) at time \(t\). The equation governing this setup is the so-called *one-dimensional heat equation*:

\[\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}, \]

where \(k>0\) is a constant (the *thermal conductivity* of the material). That is, the change in heat at a specific point is proportional to the second derivative of the heat along the wire. This makes sense; if at a fixed \(t\) the graph of the heat distribution has a maximum (the graph is concave down), then heat flows away from the maximum. And vice-versa.

We will generally use a more convenient notation for partial derivatives. We will write \(u_t\) instead of \( \frac{\partial u}{\partial t}\), and we will write \(u_{xx}\) instead of \(\frac{\partial^2 u}{\partial x^2} \). With this notation the heat equation becomes

\[ u_t=ku_{xx}.\]

For the heat equation, we must also have some boundary conditions. We assume that the ends of the wire are either exposed and touching some body of constant heat, or the ends are insulated. For example, if the ends of the wire are kept at temperature 0, then we must have the conditions

\[ u(0,t)=0 \quad\text{and}\quad u(L,t)=0. \]

If, on the other hand, the ends are also insulated we get the conditions

\[ u_x(0,t)=0 \quad\text{and}\quad u_x(L,t)=0. \]

Let us see why that is so. If \(u_{x}\) is positive at some point \(x_{0}\), then at a particular time, \(u\) is smaller to the left of \(x_{0}\), and higher to the right of \(x_{0}\). Heat is flowing from high heat to low heat, that is to the left. On the other hand if \(u_{x}\) is negative then heat is again flowing from high heat to low heat, that is to the right. So when \(u_{x}\) is zero, that is a point through which heat is not flowing. In other words, \(u_{x}(0,t)=0\) means no heat is flowing in or out of the wire at the point \(x=0\).

We have two conditions along the \(x\)-axis as there are two derivatives in the \(x\) direction. These side conditions are said to be *homogeneous* (i.e., \(u\) or a derivative of \(u\) is set to zero).

We also need an initial condition—the temperature distribution at time \(t=0\). That is,

\[ u(x,0)=f(x),\]

for some known function \(f(x)\). This initial condition is not a homogeneous side condition.

## Separation of Variables

The heat equation is linear as \(u\) and its derivatives do not appear to any powers or in any functions. Thus the principle of superposition still applies for the heat equation (without side conditions). If \(u_1\) and \(u_2\) are solutions and \(c_1,c_2\) are constants, then \( u= c_1u_1+c_2u_2\) is also a solution.
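The principle can also be checked numerically. The sketch below (a sanity check, not a proof) takes \(k=1\) and two known exponential-sine solutions of the heat equation, forms an arbitrary linear combination, and verifies via finite differences that the combination still satisfies \(u_t = k u_{xx}\); the names `u1`, `u2`, `residual` are illustrative choices, not part of the text.

```python
import math

k = 1.0   # illustrative diffusion constant (assumed for this check)

def u1(x, t):   # known solution sin(pi x) e^{-k pi^2 t}
    return math.sin(math.pi*x) * math.exp(-k*math.pi**2*t)

def u2(x, t):   # known solution sin(2 pi x) e^{-4 k pi^2 t}
    return math.sin(2*math.pi*x) * math.exp(-4*k*math.pi**2*t)

def u(x, t):    # an arbitrary superposition c1*u1 + c2*u2
    return 3.0*u1(x, t) - 2.0*u2(x, t)

h = 1e-4  # finite-difference step

def u_t(x, t):   # central difference in t
    return (u(x, t + h) - u(x, t - h)) / (2*h)

def u_xx(x, t):  # second central difference in x
    return (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2

# The residual u_t - k u_xx should be near zero, up to finite-difference error.
residual = u_t(0.3, 0.2) - k*u_xx(0.3, 0.2)
```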

Exercise \(\PageIndex{1}\)

Verify the principle of superposition for the heat equation.

Superposition also preserves some of the side conditions. In particular, if \(u_1\) and \(u_2\) are solutions that satisfy \(u(0,t)=0\) and \(u(L,t)=0\), and \(c_1,\: c_2\) are constants, then \( u= c_1u_1+c_2u_2\) is still a solution that satisfies \(u(0,t)=0\) and \(u(L,t)=0\). Similarly for the side conditions \(u_x(0,t)=0\) and \(u_x(L,t)=0\). In general, superposition preserves all homogeneous side conditions.

The method of *separation of variables* is to try to find solutions that are sums or products of functions of one variable. For example, for the heat equation, we try to find solutions of the form

\[ u(x,t)=X(x)T(t).\]

It is too much to hope that the desired solution is exactly of this form. What is perfectly reasonable to ask, however, is to find enough “building-block” solutions of the form \( u(x,t)=X(x)T(t)\) using this procedure so that the desired solution to the PDE is somehow constructed from these building blocks by the use of superposition.

Let us try to solve the heat equation

\[u_t=ku_{xx} \quad\text{with}\quad u(0,t)=0,\quad u(L,t)=0, \quad\text{and}\quad u(x,0)=f(x).\]

Let us guess \(u(x,t)=X(x)T(t)\). We will try to make this guess satisfy the differential equation, \(u_{t}=ku_{xx}\), and the homogeneous side conditions, \(u(0,t)=0\) and \(u(L,t)=0\). Then, as superposition preserves the differential equation and the homogeneous side conditions, we will try to build up a solution from these building blocks to solve the nonhomogeneous initial condition \(u(x,0)=f(x)\).

First we plug \(u(x,t)=X(x)T(t)\) into the heat equation to obtain

\[ X(x)T'(t)=kX''(x)T(t).\]

We rewrite as \[ \frac{T'(t)}{kT(t)}= \frac{X''(x)}{X(x)}.\]

This equation must hold for all \(x\) and all \(t\). But the left hand side does not depend on \(x\) and the right hand side does not depend on \(t\). Hence, each side must be a constant. Let us call this constant \(- \lambda\) (the minus sign is for convenience later). We obtain the two equations

\[ \frac{T'(t)}{kT(t)}= - \lambda = \frac{X''(x)}{X(x)}.\]

In other words

\[\begin{align}\begin{aligned} X''(x) + \lambda X(x) &=0, \\ T'(t) + \lambda k T(t)& =0.\end{aligned}\end{align}\]

The boundary condition \(u(0,t)=0\) implies \( X(0)T(t)=0\). We are looking for a nontrivial solution and so we can assume that \(T(t)\) is not identically zero. Hence \(X(0)=0\). Similarly, \(u(L,t)=0\) implies \(X(L)=0\). We are looking for nontrivial solutions \(X\) of the eigenvalue problem \( X'' + \lambda X = 0, X(0)=0, X(L)=0\). We have previously found that the only eigenvalues are \( \lambda_n = \frac{n^2 \pi^2}{L^2}\), for integers \( n \geq 1\), where eigenfunctions are \( \sin \left( \frac{n \pi}{L}x \right)\). Hence, let us pick the solutions

\[ X_n(x)= \sin \left( \frac{n \pi}{L}x \right).\]

The corresponding \(T_n\) must satisfy the equation

\[ T'_n(t) + \frac{n^2 \pi^2}{L^2}kT_n(t)=0.\]

By the method of integrating factor, the solution of this problem is

\[T_n(t)=e^{\frac{-n^2 \pi^2}{L^2}kt}.\]

It will be useful to note that \(T_n(0)=1\). Our building-block solutions are

\[u_n(x,t)=X_n(x)T_n(t)= \sin \left( \frac{n \pi}{L}x \right) e^{\frac{-n^2 \pi^2}{L^2}kt}. \]
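These building blocks are straightforward to code up and check. The sketch below (assuming the illustrative values \(L=1\) and \(k=0.003\) used in the example later in this section; the helper name `u_n` is ours) confirms that each block vanishes at both ends for all \(t\) and reduces to the sine eigenfunction at \(t=0\):

```python
import math

L, k = 1.0, 0.003   # illustrative values; any L > 0, k > 0 work

def u_n(n, x, t):
    """Building-block solution sin(n pi x / L) * exp(-n^2 pi^2 k t / L^2)."""
    return math.sin(n*math.pi*x/L) * math.exp(-(n**2 * math.pi**2 / L**2) * k * t)

# u_n(n, 0, t) = u_n(n, L, t) = 0 for every t (homogeneous boundary conditions),
# and u_n(n, x, 0) = sin(n pi x / L) (the eigenfunction), since T_n(0) = 1.
```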

We note that \( u_n(x,0)= \sin \left( \frac{n \pi}{L}x \right)\). Let us write \(f(x)\) as the sine series

\[ f(x)= \sum_{n=1}^{\infty} b_n \sin \left( \frac{n \pi}{L}x \right).\]

That is, we find the Fourier series of the odd periodic extension of \(f(x)\). We used the sine series as it corresponds to the eigenvalue problem for \(X(x)\) above. Finally, we use superposition to write the solution as

\[ u(x,t)= \sum^{\infty}_{n=1}b_n u_n (x,t)= \sum^{\infty}_{n=1}b_n \sin \left(\frac{n \pi}{L}x \right)e^{\frac{-n^2 \pi^2}{L^2}kt}.\]

Why does this solution work? First note that it is a solution to the heat equation by superposition. It satisfies \(u(0,t)=0\) and \(u(L,t)=0\), because \(x=0\) or \(x=L\) makes all the sines vanish. Finally, plugging in \(t=0\), we notice that \(T_n(0)=1\) and so

\[ u(x,0)= \sum^{\infty}_{n=1}b_n u_n (x,0)= \sum^{\infty}_{n=1}b_n \sin \left(\frac{n \pi}{L}x \right)=f(x).\]

Example \(\PageIndex{1}\)

Suppose that we have an insulated wire of length \(1\), such that the ends of the wire are embedded in ice (temperature 0). Let \(k=0.003\). Suppose that the initial heat distribution is \(u(x,0)=50x(1-x)\). See Figure \(\PageIndex{2}\).

We want to find the temperature function \(u(x,t)\). Suppose we also want to find the time \(t\) at which the maximum temperature in the wire drops to one half of the initial maximum of \(12.5\).

We are solving the following PDE problem:

\[\begin{align}\begin{aligned} u_t &=0.003u_{xx}, \\ u(0,t) &= u(1,t)=0, \\ u(x,0) &= 50x(1-x) ~~~~ {\rm{for~}} 0<x<1.\end{aligned}\end{align}\]

We write \(f(x)=50x(1-x)\) for \(0<x<1\) as a sine series. That is, \(f(x)= \sum^{\infty}_{n=1}b_n \sin(n \pi x)\), where

\[ b_n= 2 \int^1_0 50x(1-x) \sin(n \pi x)dx = \frac{200}{\pi^3 n^3}-\frac{200(-1)^n}{\pi^3 n^3}= \left\{ \begin{array}{cc} 0 & {\rm{if~}} n {\rm{~even,}} \\ \frac{400}{\pi^3 n^3} & {\rm{if~}} n {\rm{~odd.}} \end{array} \right.\]
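The closed form for \(b_n\) can be cross-checked by numerical integration. A sketch using composite Simpson's rule (no external libraries; the helper names `simpson` and `b` are ours):

```python
import math

def f(x):
    return 50*x*(1 - x)

def simpson(g, a, b, m=1000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i*h) for i in range(1, m))
    return s * h / 3

def b(n):
    # b_n = 2 * integral_0^1 f(x) sin(n pi x) dx
    return 2 * simpson(lambda x: f(x) * math.sin(n*math.pi*x), 0.0, 1.0)

# Expect b(n) = 0 for even n and b(n) = 400/(pi^3 n^3) for odd n.
```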

The solution \(u(x,t)\), plotted in Figure \(\PageIndex{3}\) for \( 0 \leq t \leq 100\), is given by the series:

\[ u(x,t)= \sum^{\infty}_{\underset{n~ {\rm{odd}} }{n=1}} \frac{400}{\pi^3 n^3} \sin(n \pi x) e^{-n^2 \pi^2 0.003t}.\]
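A truncated version of this series is easy to evaluate; the sketch below (assuming 200 odd-index terms, a choice of ours) checks that at \(t=0\) the partial sum reproduces the initial data \(f(x)=50x(1-x)\):

```python
import math

def u(x, t, terms=200):
    """Partial sum of the series solution, using the first `terms` odd modes."""
    return sum(400/(math.pi**3 * n**3)
               * math.sin(n*math.pi*x)
               * math.exp(-n**2 * math.pi**2 * 0.003 * t)
               for n in range(1, 2*terms, 2))

# The cubic decay of the coefficients makes the partial sum converge quickly:
# u(x, 0) should be very close to 50 x (1 - x).
```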

Finally, let us answer the question about the maximum temperature. It is relatively easy to see that the maximum temperature will always be at \(x=0.5\), in the middle of the wire. The plot of \(u(x,t)\) confirms this intuition.

If we plug in \(x=0.5\) we get

\[ u(0.5,t)= \sum^{\infty}_{\underset{n~ {\rm{odd}} }{n=1}} \frac{400}{\pi^3 n^3} \sin(n \pi 0.5) e^{-n^2 \pi^2 0.003t}.\]

For \(n=3\) and higher (remember \(n\) is only odd), the terms of the series are insignificant compared to the first term. The first term in the series is already a very good approximation of the function. Hence

\[u(0.5,t) \approx \frac{400}{\pi^3}e^{-\pi^2 0.003t}.\]

The approximation gets better and better as \(t\) gets larger as the other terms decay much faster. Let us plot the function \(u(0.5,t)\), the temperature at the midpoint of the wire at time \(t\), in Figure \(\PageIndex{4}\). The figure also plots the approximation by the first term.

After \(t=5\) or so it would be hard to tell the difference between the first term of the series for \(u(x,t)\) and the real solution \(u(x,t)\). This behavior is a general feature of solving the heat equation. If you are interested in behavior for large enough \(t\), only the first one or two terms may be necessary.

Let us get back to the question of when the maximum temperature is one half of the initial maximum temperature. That is, when is the temperature at the midpoint \(12.5/2=6.25\)? We notice on the graph that if we use the approximation by the first term we will be close enough. We solve

\[ 6.25=\frac{400}{\pi^3}e^{-\pi^2 0.003t}.\]

That is,

\[ t=\frac{\ln{\frac{6.25 \pi^3}{400}}}{-\pi^2 0.003} \approx 24.5.\]

So the maximum temperature drops to half at about \(t=24.5\).
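In code, solving the first-term equation for \(t\) is one line (a sketch reusing the constants of this example; the name `t_half` is ours):

```python
import math

k = 0.003
target = 6.25   # half of the initial maximum of 12.5

# 6.25 = (400/pi^3) exp(-pi^2 k t)  =>  t = ln(6.25 pi^3 / 400) / (-pi^2 k)
t_half = math.log(target * math.pi**3 / 400) / (-math.pi**2 * k)
# t_half comes out to approximately 24.5
```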

We mention an interesting behavior of the solution to the heat equation. The heat equation “smoothes” out the function \(f(x)\) as \(t\) grows. For a fixed \(t\), the solution is a Fourier series with coefficients \(b_n e^{\frac{-n^2 \pi^2}{L^2}kt}\). If \(t>0\), then these coefficients go to zero faster than \(\frac{1}{n^p}\) for any power \(p\). In other words, the Fourier series has infinitely many derivatives everywhere. Thus even if the function \(f(x)\) has jumps and corners, for any fixed \(t>0\) the solution \(u(x,t)\), as a function of \(x\), is as smooth as we want it to be.

Example \(\PageIndex{2}\)

When the initial condition is already a sine series, then there is no need to compute anything; you just need to plug in. Consider \[u_t = 0.3 \, u_{xx}, \qquad u(0,t)=u(1,t)=0, \qquad u(x,0) = 0.1 \sin(\pi x) + \sin(2\pi x) .\] The solution is then \[u(x,t) = 0.1 \sin(\pi x) e^{- 0.3 \pi^2 t} + \sin(2 \pi x) e^{- 1.2 \pi^2 t} .\]
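A quick numerical check of this plug-in solution (a sketch; the helper names `u` and `f` are ours): the two-term formula, with spatial modes \(\sin(\pi x)\) and \(\sin(2\pi x)\), should match the initial condition at \(t=0\) and vanish at both ends for all \(t\).

```python
import math

def u(x, t):
    return (0.1*math.sin(math.pi*x)*math.exp(-0.3*math.pi**2*t)
            + math.sin(2*math.pi*x)*math.exp(-1.2*math.pi**2*t))

def f(x):
    # The initial condition, which is already a finite sine series.
    return 0.1*math.sin(math.pi*x) + math.sin(2*math.pi*x)

# u(x, 0) = f(x) exactly, and u(0, t) = u(1, t) = 0 for every t.
```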

## Insulated Ends

Now suppose the ends of the wire are insulated. In this case, we are solving the equation

\[ u_t=ku_{xx}\quad\text{with}\quad u_x(0,t)=0,\quad u_x(L,t)=0,\quad\text{and}\quad u(x,0)=f(x).\]

Yet again we try a solution of the form \(u(x,t)=X(x)T(t)\). By the same procedure as before we plug into the heat equation and arrive at the following two equations

\[\begin{align}\begin{aligned} X''(x)+\lambda X(x) &=0, \\ T'(t)+\lambda kT(t) &=0.\end{aligned}\end{align}\]

At this point the story changes slightly. The boundary condition \(u_x(0,t)=0\) implies \(X'(0)T(t)=0\). Hence \(X'(0)=0\). Similarly, \(u_x(L,t)=0\) implies \(X'(L)=0\). We are looking for nontrivial solutions \(X\) of the eigenvalue problem \(X''+ \lambda X=0,\) \(X'(0)=0,\) \(X'(L)=0\). We have previously found that the only eigenvalues are \(\lambda_n=\frac{n^2 \pi^2}{L^2}\), for integers \( n \geq 0\), where eigenfunctions are \(\cos \left( \frac{n \pi}{L}x \right)\) (we include the constant eigenfunction). Hence, let us pick solutions

\[X_n(x)= \cos(\frac{n \pi}{L}x)\quad\text{and}\quad X_0(x)=1.\]

The corresponding \(T_n\) must satisfy the equation

\[T'_n(t)+ \frac{n^2 \pi^2}{L^2}kT_n(t)=0.\]

For \(n \geq 1\), as before,

\[T_n(t)= e^{\frac{-n^2 \pi^2}{L^2}kt}.\]

For \(n=0\), we have \(T'_0(t)=0\) and hence \(T_0(t)=1\). Our building-block solutions will be

\[u_n(x,t)=X_n(x)T_n(t)= \cos \left( \frac{n \pi}{L} x \right) e^{\frac{-n^2 \pi^2}{L^2}kt},\]

and

\[u_0(x,t)=1.\]

We note that \(u_n(x,0) =\cos \left( \frac{n \pi}{L} x \right)\). Let us write \(f\) using the cosine series

\[f(x)= \frac{a_0}{2} + \sum^{\infty}_{n=1} a_n \cos \left( \frac{n \pi}{L} x \right).\]

That is, we find the Fourier series of the even periodic extension of \(f(x)\).

We use superposition to write the solution as

\[u(x,t)= \frac{a_0}{2} + \sum^{\infty}_{n=1} a_n u_n(x,t)= \frac{a_0}{2} + \sum^{\infty}_{n=1} a_n \cos \left( \frac{n \pi}{L} x \right) e^{\frac{-n^2 \pi^2}{L^2}kt}.\]

Example \(\PageIndex{3}\)

Let us try the same equation as before, but for insulated ends. We are solving the following PDE problem

\[\begin{align}\begin{aligned} u_t &=0.003u_{xx}, \\ u_x(0,t) &= u_x(1,t)=0, \\ u(x,0) &= 50x(1-x) ~~~~ {\rm{for~}} 0<x<1.\end{aligned}\end{align} \]

For this problem, we must find the cosine series of \(u(x,0)\). For \(0<x<1\) we have

\[ 50x(1-x)=\frac{25}{3}+\sum^{\infty}_{\underset{n~ {\rm{even}} }{n=2}} \left( \frac{-200}{\pi^2 n^2} \right) \cos(n \pi x).\]

The calculation is left to the reader. Hence, the solution to the PDE problem, plotted in Figure \(\PageIndex{5}\), is given by the series

\[ u(x,t)=\frac{25}{3}+\sum^{\infty}_{\underset{n~ {\rm{even}} }{n=2}} \left( \frac{-200}{\pi^2 n^2} \right) \cos(n \pi x) e^{-n^2 \pi^2 0.003t}.\]
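As in the Dirichlet example, a truncated sum is easy to check numerically. The sketch below (assuming 200 even-index modes, a choice of ours) confirms that the partial sum reproduces the initial data at \(t=0\) and decays to the constant term for large \(t\):

```python
import math

def u(x, t, terms=200):
    """Partial sum of the cosine-series solution for the insulated-ends problem."""
    s = 25/3                                # the constant term a_0 / 2
    for n in range(2, 2*terms + 2, 2):      # even n only
        s += (-200/(math.pi**2 * n**2)) * math.cos(n*math.pi*x) \
             * math.exp(-n**2 * math.pi**2 * 0.003 * t)
    return s

# u(x, 0) should be close to 50 x (1 - x), and for large t every exponential
# dies out, leaving the uniform temperature 25/3.
```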

Note in the graph that the temperature evens out across the wire. Eventually, all the terms except the constant die out, and you will be left with a uniform temperature of \(\frac{25}{3} \approx 8.33\) along the entire length of the wire.

Let us expand on the last point. The constant term in the series is \[\frac{a_0}{2} = \frac{1}{L} \int_0^L f(x) \, dx .\] In other words, \(\frac{a_0}{2}\) is the average value of \(f(x)\), that is, the average of the initial temperature. As the wire is insulated everywhere, no heat can get out, no heat can get in. So the temperature tries to distribute evenly over time, and the average temperature must always be the same, in particular it is always \(\frac{a_0}{2}\). As time goes to infinity, the temperature goes to the constant \(\frac{a_0}{2}\) everywhere.
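For the example above, the average of \(f(x) = 50x(1-x)\) on \([0,1]\) can be checked directly; a midpoint-rule sketch (the variable names are ours):

```python
# Average value (1/L) * integral_0^L f(x) dx with L = 1,
# approximated by a midpoint Riemann sum.
m = 100_000
avg = sum(50*x*(1 - x) for x in ((i + 0.5)/m for i in range(m))) / m
# avg should be close to a_0/2 = 25/3 = 8.333..., the uniform limit temperature.
```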

## Contributors and Attributions

- Jiří Lebl (Oklahoma State University). These pages were supported by NSF grants DMS-0900885 and DMS-1362337.