
3.0: Introduction to Boundary and Initial Conditions

 


Boundary and Initial Conditions

As you all know, solutions to ordinary differential equations are usually not unique (integration constants appear in many places). This is of course equally a problem for PDEs. A PDE problem is usually specified through a set of boundary or initial conditions. A boundary condition expresses the behaviour of a function on the boundary (border) of its domain of definition. An initial condition is like a boundary condition, but for the time direction. Not all boundary conditions allow for solutions, but usually the physics suggests what makes sense. Let me remind you of the situation for ordinary differential equations, one you should all be familiar with: a particle under the influence of a constant force, \[\begin{aligned} \frac{d^{2}x}{dt^{2}} &= a,\\ \intertext{which leads to} \frac{dx}{dt} &= at + v_0,\\ \intertext{and} x &= \tfrac{1}{2} a t^2 + v_0 t + x_0.\end{aligned}\] This contains two integration constants. Standard practice would be to specify \(\frac{dx}{dt}(t=0) = v_0\) and \(x(t=0)=x_0\). These are linear initial conditions (linear since they only involve \(x\) and its derivatives linearly), and they contain at most a first derivative. This one-order difference between the boundary conditions and the equation persists for PDEs. It is fairly obvious why: since the equation already involves the highest derivative, we cannot specify that same derivative in a separate condition.
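For instance, with the illustrative values \(a=-g\) (a particle falling under gravity), \(v_0=0\) and \(x_0=h\), the two initial conditions pin down both constants and single out the unique solution \[x(t) = h - \tfrac{1}{2} g t^{2},\] which indeed satisfies \(x(0)=h\) and \(\frac{dx}{dt}(0)=0\).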

The important difference between the arbitrariness in solutions of PDEs and ODEs is that whereas for ODEs the free quantities are really constants, solutions of PDEs contain arbitrary functions.

Let me give an example. Take \[u = y f(x);\] then \[\frac{\partial u}{\partial y} = f(x).\] This can be used to eliminate \(f\) from the first equation, giving \[u = y \frac{\partial u}{\partial y},\] which has the general solution \(u=yf(x)\).
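To see the arbitrariness concretely, choose for instance \(f(x)=\sin x\) (any differentiable function would do): then \(u = y\sin x\) gives \(\frac{\partial u}{\partial y} = \sin x\), and indeed \[y\,\frac{\partial u}{\partial y} = y\sin x = u.\]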

One can construct more complicated examples. Consider \[u(x,y) = f(x+y) + g(x-y),\] which gives on double differentiation \[\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0.\]
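Writing out the double differentiation (with primes denoting derivatives of \(f\) and \(g\) with respect to their single argument), \[\frac{\partial^2 u}{\partial x^2} = f''(x+y) + g''(x-y), \qquad \frac{\partial^2 u}{\partial y^2} = f''(x+y) + g''(x-y),\] so the difference vanishes whatever the functions \(f\) and \(g\) are.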

The problem is that without additional conditions the arbitrariness in the solutions makes it almost useless (if it is possible at all) to write down the general solution. We need additional conditions that reduce this freedom. In most physical problems these are boundary conditions, which describe how the system behaves on its boundaries (for all times), and initial conditions, which specify the state of the system at an initial time \(t=0\). In the ODE problem discussed before we had two initial conditions (velocity and position at time \(t=0\)).

Explicit boundary conditions

For the problems of interest here we shall only consider linear boundary conditions, which express a linear relation between the function and its partial derivatives, e.g., \[u(x,y=0) + x\frac{\partial u}{\partial x}(x,y=0)=0.\] As before, the maximal order of the derivative in the boundary condition is one order lower than the order of the PDE. For a second-order differential equation we have three possible types of boundary condition.
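The three named types that follow can all be written in one compact (and, here, merely illustrative) form, \[\alpha\, u + \beta\, \frac{\partial u}{\partial n} = \gamma \quad\text{on the boundary},\] where \(\partial u/\partial n\) denotes the derivative normal to the boundary (discussed below) and \(\alpha\), \(\beta\), \(\gamma\) are given constants or functions: \(\beta=0\) gives the first type, \(\alpha=0\) the second, and both nonzero the third.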

Dirichlet boundary conditions

When we specify the value of \(u\) on the boundary, we speak of Dirichlet boundary conditions. An example for a vibrating string with its ends, at \(x=0\) and \(x=L\), fixed would be \[u(0,t) = u(L,t) = 0.\]

von Neumann boundary conditions

In multidimensional problems the partial derivatives of a function with respect to each of the variables together form a vector field (i.e., a function that takes a vector value at each point of space), usually called the gradient. For three variables this takes the form \[\operatorname{grad} f(x,y,z) = \boldsymbol{\nabla} f(x,y,z) =\left( \frac{\partial f}{\partial x}(x,y,z),\ \frac{\partial f}{\partial y}(x,y,z),\ \frac{\partial f}{\partial z}(x,y,z)\right).\]
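As a quick illustration (the function is chosen purely as an example), for \(f(x,y,z)=x^2+y^2+z^2\) we get \[\boldsymbol{\nabla} f(x,y,z) = (2x,\,2y,\,2z),\] a vector that at each point points radially away from the origin.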

A sketch of the normal derivatives used in the von Neumann boundary conditions.

Typically we cannot specify the gradient at the boundary, since that is too restrictive to allow for solutions. We can – and in physical problems often need to – specify the component normal to the boundary, see Fig. [fig:II:vonNeuman] for an example. When this normal derivative is specified we speak of von Neumann boundary conditions.
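In symbols, writing \(\hat{\mathbf{n}}\) for the outward unit normal to the boundary (a notation not used elsewhere in this section), the normal derivative is the component of the gradient along \(\hat{\mathbf{n}}\), \[\frac{\partial u}{\partial n} = \hat{\mathbf{n}}\cdot\boldsymbol{\nabla} u.\]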

In the case of an insulated (infinitely thin) rod of length \(a\), we cannot have a heat flux through the ends, so the gradient of the temperature must vanish there (heat can only flow where a difference in temperature exists). This leads to the BC \[\frac{\partial u}{\partial x}(0,t) = \frac{\partial u}{\partial x}(a,t) = 0.\]

Mixed (Robin’s) boundary conditions

We can of course mix Dirichlet and von Neumann boundary conditions. For the thin rod example given above we could require \[u(0,t) + \frac{\partial u}{\partial x}(0,t) = u(a,t) + \frac{\partial u}{\partial x}(a,t) = 0.\]

Implicit boundary conditions

In many physical problems we have implicit boundary conditions, which just means that there are certain conditions we wish to be satisfied. This is usually the case for systems defined on an infinite domain. For the case of the Schrödinger equation this usually means that we require the wave function to be normalisable. We thus have to disallow the wave function blowing up at infinity. Sometimes we implicitly assume continuity or differentiability. In general one should be careful about such implicit BCs, which may be extremely important.
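For a wave function on the whole real line, for instance, normalisability is the requirement \[\int_{-\infty}^{\infty} \lvert\psi(x,t)\rvert^{2}\,dx < \infty,\] which for the well-behaved solutions of interest rules out wave functions that blow up at infinity, even though no explicit boundary appears anywhere.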

A slightly more realistic example

A string with fixed endpoints

Consider a string fixed at \(x=0\) and \(x=a\), as in Fig. [fig:II:string1].

A string with fixed endpoints.

It satisfies the wave equation \[\frac{1}{c^2}\, \frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2},\qquad 0<x<a,\] with boundary conditions \[u(0,t) = u(a,t) = 0, \qquad t>0,\] and initial conditions \[u(x,0) = f(x), \qquad \frac{\partial u}{\partial t}(x,0) = g(x).\]
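To get a feeling for what these boundary conditions allow, note that for any positive integer \(n\) (chosen here only for illustration) the function \[u(x,t) = \sin\!\left(\frac{n\pi x}{a}\right)\cos\!\left(\frac{n\pi c t}{a}\right)\] satisfies both the wave equation and \(u(0,t)=u(a,t)=0\), although in general it will not match the given initial shape \(f\) and velocity \(g\).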

A string with freely floating endpoints

Consider a string with its ends fastened to air bearings that ride on rods orthogonal to the \(x\)-axis. Since the bearings float freely there should be no force along the rods, which means that the string is horizontal at the bearings; see Fig. [fig:II:string2] for a sketch.

A string with floating endpoints.

It satisfies the wave equation with the same initial conditions as above, but the boundary conditions now are \[\frac{\partial u}{\partial x}(0,t) = \frac{\partial u}{\partial x}(a,t) = 0, \qquad t>0.\] These are clearly of von Neumann type.
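In this case an illustrative solution (again with \(n\) a positive integer chosen only as an example) is \[u(x,t) = \cos\!\left(\frac{n\pi x}{a}\right)\cos\!\left(\frac{n\pi c t}{a}\right),\] whose \(x\)-derivative is proportional to \(\sin(n\pi x/a)\) and therefore vanishes at both \(x=0\) and \(x=a\).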

A string with endpoints fixed to springs

To illustrate mixed boundary conditions we make an even more complicated contraption where we fix the endpoints of the string to springs, with equilibrium at \(y=0\), see Fig. [fig:II:string3] for a sketch.

A string with endpoints fixed to springs.

Hooke’s law states that the force exerted by the spring (along the \(y\) axis) is \(F=-ku(0,t)\), where \(k\) is the spring constant. This must be balanced by the force the string exerts on the spring, which is equal to the tension \(T\) in the string. The component parallel to the \(y\) axis is \(T\sin\alpha\), where \(\alpha\) is the angle with the horizontal; see Fig. [fig:II:string3b].

The balance of forces at one endpoint of the string of Fig. [fig:II:string3].

For small \(\alpha\) we have \(\sin\alpha \approx \tan\alpha = \frac{\partial u}{\partial x}(0,t)\). Since both forces should cancel we find \[\begin{aligned} u(0,t) -\frac{T}{k}\, \frac{\partial u}{\partial x}(0,t) &= 0, \qquad t>0,\\ \intertext{and} u(a,t) -\frac{T}{k}\, \frac{\partial u}{\partial x}(a,t) &= 0, \qquad t>0.\end{aligned}\] These are mixed boundary conditions.
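As a consistency check, multiply the first condition by \(k\): \[k\,u(0,t) - T\,\frac{\partial u}{\partial x}(0,t) = 0.\] For a very stiff spring, \(k\to\infty\), this reduces to the Dirichlet condition \(u(0,t)=0\) of a fixed endpoint; for a very weak spring, \(k\to 0\), it reduces to the von Neumann condition \(\frac{\partial u}{\partial x}(0,t)=0\) of a freely floating endpoint.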