# 4.1: Boundary value problems

### 4.1.1 Boundary value problems

Before we tackle the Fourier series, we need to study the so-called boundary value problems (or endpoint problems). For example, suppose we have

\[ x'' + \lambda x = 0,~~~ x(a)=0, ~~~ x(b)=0,\]

for some constant \( \lambda\), where \( x(t)\) is defined for \(t\) in the interval \([a,b]\). Unlike before, when we specified the value of the solution and its derivative at a single point, we now specify the value of the solution at two different points. Note that \( x=0\) is a solution to this equation, so existence of solutions is not an issue here. Uniqueness of solutions is another issue. The general solution to \( x''+\lambda x = 0\) has two arbitrary constants present. It is, therefore, natural (but wrong) to believe that requiring two conditions guarantees a unique solution.

Example \(\PageIndex{1}\):

Take \( \lambda = 1, a=0, b=\pi\). That is,

\[ x''+x=0, ~~~x(0)=0, ~~~ x(\pi)=0.\]

Then \( x= \sin t\) is another solution (besides \(x=0\)) satisfying both boundary conditions. There are more. Write down the general solution of the differential equation, which is \( x=A \cos t+B \sin t\). The condition \( x(0)=0\) forces \(A=0\). Letting \(x(\pi)=0\) does not give us any more information as \( x=B \sin t\) already satisfies both boundary conditions. Hence, there are infinitely many solutions of the form \(x= B \sin t\), where \(B\) is an arbitrary constant.
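This non-uniqueness is easy to verify numerically. The following sketch (my own check, not part of the text) confirms that every multiple \(B \sin t\) satisfies both the equation and the boundary conditions:

```python
import numpy as np

# x = B sin t solves x'' + x = 0 with x(0) = x(pi) = 0 for EVERY B,
# so the boundary value problem has infinitely many solutions.
t = np.linspace(0, np.pi, 1001)

def residual(B):
    """Max of |x'' + x| for x = B sin t, using the exact second derivative."""
    x = B * np.sin(t)
    xpp = -B * np.sin(t)          # (B sin t)'' = -B sin t
    return np.max(np.abs(xpp + x))

for B in [0.0, 1.0, -3.0, 7.5]:
    assert residual(B) < 1e-12             # ODE satisfied
    assert abs(B * np.sin(0.0)) < 1e-12    # x(0) = 0
    assert abs(B * np.sin(np.pi)) < 1e-12  # x(pi) = 0
```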

Example \(\PageIndex{2}\):

On the other hand, change to \(\lambda =2\).

\[ x''+2x=0, ~~~ x(0)=0, ~~~ x(\pi)=0.\]

Then the general solution is \(x=A \cos(\sqrt2 t)+B \sin(\sqrt2 t)\). Letting \(x(0)=0\) still forces \(A=0\). We apply the second condition to find \( 0=x(\pi)=B \sin(\sqrt2 \pi)\). As \( \sin(\sqrt2 \pi) \neq 0\), we obtain \(B=0\). Therefore \(x=0\) is the unique solution to this problem.
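We can confirm this conclusion with a quick shooting experiment (an illustrative check, assuming standard `scipy` tooling): integrating from \(x(0)=0\) with any nonzero slope, the solution misses zero at \(t=\pi\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# For x'' + 2x = 0, x(0) = 0, the solutions are x = B sin(sqrt(2) t);
# since sin(sqrt(2) pi) != 0, none of them (except B = 0) vanish at pi.
root2 = np.sqrt(2.0)
assert abs(np.sin(root2 * np.pi)) > 0.3   # sin(sqrt(2) pi) is far from 0

# Shoot with x(0) = 0, x'(0) = 1, i.e. x = sin(sqrt(2) t)/sqrt(2).
sol = solve_ivp(lambda t, y: [y[1], -2.0 * y[0]], (0, np.pi), [0.0, 1.0],
                rtol=1e-10, atol=1e-12)
x_pi = sol.y[0, -1]
assert abs(x_pi - np.sin(root2 * np.pi) / root2) < 1e-6  # x(pi) != 0
```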

What is going on? We will be interested in finding which constants \(\lambda\) allow a nonzero solution, and we will be interested in finding those solutions. This problem is an analogue of finding eigenvalues and eigenvectors of matrices.

### 4.1.2 Eigenvalue problems

For basic Fourier series theory we will need the following three eigenvalue problems. We will consider more general equations, but we will postpone this until chapter 5.

\[ x''+ \lambda x=0,~~~ x(a)=0, ~~~x(b)=0, \tag{4.1.4}\]

\[ x''+ \lambda x=0,~~~ x'(a)=0, ~~~x'(b)=0, \tag{4.1.5}\]

and

\[ x''+ \lambda x=0,~~~ x(a)=x(b), ~~~x'(a)=x'(b). \tag{4.1.6}\]

A number \(\lambda\) is called an eigenvalue of (4.1.4) (resp. (4.1.5) or (4.1.6)) if and only if there exists a nonzero (not identically zero) solution to (4.1.4) (resp. (4.1.5) or (4.1.6)) for that specific \(\lambda\). Such a nonzero solution is called a corresponding eigenfunction.

Note the similarity to eigenvalues and eigenvectors of matrices. The similarity is not just coincidental. If we think of the equations as differential operators, then we are doing the same exact thing. For example, let \(L=- \frac{d^2}{dt^2}\). We are looking for nonzero functions \(f\) satisfying certain endpoint conditions that solve \( (L- \lambda)f = 0\). A lot of the formalism from linear algebra can still apply here, though we will not pursue this line of reasoning too far.

Example \(\PageIndex{3}\):

Let us find the eigenvalues and eigenfunctions of

\[ x''+ \lambda x=0,~~~ x(0)=0, ~~~x(\pi)=0. \]

For reasons that will be clear from the computations, we will have to handle the cases \(\lambda > 0, \lambda=0, \lambda<0\) separately. First suppose that \(\lambda > 0\), then the general solution to \( x''+ \lambda x=0\) is

\[ x=A \cos(\sqrt{\lambda}t)+B \sin(\sqrt{\lambda}t). \]

The condition \( x(0)=0\) implies immediately \(A=0\). Next

\[ 0=x(\pi)=B \sin(\sqrt{\lambda} \pi).\]

If \(B\) is zero, then \(x\) is not a nonzero solution. So to get a nonzero solution we must have that \( \sin( \sqrt{\lambda} \pi)=0\). Hence, \( \sqrt{\lambda} \pi\) must be an integer multiple of \(\pi\). In other words, \(\sqrt{\lambda}=k \) for a positive integer \(k\). Hence the positive eigenvalues are \(k^2\) for all integers \(k \geq 1 \). The corresponding eigenfunctions can be taken as \( x =\sin(kt)\). Just like for eigenvectors, we get all the multiples of an eigenfunction, so we only need to pick one.

Now suppose that \(\lambda=0\). In this case the equation is \( x''=0\) and the general solution is \( x=At+B\). The condition \(x(0)=0\) implies that \( B=0\), and \( x(\pi)=0\) implies that \( A=0\). This means that \( \lambda = 0\) is not an eigenvalue.

Finally, suppose that \( \lambda <0\). In this case we have the general solution

\[ x= A \cosh( \sqrt{- \lambda}t) +B \sinh( \sqrt{- \lambda} t).\]

Letting \(x(0)=0\) implies that \(A=0\) (recall \( \cosh0 =1\) and \( \sinh0=0\)). So our solution must be \( x=B \sinh(\sqrt{- \lambda} t)\) and satisfy \(x(\pi)=0\). This is only possible if \(B\) is zero. Why? Because \(\sinh \xi\) is only zero when \(\xi=0\). You should plot \(\sinh\) to see this fact. We can also see it from the definition of \(\sinh\): if \( 0 = \sinh \xi = \frac{e^{\xi}-e^{-\xi}}{2}\), then \(e^{\xi}=e^{-\xi}\), which implies \(\xi=-\xi\), and that is only true if \(\xi=0\). So there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are

\[ \lambda_k = k^2~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_k=\sin(kt) ~~~~ {\rm{~for~ all~ integers~}} k \geq 1.\]
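As a sanity check, we can approximate this eigenvalue problem by a finite-difference matrix (a standard discretization, my own illustrative sketch, not from the text): replacing \(-x''\) by second differences on a grid with \(x(0)=x(\pi)=0\) gives a symmetric matrix whose smallest eigenvalues approach \(1, 4, 9, \dots\)

```python
import numpy as np

# Discretize -x'' = lambda x on (0, pi), x(0) = x(pi) = 0, with the
# standard second-difference matrix on n interior grid points.  Its
# smallest eigenvalues approximate the true eigenvalues k^2.
n = 400
h = np.pi / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
evals = np.sort(np.linalg.eigvalsh(A))
# The first few discrete eigenvalues are close to 1, 4, 9 = k^2.
assert np.allclose(evals[:3], [1.0, 4.0, 9.0], atol=1e-2)
```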

Example \(\PageIndex{4}\):

Let us compute the eigenvalues and eigenfunctions of

\[ x''+ \lambda x=0,~~~ x'(0)=0,~~~x'(\pi)=0. \]

Again we will have to handle the cases \( \lambda > 0, \lambda=0, \lambda<0\) separately. First suppose that \( \lambda > 0\). The general solution to \( x'' + \lambda x=0\) is \( x=A \cos(\sqrt{\lambda}t)+ B \sin(\sqrt{\lambda}t)\). So

\[ x'=-A \sqrt{\lambda } \sin(\sqrt{\lambda }t)+B \sqrt{\lambda } \cos(\sqrt{\lambda }t).\]

The condition \( x'(0)=0\) implies immediately \( B=0\). Next

\[ 0=x'(\pi)=-A \sqrt{\lambda} \sin(\sqrt{\lambda} \pi).\]

Again \( A\) cannot be zero if \( \lambda\) is to be an eigenvalue, and \(\sin(\sqrt{\lambda} \pi)\) is only zero if \(\sqrt{\lambda}=k \) for a positive integer \(k\). Hence the positive eigenvalues are again \(k^2\) for all integers \(k \geq 1\). And the corresponding eigenfunctions can be taken as \( x= \cos(kt)\).

Now suppose that \( \lambda = 0\). In this case the equation is \( x''=0\) and the general solution is \( x=At +B\) so \(x'=A\). The condition \(x'(0)=0\) implies that \(A=0\). Now \(x'(\pi)=0\) also simply implies \(A=0\). This means that \(B\) could be anything (let us take it to be 1). So \(\lambda = 0\) is an eigenvalue and \(x=1\) is a corresponding eigenfunction.

Finally, let \( \lambda < 0\). In this case we have the general solution \( x=A \cosh(\sqrt{ - \lambda}t)+B \sinh(\sqrt{ - \lambda}t)\) and hence

\[ x' = A \sqrt{-\lambda} \sinh(\sqrt{ - \lambda}t) + B \sqrt{-\lambda} \cosh(\sqrt{ - \lambda}t). \]

We have already seen (with the roles of \(A\) and \(B\) switched) that for this to be zero at both \(t=0\) and \(t= \pi\), we must have \( A=B=0\). Hence there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are

\[ \lambda_k = k^2~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_k=\cos(kt) ~~~~ {\rm{~for~ all~ integers~}} k \geq 1,\]

and there is another eigenvalue

\[ \lambda_0 = 0~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_0= 1.\]
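The claimed eigenfunctions can be checked symbolically (an illustrative sketch using `sympy`, not part of the text): \(x_k = \cos(kt)\) satisfies the equation and both Neumann conditions for every positive integer \(k\).

```python
import sympy as sp

# Check symbolically that x = cos(k t) solves x'' + k^2 x = 0 with
# x'(0) = x'(pi) = 0, for an arbitrary positive integer k.
t = sp.symbols('t')
k = sp.symbols('k', integer=True, positive=True)
x = sp.cos(k * t)
assert sp.simplify(sp.diff(x, t, 2) + k**2 * x) == 0  # ODE holds
xp = sp.diff(x, t)                     # x' = -k sin(k t)
assert xp.subs(t, 0) == 0              # x'(0) = 0
assert sp.simplify(xp.subs(t, sp.pi)) == 0  # sin(k pi) = 0 for integer k
```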

The following problem is the one that leads to the general Fourier series.

Example \(\PageIndex{5}\):

Let us compute the eigenvalues and eigenfunctions of

\[ x''+ \lambda x=0,~~~~x(- \pi)=x(\pi),~~~~x'(- \pi)=x'(\pi).\]

Notice that we have not specified the values or the derivatives at the endpoints, but rather that they are the same at the beginning and at the end of the interval.

Let us skip \( \lambda < 0\). The computations are the same as before, and again we find that there are no negative eigenvalues.

For \( \lambda =0\), the general solution is \( x=At + B\). The condition \( x(- \pi)=x(\pi)\) implies that \(A=0\) (as \( -A \pi +B=A \pi+B\) forces \(A=0\)). The second condition \( x'(- \pi)=x'( \pi)\) says nothing about \(B\), and hence \( \lambda = 0\) is an eigenvalue with a corresponding eigenfunction \( x=1\).

For \( \lambda >0\) we get that \( x = A \cos(\sqrt{ \lambda}t) + B \sin(\sqrt{ \lambda}t) \). Now

\[ A \cos( - \sqrt{\lambda} \pi)+ B \sin( - \sqrt{\lambda} \pi) = A \cos( \sqrt{\lambda} \pi)+ B \sin( \sqrt{\lambda} \pi).\]

We remember that \( \cos(- \theta)=\cos( \theta)\) and \( \sin(- \theta)= - \sin( \theta)\). Therefore,

\[ A \cos( \sqrt{\lambda} \pi)- B \sin( \sqrt{\lambda} \pi) = A \cos( \sqrt{\lambda} \pi)+ B \sin( \sqrt{\lambda} \pi). \]

Hence either \( B=0\) or \( \sin(\sqrt{\lambda} \pi)=0\). Similarly (exercise) if we differentiate \(x\) and plug in the second condition we find that \( A=0\) or \( \sin(\sqrt{\lambda} \pi)=0\). Therefore, unless we want \(A\) and \(B\) to both be zero (which we do not) we must have \( \sin(\sqrt{\lambda} \pi)=0\). Hence, \( \sqrt{\lambda}\) is an integer and the eigenvalues are yet again \( \lambda = k^2\) for an integer \( k \geq 1\). In this case, however, \( x=A \cos(kt)+ B \sin(kt)\) is an eigenfunction for any \( A\) and any \(B\). So we have two linearly independent eigenfunctions \(\sin(kt)\) and \(\cos(kt)\). Remember that for a matrix we could also have had two eigenvectors corresponding to a single eigenvalue if the eigenvalue was repeated.

In summary, the eigenvalues and corresponding eigenfunctions are

\[ \lambda_k = k^2~~~~ {\rm{~with~ the~ eigenfunctions~}} ~~~~\cos(kt)~~~~ {\rm{and}}~~~~ \sin(kt) ~~~~ {\rm{~for~ all~ integers~}} k \geq 1, \\ \lambda_0 = 0~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_0=1.\]
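The doubled eigenvalues show up clearly in a discretized version of the periodic problem (my own finite-difference sketch, using a circulant second-difference matrix): the eigenvalue \(0\) is simple, while \(1, 4, 9, \dots\) each appear twice.

```python
import numpy as np

# Discretize -x'' = lambda x on [-pi, pi) with periodic boundary
# conditions: the second-difference matrix becomes circulant, and its
# eigenvalues approximate 0, 1, 1, 4, 4, ... (each positive one doubled).
n = 400
h = 2 * np.pi / n
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A[0, -1] -= 1 / h**2   # wrap-around entries enforce periodicity
A[-1, 0] -= 1 / h**2
evals = np.sort(np.linalg.eigvalsh(A))
assert abs(evals[0]) < 1e-8                       # simple eigenvalue 0
assert np.allclose(evals[1:5], [1, 1, 4, 4], atol=1e-2)  # doubled pairs
```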

### 4.1.3 Orthogonality of eigenfunctions

Something that will be very useful in the next section is the orthogonality property of the eigenfunctions. This is an analogue of the following fact about eigenvectors of a matrix: a matrix \(A\) is called symmetric if \(A=A^T\), and eigenvectors of a symmetric matrix corresponding to two distinct eigenvalues are orthogonal. The symmetry is required; we will not prove this fact here. The differential operators we are dealing with act much like a symmetric matrix. We, therefore, get the following theorem.

**Theorem 4.1.1.** Suppose that \( x_1(t)\) and \( x_2(t)\) are two eigenfunctions of the problem (4.1.4), (4.1.5) or (4.1.6) for two different eigenvalues \(\lambda_1\) and \(\lambda_2\). Then they are orthogonal in the sense that

\[ \int^b_a x_1(t)x_2(t)dt=0.\]

Note that the terminology comes from the fact that the integral is a type of inner product. We will expand on this in the next section. The theorem has a very short, elegant, and illuminating proof so let us give it here. First note that we have the following two equations.

\[ x''_1 +\lambda_1x_1=0~~~~ {\rm{and}}~~~~ x''_2+\lambda_2x_2 = 0.\]

Multiply the first by \( x_2\) and the second by \( x_1\) and subtract to get

\[ (\lambda_1- \lambda_2)x_1x_2=x''_2x_1-x_2x''_1.\]

Now integrate both sides of the equation.

\[ (\lambda_1- \lambda_2) \int^b_a x_1x_2\,dt=\int^b_a (x''_2x_1-x_2x''_1)\,dt \\ = \int^b_a \frac{d}{dt} \left(x'_2x_1-x_2x'_1\right)dt \\ = \left[ x'_2x_1-x_2x'_1\right]^b_{t=a} = 0.\]

The last equality holds because of the boundary conditions. For example, if we consider (4.1.4) we have \( x_1(a)=x_1(b)=x_2(a)=x_2(b)=0\) and so \( x'_2x_1-x_2x'_1\) is zero at both \(a\) and \(b\). As \( \lambda_1 \neq \lambda_2\), the theorem follows.
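The key product-rule step in the proof can be verified symbolically (an illustrative check with `sympy`): the integrand \(x''_2x_1-x_2x''_1\) really is the derivative of \(x'_2x_1-x_2x'_1\).

```python
import sympy as sp

# Verify the identity used in the proof:
#   (x2' x1 - x2 x1')' = x2'' x1 - x2 x1''
# for arbitrary smooth functions x1, x2.
t = sp.symbols('t')
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)
lhs = sp.diff(sp.diff(x2, t) * x1 - x2 * sp.diff(x1, t), t)
rhs = sp.diff(x2, t, 2) * x1 - x2 * sp.diff(x1, t, 2)
assert sp.simplify(lhs - rhs) == 0  # the cross terms x2' x1' cancel
```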

Exercise \(\PageIndex{1}\):

(easy)**.** Finish the proof of the theorem (check the last equality in the proof) for the cases (4.1.5) and (4.1.6).

We have seen previously that \( \sin(nt)\) is an eigenfunction for the problem \( x''+ \lambda x=0, x(0)=0, x(\pi)=0\). Hence we have the integral

\[ \int^{\pi}_0 \sin(mt) \sin(nt)dt=0,~~~~{\rm{when}}~ m \neq n.\]

Similarly

\[ \int^{\pi}_0 \cos(mt) \cos(nt)dt=0,~~~~{\rm{when}}~ m \neq n.\]

And finally we also get

\[ \int^{\pi}_{- \pi} \sin(mt) \sin(nt)dt=0,~~~~{\rm{when}}~ m \neq n,\]

\[ \int^{\pi}_{- \pi} \cos(mt) \cos(nt)dt=0,~~~~{\rm{when}}~ m \neq n,\]

and

\[ \int^{\pi}_{- \pi} \cos(mt) \sin(nt)dt=0. \]
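These orthogonality relations are easy to spot-check numerically (a sketch using `scipy.integrate.quad`; the specific pairs \(m, n\) are my own arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

# Spot-check the orthogonality integrals for a few pairs m != n,
# and the mixed sin/cos integral on [-pi, pi] for any m, n.
for m in (1, 2, 5):
    for n in (3, 4):
        s, _ = quad(lambda t: np.sin(m * t) * np.sin(n * t), 0, np.pi)
        c, _ = quad(lambda t: np.cos(m * t) * np.cos(n * t), 0, np.pi)
        assert abs(s) < 1e-10 and abs(c) < 1e-10   # m != n here
        mixed, _ = quad(lambda t: np.cos(m * t) * np.sin(n * t),
                        -np.pi, np.pi)
        assert abs(mixed) < 1e-10   # holds for all m, n
```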

### 4.1.4 Fredholm alternative

We now touch on a very useful theorem in the theory of differential equations. The theorem holds in a more general setting than we are going to state it, but for our purposes the following statement is sufficient. We will give a slightly more general version in chapter 5.

**Theorem 4.1.2** (Fredholm alternative)**.** Exactly one of the following statements holds. Either

\[ x'' + \lambda x=0,~~~~x(a)=0,~~~~x(b)=0 \]

has a nonzero solution, or

\[ x'' + \lambda x=f(t),~~~~x(a)=0,~~~~x(b)=0 \tag{4.1.32}\]

has a unique solution for every function \(f\) continuous on \([a,b]\).

The theorem is also true for the other types of boundary conditions we considered. The theorem means that if \( \lambda\) is not an eigenvalue, the nonhomogeneous equation (4.1.32) has a unique solution for every right hand side. On the other hand if \( \lambda\) is an eigenvalue, then (4.1.32) need not have a solution for every \(f\), and furthermore, even if it happens to have a solution, the solution is not unique.

We also want to reinforce the idea here that linear differential operators have much in common with matrices, so it is no surprise that there is a finite-dimensional version of the Fredholm alternative for matrices as well. Let \(A\) be an \(n \times n\) matrix. The Fredholm alternative then states that either \( (A- \lambda I) \vec{x}= \vec{0}\) has a nontrivial solution, or \( (A- \lambda I) \vec{x}= \vec{b}\) has a unique solution for every \( \vec{b}\).
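The matrix version is easy to see in action (a small illustrative example with a matrix of my own choosing):

```python
import numpy as np

# Fredholm alternative for a symmetric 2x2 matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# lambda = 3 IS an eigenvalue: (A - 3I)x = 0 has nontrivial solutions,
# so A - 3I is singular and (A - 3I)x = b is not solvable for every b.
M = A - 3.0 * np.eye(2)
assert abs(np.linalg.det(M)) < 1e-12

# lambda = 2.5 is NOT an eigenvalue: (A - 2.5I)x = b has a unique
# solution for every right-hand side b.
N = A - 2.5 * np.eye(2)
b = np.array([1.0, 0.0])
x = np.linalg.solve(N, b)
assert np.allclose(N @ x, b)
```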

A lot of intuition from linear algebra can be applied to linear differential operators, but one must be careful of course. For example, one difference we have already seen is that in general a differential operator will have infinitely many eigenvalues, while a matrix has only finitely many.

### 4.1.5 Application

Let us consider a physical application of an endpoint problem. Suppose we have a tightly stretched, quickly spinning elastic string or rope of uniform linear density \( \rho\). Let us put this problem into the \(xy\)-plane. The \(x\) axis represents the position on the string. The string rotates at angular velocity \( \omega\), so we will assume that the whole \(xy\)-plane rotates at angular velocity \( \omega\). We will assume that the string stays in this \(xy\)-plane and \(y\) will measure its deflection from the equilibrium position, \( y=0\), on the \(x\) axis. Hence, we will find a graph giving the shape of the string. We idealize the string to have no volume, to be just a mathematical curve. If we take a small segment and look at the tension at the endpoints, we see that this force is tangential, and we will assume that its magnitude is the same at both endpoints. Hence the magnitude is constant everywhere, and we will call it \(T\). If we assume that the deflection is small, then we can use Newton’s second law to get the equation

\[ Ty''+ \rho \omega^2y=0.\]

Let \(L\) be the length of the string, and suppose the string is fixed at the beginning and end points. Hence, \( y(0)=0\) and \(y(L)=0\). See Figure 4.1.

We rewrite the equation as \( y''+ \frac{\rho \omega^2}{T}y=0\). The setup is similar to Example 4.1.3, except for the interval length being \( L\) instead of \( \pi\). We are looking for eigenvalues of \( y''+ \lambda y=0, y(0)=0, y(L)=0\) where \( \lambda =\frac{\rho \omega^2}{T} \). As before there are no nonpositive eigenvalues. With \(\lambda>0\), the general solution to the equation is \(y=A \cos(\sqrt{\lambda}x)+B \sin(\sqrt{\lambda}x)\). The condition \( y(0)=0\) implies that \(A=0\) as before. The condition \(y(L)=0\) implies that \( \sin(\sqrt{\lambda}L)=0\) and hence \( \sqrt{\lambda}L=k \pi\) for some integer \( k>0\), so

\[ \frac{\rho \omega^2}{T}= \lambda=\frac{k^2 \pi^2}{L^2}. \]

What does this say about the shape of the string? It says that for all parameters \( \rho, \omega, T\) not satisfying the above equation, the string is in the equilibrium position, \(y=0\). When \( \frac{\rho \omega^2}{T}= \frac{k^2 \pi^2}{L^2}\), then the string will “pop out” some distance \(B\) at the midpoint. We cannot compute \(B\) with the information we have.

Let us assume that \( \rho\) and \(T\) are fixed and we are changing \( \omega\). For most values of \( \omega\) the string is in the equilibrium state. When the angular velocity \( \omega\) hits a value \( \omega= \frac{k \pi \sqrt{T}}{L \sqrt{ \rho}}\), then the string will pop out and will have the shape of a sine wave crossing the \( x\) axis \( k-1\) times between the endpoints. When \( \omega\) changes again, the string returns to the equilibrium position. You can see that the higher the angular velocity, the more times the popped-out string crosses the \(x\) axis.
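For concreteness, here is a short sketch computing the critical angular velocities for some sample (entirely hypothetical) parameter values \(\rho\), \(T\), \(L\):

```python
import numpy as np

# Critical angular velocities omega_k = k*pi*sqrt(T)/(L*sqrt(rho)),
# for hypothetical values of density, tension, and length.
rho, T, L = 0.5, 100.0, 2.0   # made-up sample parameters
omegas = [k * np.pi * np.sqrt(T) / (L * np.sqrt(rho)) for k in (1, 2, 3)]

# Each omega_k satisfies the eigenvalue condition
# rho*omega^2/T = k^2*pi^2/L^2 derived above.
for k, w in zip((1, 2, 3), omegas):
    assert abs(rho * w**2 / T - k**2 * np.pi**2 / L**2) < 1e-12
```

Note that the critical speeds are equally spaced: \(\omega_k = k\,\omega_1\), so doubling the spinning rate from the first critical speed lands exactly on the second.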

### Contributors

- Jiří Lebl (Oklahoma State University). These pages were supported by NSF grants DMS-0900885 and DMS-1362337.