
10.1: Temperature on a disk


    Bessel functions and two-dimensional problems

    Temperature on a disk

    Let us now turn to a different two-dimensional problem. A circular disk is prepared in such a way that its initial temperature is radially symmetric, \[u(\rho,\phi,t=0) = f(\rho).\] Then it is placed between two perfect insulators and its circumference is connected to a freezer that keeps it at \(0^\circ~\rm C\), as sketched in Fig. [fig:Tspherical].

A circular plate, insulated from above and below.

Since the initial conditions do not depend on \(\phi\), we expect the solution to be radially symmetric as well, \(u(\rho,t)\), which satisfies the equation (here \(c\) is the radius of the disk) \[\begin{aligned} \frac{\partial u}{\partial t} &=& k\left[\frac{\partial^2 u}{\partial \rho^2} + \frac{1}{\rho} \frac{\partial u}{\partial \rho} \right],\nonumber\\ u(c,t) &=&0,\nonumber\\ u(\rho,0) &=& f(\rho).\end{aligned}\]

The initial temperature in the disk.

    Once again we separate variables, \(u(\rho,t)=R(\rho)T(t)\), which leads to the equation

    \[\frac{1}{k} \frac{T'}{T} = \frac{R''+\frac{1}{\rho}R'}{R} = -\lambda.\] This corresponds to the two equations

\[\begin{aligned} \rho^2 R''+\rho R' + \lambda \rho^2 R &=& 0,\quad R(c)=0,\nonumber\\ T'+\lambda k T &=& 0.\end{aligned}\]

The radial equation (which has a regular singular point at \(\rho=0\)) is closely related to one of the most important equations of mathematical physics, Bessel’s equation. This equation can be reached from the substitution \(\rho = x /\sqrt{\lambda}\), so that with \(R(\rho)=X(x)\) we get the equation

    \[x^2 \frac{d^2}{dx^2} X(x) + x \frac{d}{dx} X(x) + x^2 X(x) = 0, \qquad X(\sqrt \lambda c) =0.\]

    Bessel’s equation

    Bessel’s equation of order \(\nu\) is given by \[x^2 y'' + x y' + (x^2-\nu^2) y = 0.\] Clearly \(x=0\) is a regular singular point, so we can solve by Frobenius’ method. The indicial equation is obtained from the lowest power after the substitution \(y=x^\gamma\), and is

    \[\gamma^2-\nu^2=0\]

So a generalized series solution gives two independent solutions if \(\nu \neq \tfrac{1}{2}n\) for integer \(n\). Now let us solve the problem explicitly, substituting the power series

\[y = x^\nu \sum_{m=0}^\infty a_m x^m.\]

    From Bessel’s equation we find

\[\sum_m (m+\nu)(m+\nu-1) a_m x^{m+\nu} +\sum_m (m+\nu) a_m x^{m+\nu} +\sum_m (x^2-\nu^2) a_m x^{m+\nu} = 0,\]

    which leads to

    \[[(m+\nu)^2-\nu^2] a_m= -a_{m-2}\] or \[a_m= -\frac{1}{m(m+2\nu)}a_{m-2}.\]

    If we take \(\nu=n>0\), we have

    \[a_m= -\frac{1}{m(m+2n)}a_{m-2}.\]

    This can be solved by iteration,

    \[\begin{aligned} a_{2k} &=& -\frac{1}{4}\frac{1}{k(k+n)}a_{2(k-1)}\nonumber\\ &=& \left(\frac{1}{4}\right)^2\frac{1}{k(k-1)(k+n)(k+n-1)}a_{2(k-2)} \nonumber\\ &=& \left(-\frac{1}{4}\right)^k\frac{n!}{k!(k+n)!}a_{0}.\end{aligned}\]

If we choose\(^1\) \(a_0 = \frac{1}{n!2^n}\) we find the Bessel function of order \(n\)

\[J_n(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!(k+n)!} \left(\frac{x}{2}\right)^{2k+n}.\]

There is a second, independent solution (which contains a logarithm), which diverges at \(x=0\): the Bessel function of the second kind, \(Y_n\).

A plot of the first three Bessel functions \(J_n\) and \(Y_n\).

    The general solution of Bessel’s equation of order \(n\) is a linear combination of \(J\) and \(Y\), \[y(x) = A J_n(x)+B Y_n(x).\]
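For the reader who wants to check the series numerically, here is a minimal sketch in Python (an illustration, not part of the original text); it sums the truncated power series for \(J_n\) and compares it with SciPy's implementation. The cutoff of 30 terms is an arbitrary but ample choice for moderate \(x\).

```python
from math import factorial
from scipy.special import jv  # SciPy's Bessel function of the first kind

def J_series(n, x, terms=30):
    """Sum the power series for J_n(x), truncated after `terms` terms."""
    return sum((-1)**k / (factorial(k) * factorial(k + n)) * (x / 2)**(2*k + n)
               for k in range(terms))

for n in (0, 1, 2):
    print(n, J_series(n, 3.7), jv(n, 3.7))  # the two columns should agree
```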

    Gamma function

For \(\nu\) not an integer the recursion relation for the Bessel function generates something very similar to factorials. These quantities are most easily expressed in terms of the Gamma function, defined as

    \[\Gamma(\nu) = \int_0^\infty e^{-t}t^{\nu-1} dt,\;\;\;\nu>0.\]

Some special properties of the \(\Gamma\) function now follow immediately:

\[\begin{aligned} \Gamma(1) & = & \int_0^\infty e^{-t}dt = \left.-e^{-t}\right|^\infty_0 =1-e^{-\infty}=1\nonumber\\ \Gamma(\nu) & = & \int_0^\infty e^{-t} t^{\nu-1} dt = -\int_0^\infty \frac{de^{-t}}{dt} t^{\nu-1} dt\nonumber\\ &=& \left.- e^{-t} t^{\nu-1} \right|^\infty_0 +(\nu-1) \int_0^\infty e^{-t} t^{\nu-2} dt \end{aligned}\]

The first term is zero, and we obtain \[\Gamma(\nu) = (\nu-1)\Gamma(\nu-1).\] From this we conclude that

\[\Gamma(2)=1\cdot1=1,\;\Gamma(3)=2\cdot1\cdot1=2,\; \Gamma(4)=3\cdot2\cdot1\cdot1=6,\;\Gamma(n) = (n-1)!.\]

Thus for integer argument the \(\Gamma\) function is nothing but a factorial, but it is also defined for other arguments. This is the sense in which \(\Gamma\) generalises the factorial to non-integer arguments. One should realize that once one knows the \(\Gamma\) function between the values of its argument of, say, 1 and 2, one can evaluate any value of the \(\Gamma\) function through recursion. Given that \(\Gamma(1.65) = 0.9001168163\) we find

    \[\Gamma(3.65) = 2.65\times1.65\times0.9001168163 = 3.935760779 .\]

Exercise: evaluate \(\Gamma(3)\), \(\Gamma(11)\), \(\Gamma(2.65)\).
Answer: \(\Gamma(3)=2!=2\), \(\Gamma(11)=10!=3628800\), \(\Gamma(2.65)=1.65\times0.9001168163=1.485192746\).
We also would like to determine the \(\Gamma\) function for \(\nu<1\). One can invert the recursion relation to read \[\Gamma(\nu-1) = \frac{\Gamma(\nu)}{\nu-1},\] so that, for example, \(\Gamma(0.7) = \Gamma(1.7)/0.7=0.909/0.7=1.30\).

What is \(\Gamma(\nu)\) for \(\nu<0\)? Let us repeat the recursion derived above and find \[\Gamma(-1.3) = \frac{\Gamma(-0.3)}{-1.3} = \frac{\Gamma(0.7)}{-1.3\times -0.3} = \frac{\Gamma(1.7)}{0.7\times-0.3\times -1.3}=3.33\;.\] This works for any value of the argument that is not an integer. If the argument is an integer we get into problems. Look at \(\Gamma(0)\). For small positive \(\epsilon\) \[\Gamma(\pm\epsilon)= \frac{\Gamma(1\pm\epsilon)}{\pm\epsilon} =\pm \frac{1}{\epsilon}\rightarrow \pm \infty.\] Thus \(\Gamma(n)\) is not defined for integers \(n\leq0\). This can be easily seen in the graph of the \(\Gamma\) function, Fig. [fig:Gamma].
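These recursions are easy to check numerically; the following is a small sketch (an illustration, not part of the original text) that reproduces the worked values above with SciPy's Gamma function.

```python
from scipy.special import gamma  # SciPy's Gamma function

# Gamma(3.65) from Gamma(1.65) by applying the recursion twice
print(gamma(3.65), 2.65 * 1.65 * gamma(1.65))        # both ~3.935760779
# inverted recursion, below nu = 1
print(gamma(0.7), gamma(1.7) / 0.7)                  # both ~1.2981
# negative, non-integer argument
print(gamma(-1.3), gamma(1.7) / (0.7 * -0.3 * -1.3)) # both ~3.33
```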

A graph of the \(\Gamma\) function (solid line). The inverse \(1/\Gamma\) is also included (dashed line). Note that this last function is continuous everywhere.

Finally, in physical problems one often uses \(\Gamma(1/2)\), \[\Gamma(\tfrac{1}{2}) = \int_0^\infty e^{-t}t^{-1/2} dt = 2\int_0^\infty e^{-t}dt^{1/2} = 2\int_0^\infty e^{-x^2}dx.\] This can be evaluated by a very smart trick: we first evaluate \(\Gamma(1/2)^2\) using polar coordinates, \[\begin{aligned} \Gamma(\tfrac{1}{2})^2 &=& 4 \int_0^\infty e^{-x^2}dx\int_0^\infty e^{-y^2}dy \nonumber\\ &=& 4 \int_0^{\pi/2}\int_0^\infty e^{-\rho^2}\rho\, d\rho\, d\phi = \pi.\end{aligned}\] (See the discussion of polar coordinates in Sec. [sec:polc].) We thus find \[\Gamma(1/2)=\sqrt{\pi},\;\;\Gamma(3/2)=\tfrac{1}{2}\sqrt{\pi},\;\;{\rm etc.}\]
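As a sketch of a numerical cross-check (again an illustration, not part of the text), the defining integral can be evaluated with adaptive quadrature and compared against \(\sqrt{\pi}\):

```python
import numpy as np
from scipy.integrate import quad  # adaptive quadrature
from scipy.special import gamma

# Gamma(1/2) = integral of e^{-t} t^{-1/2} from 0 to infinity;
# quad handles both the integrable singularity at 0 and the infinite limit.
val, err = quad(lambda t: np.exp(-t) / np.sqrt(t), 0, np.inf)
print(val, gamma(0.5), np.sqrt(np.pi))  # all ~1.7724538509
```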

    Bessel functions of general order

The recurrence relation for the Bessel function of general order \(\pm\nu\), \[a_{m} = -\frac{1}{m(m\pm 2\nu)} a_{m-2},\] can now be solved by using the Gamma function, and has the solutions (\(x > 0\)) \[\begin{aligned} J_{\nu}(x) &=& \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\Gamma(\nu+k+1)} \left(\frac{x}{2}\right)^{\nu+2k}, \\ J_{-\nu}(x) &=& \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\Gamma(-\nu+k+1)} \left(\frac{x}{2}\right)^{-\nu+2k}.\end{aligned}\] The general solution to Bessel’s equation of order \(\nu\) is thus \[y(x) = A J_{\nu}(x)+BJ_{-\nu}(x),\] for any non-integer value of \(\nu\). This also holds for half-integer values of \(\nu\) (in that case no logarithms appear).

    Properties of Bessel functions

Bessel functions have many interesting properties: \[\begin{aligned} J_{0}(0) &= 1,\\ J_{\nu}(0) &= 0\quad\text{(if $\nu>0$),}\\ J_{-n}(x) &= (-1)^{n }J_{n}(x),\\ \frac{d}{dx} \left[x^{-\nu}J_{\nu}(x) \right] &= -x^{-\nu}J_{\nu+1}(x),\\ \frac{d}{dx} \left[x^{\nu}J_{\nu}(x) \right] &= x^{\nu}J_{\nu-1}(x),\\ \frac{d}{dx} \left[J_{\nu}(x) \right] &=\tfrac{1}{2}\left[J_{\nu-1}(x)-J_{\nu+1}(x)\right],\\ x J_{\nu+1}(x) &= 2 \nu J_{\nu}(x) -x J_{\nu-1}(x),\\ \int x^{-\nu}J_{\nu+1}(x)\,dx &= -x^{-\nu}J_{\nu}(x)+C,\\ \int x^{\nu}J_{\nu-1}(x)\,dx &= x^{\nu}J_{\nu}(x)+C.\end{aligned}\]

    Let me prove a few of these. First notice from the definition that \(J_{n}(x)\) is even or odd if \(n\) is even or odd,

    \[J_{n}(x) = \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!(n+k)!} \left(\frac{x}{2}\right)^{n+2k}.\]

Substituting \(x=0\) in the definition of the Bessel function gives \(0\) if \(\nu >0\), since in that case we have a sum of positive powers of \(0\), which all vanish.

    Let’s look at \(J_{-n}\):

\[\begin{aligned} J_{-n}(x) &=& \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\Gamma(-n+k+1)} \left(\frac{x}{2}\right)^{-n+2k}\nonumber\\ &=& \sum_{k=n}^{\infty}\frac{(-1)^{k}}{k!\Gamma(-n+k+1)} \left(\frac{x}{2}\right)^{-n+2k}\nonumber\\ &=& \sum_{l=0}^{\infty}\frac{(-1)^{l+n}}{(l+n)!\,l!} \left(\frac{x}{2}\right)^{n+2l}\nonumber\\ &=& (-1)^{n} J_{n}(x).\end{aligned}\]

    Here we have used the fact that since \(\Gamma(-l) = \pm \infty\), \(1/\Gamma(-l) = 0\) [this can also be proven by defining a recurrence relation for \(1/\Gamma(l)\)]. Furthermore we changed summation variables to \(l=-n+k\).
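This reflection property is easy to verify numerically, since SciPy's jv accepts negative orders; a minimal sketch:

```python
from scipy.special import jv

# Check J_{-n}(x) = (-1)^n J_n(x) at an arbitrary point
x = 2.2
for n in (1, 2, 3):
    print(jv(-n, x), (-1)**n * jv(n, x))  # pairs should agree
```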

    The next one:

    \[\begin{aligned} \frac{d}{dx} \left[x^{-\nu}J_{\nu}(x) \right] &=& 2^{-\nu}\frac{d}{dx} \left\{ \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\Gamma(\nu+k+1)} \left(\frac{x}{2}\right)^{2k} \right\} \nonumber\\&=& 2^{-\nu} \sum_{k=1}^{\infty}\frac{(-1)^{k}}{(k-1)!\Gamma(\nu+k+1)} \left(\frac{x}{2}\right)^{2k-1} \nonumber\\&=& -2^{-\nu} \sum_{l=0}^{\infty}\frac{(-1)^{l}}{(l)!\Gamma(\nu+l+2)} \left(\frac{x}{2}\right)^{2l+1} \nonumber\\&=& -2^{-\nu} \sum_{l=0}^{\infty}\frac{(-1)^{l}}{(l)!\Gamma(\nu+1+l+1)} \left(\frac{x}{2}\right)^{2l+1} \nonumber\\&=& -x^{-\nu} \sum_{l=0}^{\infty}\frac{(-1)^{l}}{(l)!\Gamma(\nu+1+l+1)} \left(\frac{x}{2}\right)^{2l+\nu+1} \nonumber\\&=& -x^{-\nu}J_{\nu+1}(x).\end{aligned}\] Similarly \[\begin{aligned} \frac{d}{dx} \left[x^{\nu}J_{\nu}(x) \right] &=&x^{\nu}J_{\nu-1}(x).\end{aligned}\]

    The next relation can be obtained by evaluating the derivatives in the two equations above, and solving for \(J_{\nu}(x)\):

\[\begin{aligned} x^{-\nu}J'_{\nu}(x)-\nu x^{-\nu-1}J_{\nu}(x)&=& -x^{-\nu}J_{\nu+1}(x),\\ x^{\nu}J'_{\nu}(x)+\nu x^{\nu-1}J_{\nu}(x)&=&x^{\nu}J_{\nu-1}(x).\end{aligned}\]

Multiply the first equation by \(x^{\nu}\) and the second one by \(x^{-\nu}\) and subtract:

\[\begin{aligned} -2\nu \frac{1}{x}J_{\nu}(x) = -J_{\nu+1}(x)-J_{\nu-1}(x).\end{aligned}\]

    After rearrangement of terms this leads to the desired expression.

Eliminating \(J_{\nu}\) between the equations gives (same multiplication, take the sum instead) \[\begin{aligned} 2 J'_{\nu}(x) &=&J_{\nu-1}(x)-J_{\nu+1}(x).\end{aligned}\]

    Integrating the differential relations leads to the integral relations.
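As a final sanity check on these relations, here is a small numerical sketch (an illustration using SciPy, not part of the text) at an arbitrary order and argument:

```python
from scipy.special import jv, jvp  # jvp gives the derivative of J

nu, x = 2.5, 1.7
# recurrence: x J_{nu+1} = 2 nu J_nu - x J_{nu-1}
print(x * jv(nu + 1, x), 2 * nu * jv(nu, x) - x * jv(nu - 1, x))
# derivative relation: J_nu' = (J_{nu-1} - J_{nu+1}) / 2
print(jvp(nu, x), 0.5 * (jv(nu - 1, x) - jv(nu + 1, x)))
```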

Bessel functions are an inexhaustible subject: there are always more useful properties than one knows. In mathematical physics one often uses specialist books.

    Sturm-Liouville theory

In the end we shall want to write a solution to an equation as a series of Bessel functions. In order to do that we shall need to understand the orthogonality of Bessel functions, just as sines and cosines were orthogonal. This is most easily done by developing a mathematical tool called Sturm-Liouville theory. It starts from an equation in the so-called self-adjoint form

    \[[r(x) y'(x)]' + [p(x)+\lambda s(x)] y(x) = 0 \label{eq:selfadj}\]

    where \(\lambda\) is a number, and \(r(x)\) and \(s(x)\) are greater than 0 on \([a,b]\). We apply the boundary conditions

    \[\begin{aligned} a_1 y(a)+ a_2 y'(a)&=&0,\nonumber\\ b_1 y(b)+ b_2 y'(b)&=&0,\end{aligned}\]

with \(a_1\) and \(a_2\) not both zero, and similarly for \(b_1\) and \(b_2\).

    If there is a solution to (\ref{eq:selfadj}) then \(\lambda\) is real.

    Assume that \(\lambda\) is a complex number (\(\lambda = \alpha + i \beta\)) with solution \(\Phi\). By complex conjugation we find that

    \[\begin{aligned} [r(x) \Phi'(x)]' + [p(x)+\lambda s(x)] \Phi(x) &=& 0\nonumber\\ {}[r(x) (\Phi^*)'(x)]' + [p(x)+\lambda^* s(x)] (\Phi^*)(x) &=& 0\end{aligned}\]

where \(*\) denotes complex conjugation. Multiply the first equation by \(\Phi^*(x)\) and the second by \(\Phi(x)\), and subtract the two equations: \[(\lambda^*-\lambda)s(x) \Phi^*(x)\Phi(x)=\Phi(x)[r(x) (\Phi^*)'(x)]'-\Phi^*(x)[r(x) \Phi'(x)]'.\] Now integrate over \(x\) from \(a\) to \(b\) and find

    \[(\lambda^*-\lambda)\int_a^bs(x) \Phi^*(x)\Phi(x)\,dx = \int_a^b\Phi(x)[r(x) (\Phi^*)'(x)]'-\Phi^*(x)[r(x) \Phi'(x)]'\,dx\]

    The second part can be integrated by parts, and we find

\[\begin{aligned} (\lambda^*-\lambda)\int_a^b s(x) \Phi^*(x)\Phi(x)\,dx &=& \left[\Phi(x) r(x) (\Phi^*)'(x)-\Phi^*(x)r(x) \Phi'(x)\right]^b_a \nonumber\\ &=& r(b)\left[\Phi(b) (\Phi^*)'(b)-\Phi^*(b)\Phi'(b)\right] \nonumber\\&& -r(a)\left[\Phi(a) (\Phi^*)'(a)-\Phi^*(a)\Phi'(a)\right] \nonumber\\ &=&0,\end{aligned}\]

where the last step can be done using the boundary conditions. Since both \(\Phi^*(x)\Phi(x)\) and \(s(x)\) are greater than zero we conclude that \(\int_a^bs(x) \Phi^*(x)\Phi(x)\,dx > 0\), which can now be divided out of the equation to lead to \(\lambda=\lambda^*\).

    Let \(\Phi_n\) and \(\Phi_m\) be two solutions for different values of \(\lambda\), \(\lambda_n\neq \lambda_m\), then \[\int_a^b s(x) \Phi_n(x) \Phi_m(x)\,dx = 0.\]

    The proof is to a large extent identical to the one above: multiply the equation for \(\Phi_n(x)\) by \(\Phi_m(x)\) and vice-versa. Subtract and find \[(\lambda_n-\lambda_m)\int_a^b s(x) \Phi_m(x)\Phi_n(x)\,dx = 0\] which leads us to conclude that \[\int_a^b s(x) \Phi_n(x) \Phi_m(x)\,dx = 0.\]

    Under the conditions set out above

1. There exists a real infinite set of eigenvalues \(\lambda_0,\ldots,\lambda_n, \ldots\) with \(\lim_{n\rightarrow \infty} \lambda_n = \infty\).
    2. If \(\Phi_n\) is the eigenfunction corresponding to \(\lambda_n\), it has exactly \(n\) zeroes in \([a,b]\). No proof shall be given.

Clearly the Bessel equation is of self-adjoint form: rewrite \[x^2 y'' + xy' + (x^2-\nu^2) y = 0\] as (divide by \(x\)) \[[x y']' + \left(x-\frac{\nu^2}{x}\right) y = 0.\] We cannot identify \(\nu\) with \(\lambda\), and we do not have a positive weight function, so the theorem above does not apply directly. It can nevertheless be proven from properties of the equation that the Bessel functions have an infinite number of zeroes on the interval \([0,\infty)\). A small list of these:

\[\begin{array}{lclllll} J_0 &:& 2.40&5.52 &8.65&11.79&\ldots\\ J_{1/2} &:& \pi & 2 \pi & 3\pi & 4\pi & \ldots\\ J_{8} &:& 12.23 & 16.04 & 19.55 & 22.95 & \ldots \end{array}\]
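These zeroes are tabulated in SciPy; here is a small sketch (an illustration, not part of the text) that reproduces the table:

```python
import numpy as np
from scipy.special import jn_zeros  # first zeros of integer-order J_n

print(jn_zeros(0, 4))           # ~[ 2.40  5.52  8.65 11.79]
print(jn_zeros(8, 4))           # ~[12.23 16.04 19.55 22.95]
print(np.pi * np.arange(1, 5))  # J_{1/2} is proportional to sin(x)/sqrt(x),
                                # so its zeros sit exactly at n*pi
```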

    Our initial problem and Bessel functions

We started the discussion from the problem of the temperature on a circular disk, solved in polar coordinates. Since the initial conditions do not depend on \(\phi\), we expect the solution to be radially symmetric as well, \(u(\rho,t)\), which satisfies the equation

\[\begin{aligned} \frac{\partial u}{\partial t} &=& k\left[\frac{\partial^2 u}{\partial \rho^2} + \frac{1}{\rho} \frac{\partial u}{\partial \rho} \right],\nonumber\\ u(c,t) &=&0,\nonumber\\ u(\rho,0) &=& f(\rho).\end{aligned}\]

    With \(u(\rho,t)=R(\rho)T(t)\) we found the equations

\[\begin{aligned} \rho^2 R''+\rho R' + \lambda \rho^2 R &=& 0,\quad R(c)=0,\nonumber\\ T'+\lambda k T &=& 0.\end{aligned}\]

The equation for \(R\) is clearly self-adjoint: it can be written as

    \[[\rho R']' + \lambda \rho R = 0\] So how does the equation for \(R\) relate to Bessel’s equation? Let us make the change of variables \(x= \sqrt{\lambda} \rho\). We find \[\frac{d}{d\rho} = \sqrt{\lambda} \frac{d}{dx},\] and we can remove a common factor \(\sqrt{\lambda}\) to obtain (\(X(x)=R(\rho)\))

    \[[x X']' + x X = 0,\]

which is Bessel’s equation of order \(0\). Since the solution must remain finite at \(\rho=0\), the second solution \(Y_0\) is excluded, and

    \[R(\rho) = J_0(\rho \sqrt{\lambda}).\]

    The boundary condition \(R(c)=0\) shows that \[c \sqrt{\lambda_n} = x_n,\]

where \(x_n\) are the points where \(J_0(x)=0\). We thus conclude \[R_n(\rho) = J_0(\rho \sqrt{\lambda_n}).\] The first five solutions \(R_n\) (for \(c=1\)) are shown in Fig. [fig:Rn].

A graph of the first five functions \(R_n\).
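A sketch of how these eigenvalues come out numerically (an illustration, not part of the text), with \(c=1\) as in the figure:

```python
from scipy.special import jn_zeros

c = 1.0
x_n = jn_zeros(0, 5)       # first five zeros of J_0
lambda_n = (x_n / c)**2    # eigenvalues of the radial problem
print(lambda_n)            # ~[  5.78  30.47  74.89 139.04 222.93]
```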

From Sturm-Liouville theory we conclude that \[\int_0^c \rho\, d\rho \,R_n(\rho)R_m(\rho)=0\;\;{\rm if\ }n \neq m.\] Together with the solution for the \(T\) equation, \[T_n(t)= \exp(-\lambda_n k t)\] we find a Fourier-Bessel series type solution \[u(\rho,t) = \sum_{n=1}^\infty A_n J_0(\rho\sqrt{\lambda_n})\exp(-\lambda_n k t),\] with \(\lambda_n= (x_n/c)^2\).

    In order to understand how to determine the coefficients \(A_n\) from the initial condition \(u(\rho,0)=f(\rho)\) we need to study Fourier-Bessel series in a little more detail.

    Fourier-Bessel series

    So how can we determine in general the coefficients in the Fourier-Bessel series

    \[f(\rho) = \sum_{j=1}^\infty C_j J_\nu(\alpha_j \rho)?\]

    The corresponding self-adjoint version of Bessel’s equation is easily found to be (with \(R_j(\rho) = J_\nu(\alpha_j\rho)\))

    \[(\rho R_j')' + (\alpha_j^2\rho - \frac{\nu^2}{\rho}) R_j = 0.\]

where we assume that \(f\) and \(R_j\) satisfy the boundary conditions

    \[\begin{aligned} b_1 f(c) + b_2 f'(c) &=& 0 \nonumber\\ b_1 R_j(c) + b_2 R_j'(c) & = & 0 \end{aligned}\]

From Sturm-Liouville theory we do know that \[\int_0^c \rho J_\nu(\alpha_i \rho) J_\nu(\alpha_j \rho)\, d\rho = 0 \;\;{\rm if\ }i \neq j,\] but we shall also need the values when \(i=j\)!

Let us use the self-adjoint form of the equation, multiply by \(2 \rho R_j'\), and integrate over \(\rho\) from \(0\) to \(c\),

    \[\int_0^c \left[(\rho R_j')' + (\alpha_j^2\rho - \frac{\nu^2}{\rho}) R_j \right]2 \rho R_j' d\rho = 0.\]

    This can be brought to the form (integrate the first term by parts, bring the other two terms to the right-hand side)

    \[\begin{aligned} \int_0^c \frac{d}{d\rho}\left(\rho R_j'\right)^2 d\rho &=& 2 \nu^2 \int_0^c R_j R_j' d\rho -2\alpha_j^2 \int_0^c \rho^2 R_j R_j' d\rho\\ \left.\left(\rho R_j'\right)^2\right|^c_0 &=& \left.\nu^2 R_j^2\right|^c_0 - 2\alpha_j^2 \int_0^c \rho^2 R_j R_j' d\rho.\end{aligned}\]

The last integral can now be done by parts: \[\begin{aligned} 2 \int_0^c\rho^2 R_j R_j' d\rho &=& -2 \int_0^c\rho R_j^2 d\rho +\left. \rho^2 R_j^2 \right|^c_0.\end{aligned}\]

    So we finally conclude that

\[2 \alpha_j^2 \int_0^c\rho R_j^2 d\rho = \left[\left(\alpha_j^2\rho^2 - \nu^2 \right) R_j^2+\left(\rho R_j'\right)^2 \right]^c_0.\]

In order to make life not too complicated we shall only look at boundary conditions where \(f(c)=R(c)=0\). The other cases (mixed, or purely \(f'(c)=0\)) are treated very similarly. Using the fact that \(R_j(\rho) = J_\nu(\alpha_j \rho)\), we find \[R'_j=\alpha_j J'_\nu(\alpha_j \rho).\] We conclude that

\[\begin{aligned} 2 \alpha_j^2 \int_0^c\rho R_j^2 d\rho &=& \left[ \left(\rho \alpha_j J'_\nu(\alpha_j\rho)\right)^2 \right]^c_0 \nonumber\\ &=& {c^2}\alpha_j^2 \left(J'_\nu(\alpha_j c)\right)^2 \nonumber\\ &=& {c^2}\alpha_j^2 \left(\frac{\nu}{\alpha_j c} J_\nu(\alpha_j c) -J_{\nu+1}(\alpha_j c)\right)^2 \nonumber\\ &=& {c^2}\alpha_j^2 \left( J_{\nu+1}(\alpha_j c)\right)^2 .\end{aligned}\]

We thus finally have the result \[\int_0^c\rho R_j^2 d\rho=\frac{c^2}{2} J^2_{\nu+1}(\alpha_j c).\]
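Both the orthogonality relation and this norm can be checked by direct numerical integration; the following sketch (an illustration, not part of the text) uses \(\nu=3\) and \(c=10\), matching the example below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

nu, c = 3, 10.0
alpha = jn_zeros(nu, 3) / c  # chosen so that J_nu(alpha_j * c) = 0

for i in range(3):
    for j in range(3):
        val, _ = quad(lambda r: r * jv(nu, alpha[i]*r) * jv(nu, alpha[j]*r),
                      0, c)
        expected = c**2 / 2 * jv(nu + 1, alpha[j]*c)**2 if i == j else 0.0
        print(i, j, round(val, 6), round(expected, 6))
```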

    Consider the function

    \[f(x) = \left\{\begin{array}{ll} x^3 &\;\;0<x<10\\ 0&\;\;x>10 \end{array}\right.\]

Expand this function in a Fourier-Bessel series using \(J_3\), with the \(\alpha_j\) determined by \(J_3(10\,\alpha_j)=0\). From our definitions we find that

    \[f(x) = \sum_{j=1}^\infty A_j J_3(\alpha_j x),\]

    with

\[\begin{aligned} A_j &=& \frac{2}{100 J_4(10\alpha_j)^2} \int_0^{10} x^3\, J_3(\alpha_j x)\, x\, dx\nonumber\\ &=&\frac{2}{100 J_4(10\alpha_j)^2} \frac{1}{\alpha^5_j} \int_0^{10\alpha_j} s^4 J_3(s)\, ds\nonumber\\ &=&\frac{2}{100 J_4(10\alpha_j)^2} \frac{1}{\alpha^5_j} (10 \alpha_j)^4 J_4(10\alpha_j )\nonumber\\ &=& \frac{200}{\alpha_j J_4(10 \alpha_j)}.\end{aligned}\]

Using \(\alpha_j=\ldots\), we find that the first five values of \(A_j\) are \(1050.95,-821.503,703.991,-627.577,572.301\). The first five partial sums are plotted in Fig. [fig:xcube].
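A sketch that reproduces these coefficients (an illustration, not part of the text):

```python
from scipy.special import jv, jn_zeros

alpha = jn_zeros(3, 5) / 10.0         # so that J_3(10 alpha_j) = 0
A = 200 / (alpha * jv(4, 10 * alpha))
print(A)  # ~[1050.95 -821.50  703.99 -627.58  572.30]
```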

A graph of the first five partial sums for \(x^3\) expressed in \(J_3\).

    Back to our initial problem

After all that we have learned, we know that in order to determine the solution of the initial problem in Sec. 1 we would have to calculate the integrals \[A_j = \frac{2}{c^2 J_1^2(c \alpha_j)}\int_0^c f(\rho) J_0(\alpha_j \rho) \rho\, d\rho.\]
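To close, a minimal sketch of the whole construction (an illustration, not part of the text), assuming a made-up initial profile \(f(\rho) = 1-(\rho/c)^2\); the coefficients follow the formula above and the series is truncated at 20 terms.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

c, k, N = 1.0, 1.0, 20
alpha = jn_zeros(0, N) / c     # J_0(alpha_j * c) = 0
lam = alpha**2                 # eigenvalues lambda_j

f = lambda r: 1 - (r / c)**2   # assumed initial temperature (not from the text)

# Fourier-Bessel coefficients A_j from the formula above
A = np.array([2 / (c**2 * jv(1, c * a)**2) *
              quad(lambda r: f(r) * jv(0, a * r) * r, 0, c)[0]
              for a in alpha])

def u(rho, t):
    """Truncated Fourier-Bessel series for the temperature."""
    return sum(A_j * jv(0, a * rho) * np.exp(-l * k * t)
               for A_j, a, l in zip(A, alpha, lam))

print(u(0.0, 0.0), f(0.0))  # at t=0 the series should reproduce f
```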


1. This can be done since Bessel’s equation is linear, i.e., if \(g(x)\) is a solution, then \(C g(x)\) is also a solution.