
# 4.1: Fourier Series

In this chapter we shall discuss Fourier series. These infinite series occur in many different areas of physics: in electromagnetic theory, electronics, wave phenomena, and many others. They have some similarity to, but are very different from, the Taylor series you have encountered before.

## Taylor series

One series you have encountered before is the Taylor series, $f(x) = \sum_{n=0}^{\infty} f^{(n)}(a)\frac{(x-a)^n}{n!}, \label{eq:IV:taylor}$ where $$f^{(n)}$$ is the $$n$$th derivative of $$f$$. An example is the Taylor series of the cosine around $$x=0$$ (i.e., $$a=0$$): \begin{aligned} \cos(x) &= \cos(x), & \cos(0) &= 1,\nonumber\\ \cos'(x) &= -\sin(x), & \cos'(0) &= 0,\nonumber\\ \cos^{(2)}(x) &= -\cos(x), & \cos^{(2)}(0) &= -1,\\ \cos^{(3)}(x) &= \sin(x), & \cos^{(3)}(0) &= 0,\nonumber\\ \cos^{(4)}(x) &= \cos(x), & \cos^{(4)}(0) &= 1.\nonumber\end{aligned} Notice that after four derivatives we are back where we started. Since only the even derivatives are non-zero at $$x=0$$, we have thus found (substituting $$n=2m$$ in ([eq:IV:taylor])) $\cos x = \sum_{m=0}^\infty \frac{(-1)^m}{(2m)!} x^{2m}.$ Show that $\sin x = \sum_{m=0}^\infty \frac{(-1)^m}{(2m+1)!} x^{2m+1}.$
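As a quick numerical check of this series, the following sketch (our own helper names, not from the text) compares its partial sums with the library cosine:

```python
import math

def cos_taylor(x, terms):
    """Partial sum of the Taylor series of cos around 0:
    sum_{m=0}^{terms-1} (-1)^m x^(2m) / (2m)!"""
    return sum((-1) ** m * x ** (2 * m) / math.factorial(2 * m)
               for m in range(terms))

# Eight terms (powers up to x^14) already agree with math.cos
# to roughly 12 decimal places for |x| <= 1.
for x in (0.0, 0.5, 1.0):
    print(x, cos_taylor(x, 8), math.cos(x))
```

Note how quickly the factorial in the denominator makes successive terms negligible near $$x=0$$.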

## Introduction to Fourier Series

Rather than Taylor series, which are supposed to work for “any” function, we shall study periodic functions. For periodic functions the French mathematician Joseph Fourier introduced a series in terms of sines and cosines, $f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_n \cos(nx)+b_n\sin(nx)].$ We shall study how and when a function can be described by a Fourier series. One of the very important differences with Taylor series is that Fourier series can be used to approximate discontinuous functions as well as continuous ones.

## Periodic functions

We first need to define a periodic function. A function is called periodic with period $$p$$ if $$f(x+p)=f(x)$$ for all $$x$$ (the function need not be defined everywhere). A simple example is the function $$f(x) = \sin(bx)$$, which is periodic with period $$(2\pi)/b$$. Of course it is also periodic with period $$(4\pi)/b$$. In general a function with period $$p$$ is periodic with period $$2p,3p,\ldots$$. This can easily be seen by applying the definition of periodicity repeatedly, each time subtracting $$p$$ from the argument: $f(x+3p)=f(x+2p)=f(x+p)=f(x).$ The smallest positive value of $$p$$ for which $$f$$ is periodic is called the (primitive) period of $$f$$.

What is the primitive period of $$\sin(4x)$$?

$$\pi/2$$.
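The defining property is easy to test numerically. A minimal sketch (the helper `is_period` is ours; a sampled check, not a proof):

```python
import math

def is_period(f, p, tol=1e-9):
    """Numerically test f(x + p) == f(x) on a grid of sample points."""
    return all(abs(f(x + p) - f(x)) < tol
               for x in (0.01 * i for i in range(1000)))

f = lambda x: math.sin(4 * x)
print(is_period(f, math.pi / 2))  # True: the primitive period
print(is_period(f, math.pi))      # True: any multiple of the period works
print(is_period(f, math.pi / 3))  # False: not a period
```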

## Orthogonality and normalisation

Consider the series $\frac{a_0}{2} + \sum_{n=1}^\infty \left[a_n \cos\left(\frac{n\pi x}{L}\right)+ b_n\sin\left(\frac{n\pi x}{L}\right)\right],\qquad-L \leq x \leq L.$ This is called a trigonometric series. If the series approximates a function $$f$$ (as will be discussed) it is called a Fourier series, and the $$a_n$$ and $$b_n$$ are the Fourier coefficients of $$f$$.

In order for all of this to make sense we first study the functions $\{1,\cos\left(\frac{n\pi x}{L}\right),\sin\left(\frac{n\pi x}{L}\right)\} ,\qquad n=1,2,\ldots\quad,$ and especially their properties under integration. We find that \begin{aligned} \int_{-L}^{L} 1 \cdot 1\, dx &=& 2L,\\ \int_{-L}^{L} 1 \cdot \cos\left(\frac{n\pi x}{L}\right)\, dx &=& 0,\\ \int_{-L}^{L} 1 \cdot \sin\left(\frac{n\pi x}{L}\right)\, dx &=& 0,\\ \int_{-L}^{L} \cos\left(\frac{m\pi x}{L}\right) \cdot \cos\left(\frac{n\pi x}{L}\right)\, dx &=& \frac{1}{2} \int_{-L}^{L}\cos\left(\frac{(m+n)\pi x}{L}\right)+\cos\left(\frac{(m-n)\pi x}{L}\right)\, dx \nonumber\\ &=& \begin{cases}0&\text{if } n \neq m\\L&\text{if } n=m\end{cases}\quad,\\ \int_{-L}^{L} \sin\left(\frac{m\pi x}{L}\right) \cdot \sin\left(\frac{n\pi x}{L}\right)\, dx &=& \frac{1}{2} \int_{-L}^{L}-\cos\left(\frac{(m+n)\pi x}{L}\right)+\cos\left(\frac{(m-n)\pi x}{L}\right)\, dx \nonumber\\ &=& \begin{cases}0&\text{if } n \neq m\\L&\text{if } n=m\end{cases}\quad,\\ \int_{-L}^{L} \cos\left(\frac{m\pi x}{L}\right) \cdot \sin\left(\frac{n\pi x}{L}\right)\, dx &=& \frac{1}{2} \int_{-L}^{L}\sin\left(\frac{(m+n)\pi x}{L}\right)+\sin\left(\frac{(n-m)\pi x}{L}\right)\, dx \nonumber\\&=& 0.\end{aligned} If we consider these integrals as some kind of inner product between functions (like the standard vector inner product) we see that we could call these functions orthogonal. This is indeed standard practice, where for functions the general definition of inner product takes the form $(f,g) = \int_a^b w(x) f(x) g(x)\, dx.$ If this is zero we say that the functions $$f$$ and $$g$$ are orthogonal on the interval $$[a,b]$$ with weight function $$w$$. If this weight function is $$1$$, as is the case for the trigonometric functions, we just say that the functions are orthogonal on $$[a,b]$$.
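These orthogonality relations are easy to confirm numerically. The sketch below (our own midpoint-rule helper) computes a few of the inner products for $$L=1$$:

```python
import math

def inner(f, g, L=1.0, n=20000):
    """(f, g) = integral of f(x) g(x) over [-L, L], midpoint rule."""
    h = 2 * L / n
    return h * sum(f(-L + (i + 0.5) * h) * g(-L + (i + 0.5) * h)
                   for i in range(n))

L = 1.0
cos_k = lambda k: (lambda x: math.cos(k * math.pi * x / L))
sin_k = lambda k: (lambda x: math.sin(k * math.pi * x / L))

print(inner(cos_k(2), cos_k(3)))            # ≈ 0: different cosines
print(inner(cos_k(2), cos_k(2)))            # ≈ L = 1: the diagonal case
print(inner(cos_k(2), sin_k(5)))            # ≈ 0: cosines ⟂ sines
print(inner(lambda x: 1.0, lambda x: 1.0))  # ≈ 2L = 2
```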

The norm of a function is now defined as the square root of the inner product of a function with itself (again, as in the case of vectors), $||f|| = \sqrt{\int_a^b w(x) f(x)^2 dx}.$ If we define a normalised form of $$f$$ (like a unit vector) as $$f /||f||$$, we have $|| (f /||f||) || = \sqrt{\frac{\int_a^b w(x) f(x)^2 dx}{||f||^2}} = \frac{\sqrt{\int_a^b w(x) f(x)^2 dx}}{||f||} = \frac{||f||}{||f||} = 1.$ What is the normalised form of $$\{1,\cos\left(\frac{n\pi x}{L}\right),\sin\left(\frac{n\pi x}{L}\right)\}$$?

$$\{1/\sqrt{2L}, (1/\sqrt{L})\cos\left(\frac{n\pi x}{L}\right), (1/\sqrt{L})\sin\left(\frac{n\pi x}{L}\right)\}$$.

A set of mutually orthogonal functions that are all normalised is called an orthonormal set.

## When is it a Fourier series?

The series discussed before are only useful if we can associate a function with them. How can we do that?

Let us assume that the periodic function $$f(x)$$ has a Fourier series representation, $f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \left[a_n \cos\left(\frac{n\pi x}{L}\right)+ b_n\sin\left(\frac{n\pi x}{L}\right)\right].$ Multiplying both sides by one of the trigonometric functions, integrating over $$[-L,L]$$, exchanging the summation and integration, and using the orthogonality relations above, we find that \begin{aligned} \frac{1}{L}\int_{-L}^L f(x) \cdot 1\, dx &=& a_0, \\ \frac{1}{L}\int_{-L}^L f(x) \cdot \cos\left(\frac{n\pi x}{L}\right) dx &=& a_n, \\ \frac{1}{L}\int_{-L}^L f(x) \cdot \sin\left(\frac{n\pi x}{L}\right) dx &=& b_n .\end{aligned} This defines the Fourier coefficients for a given $$f(x)$$. If these coefficients all exist we have defined a Fourier series, about whose convergence we shall talk in a later lecture.
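These coefficient formulas can be evaluated numerically. The sketch below (our own routine, using midpoint-rule integration) recovers the coefficients of a test function whose expansion we know in advance:

```python
import math

def fourier_coeffs(f, L, N, n=20000):
    """a_0..a_N and b_1..b_N from the integral formulas above,
    computed with the midpoint rule (a numerical sketch)."""
    h = 2 * L / n
    xs = [-L + (i + 0.5) * h for i in range(n)]
    a = [(h / L) * sum(f(x) * math.cos(k * math.pi * x / L) for x in xs)
         for k in range(N + 1)]
    b = [(h / L) * sum(f(x) * math.sin(k * math.pi * x / L) for x in xs)
         for k in range(1, N + 1)]
    return a, b

# f(x) = sin(pi x) + 0.5 cos(2 pi x) on [-1, 1]: expect b_1 = 1,
# a_2 = 0.5, and every other coefficient zero.
a, b = fourier_coeffs(
    lambda x: math.sin(math.pi * x) + 0.5 * math.cos(2 * math.pi * x),
    L=1.0, N=3)
print("a:", [round(v, 6) for v in a])
print("b:", [round(v, 6) for v in b])
```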

An important property of Fourier series is given in Parseval’s lemma: $\int_{-L}^L f(x)^2 dx = \frac{L a_0^2}{2} + L \sum_{n=1}^\infty (a_n^2+b_n^2).$ This looks like a triviality, until one realises what we have done: we have once again interchanged an infinite summation and an integration. There are many cases where such an interchange fails, and when it does hold it actually makes a strong statement about the orthogonal set. This property is usually referred to as completeness. We shall only discuss complete sets in these lectures.

Now let us study an example. We consider a square wave (this example will return a few times) $f(x) = \begin{cases} -3& \text{if } -5+10n<x<10n \\ 3& \text{if } 10n<x<5+10n \end{cases} \quad, \label{eq:sqw}$ where $$n$$ is an integer, as sketched in Fig. [fig:III:sqw].

This function is not defined at $$x=5n$$. We easily see that $$L=5$$. The Fourier coefficients are \begin{aligned} a_0 &=& \frac{1}{5} \int_{-5}^0 (-3)\, dx + \frac{1}{5} \int^{5}_0 3\, dx = 0, \nonumber\\ a_n &=& \frac{1}{5} \int_{-5}^0 (-3) \cos\left(\frac{n\pi x}{5}\right) dx +\frac{1}{5} \int_0^5 3 \cos\left(\frac{n\pi x}{5}\right) dx = 0,\\ b_n &=& \frac{1}{5} \int_{-5}^0 (-3) \sin\left(\frac{n\pi x}{5}\right) dx +\frac{1}{5} \int_0^5 3 \sin\left(\frac{n\pi x}{5}\right) dx \nonumber\\ &=& \left.\frac{3}{n\pi}\cos\left(\frac{n\pi x}{5}\right)\right|^0_{-5} -\left.\frac{3}{n\pi}\cos\left(\frac{n\pi x}{5}\right)\right|^5_{0} \nonumber\\&=& \frac{6}{n\pi}[1-\cos(n\pi)] = \begin{cases} \frac{12}{n\pi} &\text{if } n \text{ odd}\\ 0 &\text{if } n \text{ even} \end{cases} \nonumber\end{aligned} And thus ($$n=2m+1$$) $f(x) = \frac{12}{\pi}\sum_{m=0}^\infty \frac{1}{2m+1} \sin\left(\frac{(2m+1)\pi x}{5}\right).$ What happens if we apply Parseval’s theorem to this series?

We find $\int_{-5}^5 9\, dx = 5\, \frac{144}{\pi^2} \sum_{m=0}^\infty\left(\frac{1}{2m+1}\right)^2,$ which can be used to show that $\sum_{m=0}^\infty\left(\frac{1}{2m+1}\right)^2 = \frac{\pi^2}{8}.$
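The last sum converges slowly but is easy to check numerically; a one-line sketch in Python:

```python
import math

# Partial sum of sum 1/(2m+1)^2; the tail beyond m = M is of order 1/(4M),
# so 100 000 terms give roughly 5 correct decimals.
s = sum(1 / (2 * m + 1) ** 2 for m in range(100_000))
print(s, math.pi ** 2 / 8)
```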

## Fourier series for even and odd functions

Notice that in the Fourier series of the square wave ([eq:sqw]) all cosine coefficients $$a_n$$ vanish: the series contains only sines. This is a very general phenomenon for so-called even and odd functions.

A function is called even if $$f(-x)=f(x)$$, e.g. $$\cos(x)$$.
A function is called odd if $$f(-x)=-f(x)$$, e.g. $$\sin(x)$$.

These have somewhat different properties than the even and odd numbers:

1. The sum of two even functions is even, and of two odd ones odd.

2. The product of two even or two odd functions is even.

3. The product of an even and an odd function is odd.

Now if we look at a Fourier series, the Fourier cosine series $f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos\frac{n\pi}{L}x$ describes an even function (why?), and the Fourier sine series $f(x) = \sum_{n=1}^\infty b_n \sin\frac{n\pi}{L}x$ an odd function. These series are interesting by themselves, but play an especially important rôle for functions defined on half the Fourier interval, i.e., on $$[0,L]$$ instead of $$[-L,L]$$. There are three possible ways to define a Fourier series in this way; see Fig. [fig:IV:even-odd].

1. Continue $$f$$ as an even function, so that $$f'(0)=0$$.

2. Continue $$f$$ as an odd function, so that $$f(0)=0$$.

3. Neither of the two above. We know nothing about $$f$$ at $$x=0$$.

Of course these all lead to different Fourier series that represent the same function on $$[0,L]$$. The usefulness of even and odd Fourier series is related to the imposition of boundary conditions. A Fourier cosine series has $$df/dx = 0$$ at $$x=0$$, and the Fourier sine series has $$f(x=0)=0$$. Let me check the first of these statements: $\frac{d}{dx} \left[\frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos\frac{n\pi}{L}x \right] = -\frac{\pi}{L}\sum_{n=1}^\infty n a_n \sin\frac{n\pi}{L}x =0\quad\text{at } x=0.$

As an example look at the function $$f(x) = 1-x$$, $$0 \leq x \leq 1$$, with an even continuation on the interval $$[-1,1]$$. We find \begin{aligned} a_0 & = & \frac{2}{1} \int_0^1 (1-x)\, dx = 1, \nonumber\\ a_n &=& 2 \int_0^1 (1-x) \cos n\pi x\, dx\nonumber\\ &=& \left.\left\{ \frac{2}{n\pi} \sin n\pi x - \frac{2}{n^2\pi^2} [\cos n\pi x + n \pi x \sin n\pi x] \right\} \right|_0^1 \nonumber\\&=& \begin{cases} 0 & \text{if } n \text{ even}\\ \frac{4}{n^2\pi^2}&\text{if } n \text{ odd} \end{cases}\quad.\end{aligned} So, changing variables by defining $$n=2m+1$$, so that as $$m$$ runs over $$0,1,2,\ldots$$ the index $$n$$ runs over all odd numbers, $f(x) = \frac{1}{2} + \frac{4}{\pi^2}\sum_{m=0}^{\infty} \frac{1}{(2m+1)^2} \cos(2m+1)\pi x.$
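We can check this result numerically: a sketch (our own code) that evaluates the partial sums of this cosine series against $$1-x$$ at a few points of $$[0,1]$$:

```python
import math

def cosine_series(x, M):
    """Partial sum of 1/2 + (4/pi^2) sum_{m=0}^{M} cos((2m+1) pi x)/(2m+1)^2,
    the Fourier cosine series of f(x) = 1 - x on [0, 1]."""
    return 0.5 + (4 / math.pi ** 2) * sum(
        math.cos((2 * m + 1) * math.pi * x) / (2 * m + 1) ** 2
        for m in range(M + 1))

# The 1/n^2 decay of the coefficients gives fast, uniform convergence.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, cosine_series(x, 2000), 1 - x)
```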

## Convergence of Fourier series

The final subject we shall consider is the convergence of Fourier series. I shall show two examples, closely linked, but with radically different behaviour.

1. A square wave,

 $$f(x)= 1$$ for $$-\pi < x < 0$$; $$f(x)= -1$$ for $$0 < x < \pi$$.
2. a triangular wave,

 $$g(x)= \pi/2+x$$ for $$-\pi < x < 0$$; $$g(x)= \pi/2-x$$ for $$0 < x < \pi$$.

Note that $$f$$ is the derivative of $$g$$.

It is not very hard to find the relevant Fourier series, \begin{aligned} f(x) & = & -\frac{4}{\pi} \sum_{m=0}^\infty \frac{1}{2m+1} \sin (2m+1) x,\\ g(x) & = & \frac{4}{\pi} \sum_{m=0}^\infty \frac{1}{(2m+1)^2} \cos (2m+1) x.\end{aligned} Let us compare the partial sums, where we let the sum in the Fourier series run from $$m=0$$ to $$m=M$$ instead of $$m=0\ldots\infty$$. We note a marked difference between the two cases. The convergence of the Fourier series of $$g$$ is uneventful: after a few steps it is hard to see a difference between the partial sums, or between the partial sums and $$g$$. For $$f$$, the square wave, we see a surprising result: even though the approximation gets better and better in the (flat) middle, there is a finite (and constant!) overshoot near the jump. The area of this overshoot becomes smaller and smaller as we increase $$M$$. This is called the Gibbs phenomenon (after its discoverer). It can be shown that for any function with a discontinuity such an effect is present, and that the size of the overshoot only depends on the size of the discontinuity! A final, slightly more interesting version of this picture is shown in Fig. [fig:IV:gibss3d].
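The Gibbs overshoot can be seen directly in a small numerical sketch (our own code, using only the series for $$f$$ above): the peak of the partial sums just beside the jump hovers around $$1.179$$, roughly 9% above the function value $$1$$, however large we make $$M$$.

```python
import math

def square_partial(x, M):
    """Partial sum -(4/pi) sum_{m=0}^{M} sin((2m+1)x)/(2m+1)
    of the square-wave series for f."""
    return -(4 / math.pi) * sum(
        math.sin((2 * m + 1) * x) / (2 * m + 1) for m in range(M + 1))

# Scan a fine grid just right of the jump at x = 0 (where f = -1):
# the peak of |S_M| does not decay towards 1 as M grows.
for M in (10, 50, 200):
    peak = max(abs(square_partial(1e-4 * i, M)) for i in range(1, 5000))
    print(M, round(peak, 3))
```

The peak moves closer to the jump as $$M$$ increases (so the overshoot *area* shrinks), but its height does not go away.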