One way to solve \(\eqref{eq:1}\) is to decompose \(f(t)\) as a sum of cosines (and sines) and then solve many problems of the form \(\eqref{eq:2}\). We then use the principle of superposition to sum up all the solutions we obtained to get a solution to \(\eqref{eq:1}\).
Before we proceed, let us talk in a little more detail about periodic functions. A function is said to be periodic with period \(P\) if \( f(t) = f(t+P)\) for all \(t\). For brevity we will say \( f(t)\) is \(P-\)periodic. Note that a \(P-\)periodic function is also \(2P-\)periodic, \(3P-\)periodic and so on. For example, \( \cos(t)\) and \( \sin(t)\) are \( 2 \pi -\)periodic. So are \( \cos(kt)\) and \( \sin(kt)\) for all integers \( k\). The constant functions are an extreme example. They are periodic for any period (exercise).
Normally we will start with a function \(f(t)\) defined on some interval \( [-L, L]\) and we will want to extend \(f(t)\) periodically to make it a \( 2L-\)periodic function. We do this extension by defining a new function \(F(t)\) such that for \(t\) in \( [-L, L]\), \( F(t)=f(t)\). For \(t\) in \( [L, 3L]\), we define \( F(t)=f(t-2L)\), for \(t\) in \( [-3L, -L]\), \( F(t)=f(t+2L)\), and so on. We assumed that \( f(-L)=f(L)\). We could have also started with \(f\) defined only on the half-open interval \( (-L, L]\) and then define \( f(-L)=f(L)\).
Example \(\PageIndex{1}\)
Define \( f(t)=1-t^2\) on \([-1, 1]\). Now extend \(f(t)\) periodically to a \(2\)-periodic function. See Figure \(\PageIndex{1}\).
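For readers who want to experiment, here is a minimal numerical sketch of the periodic extension in Python (the helper name `periodic_extension` is ours, not part of the text): it shifts the argument by the right multiple of \(2L\) so that it lands back in the base interval.

```python
def periodic_extension(f, L):
    """Return the 2L-periodic extension F of a function f given on [-L, L)."""
    def F(t):
        # Shift t by an integer multiple of 2L so it lands in [-L, L).
        return f((t + L) % (2 * L) - L)
    return F

f = lambda t: 1 - t**2          # the function from the example, on [-1, 1]
F = periodic_extension(f, L=1)  # its 2-periodic extension

print(F(0.5), F(2.5), F(-1.5))  # all three print f(0.5) = 0.75
```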
You should be careful to distinguish between \( f(t)\) and its extension. A common mistake is to assume that a formula for \( f(t)\) holds for its extension. It can be confusing when the formula for \( f(t)\) is periodic, but with perhaps a different period.
Exercise \(\PageIndex{1}\)
Define \( f(t)= \cos t\) on \([\dfrac{ - \pi}{2}, \dfrac{ \pi}{2} ]\). Take the \(\pi -\)periodic extension and sketch its graph. How does it compare to the graph of \( \cos t\)?
Inner Product and Eigenvector Decomposition
Suppose we have a symmetric matrix, that is \( A^T=A\). We have said before that the eigenvectors of \( A\) are then orthogonal. Here the word orthogonal means that if \( \vec{v}\) and \( \vec{w}\) are two distinct (and not multiples of each other) eigenvectors of \( A\), then \( \left \langle \vec{v}, \vec{w} \right \rangle=0\). In this case the inner product \(\left \langle \vec{v}, \vec{w} \right \rangle\) is the dot product, which can be computed as \( \vec{v}^T \vec{w}\).
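This orthogonality is easy to check numerically. Here is a small sketch using NumPy (our illustration only, with a matrix of our choosing): `numpy.linalg.eigh` diagonalizes a symmetric matrix, and the dot product of eigenvectors belonging to different eigenvalues comes out zero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # a symmetric matrix: A^T = A

eigenvalues, eigenvectors = np.linalg.eigh(A)  # eigh assumes a symmetric matrix
v = eigenvectors[:, 0]           # eigenvector for eigenvalue 1
w = eigenvectors[:, 1]           # eigenvector for eigenvalue 3

print(np.dot(v, w))              # <v, w> = 0 (up to rounding error)
```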
To decompose a vector \( \vec{v}\) in terms of mutually orthogonal vectors \( \vec{w}_1\) and \( \vec{w}_2\), we write
\[ \vec{v} = a_1 \vec{w}_1 + a_2 \vec{w}_2, \qquad \text{where} \qquad a_n = \frac{\left \langle \vec{v}, \vec{w}_n \right \rangle}{\left \langle \vec{w}_n, \vec{w}_n \right \rangle}. \nonumber \]
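The projection formula can also be tested directly; a short sketch (the vectors are our own example):

```python
import numpy as np

w1 = np.array([1.0, 1.0])
w2 = np.array([1.0, -1.0])       # mutually orthogonal: <w1, w2> = 0
v = np.array([3.0, 5.0])

# Coefficients a_n = <v, w_n> / <w_n, w_n> from the projection formula.
a1 = np.dot(v, w1) / np.dot(w1, w1)
a2 = np.dot(v, w2) / np.dot(w2, w2)

print(a1 * w1 + a2 * w2)         # recovers v = [3. 5.]
```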
Instead of decomposing a vector in terms of eigenvectors of a matrix, we will decompose a function in terms of eigenfunctions of a certain eigenvalue problem. The eigenvalue problem we will use for the Fourier series is
\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi). \nonumber \]
We have previously computed that the eigenfunctions are \(1, \cos(kt), \sin(kt)\). That is, we will want to find a representation of a \( 2 \pi -\)periodic function \( f(t)\) as
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nt) + b_n \sin(nt) \right). \nonumber \]
This series is called the Fourier series\(^{1}\) or the trigonometric series for \(f(t)\). We write the coefficient of the eigenfunction \(1\) as \( \dfrac{a_0}{2}\) for convenience. We could also think of \( 1= \cos(0t)\), so that we only need to look at \( \cos(kt)\) and \( \sin(kt)\).
As for matrices, we want to find a projection of \(f(t)\) onto the subspace generated by the eigenfunctions. So we will want to define an inner product of functions. For example, to find \( a_n\) we want to compute \( \left \langle f(t), \cos(nt) \right \rangle \). We define the inner product as
\[ \left \langle f(t), g(t) \right \rangle = \int_{-\pi}^{\pi} f(t)\, g(t)\, dt. \nonumber \]
With this definition of the inner product, we have seen in the previous section that the eigenfunctions \( \cos(kt)\) (including the constant eigenfunction) and \( \sin(kt)\) are orthogonal in the sense that
\[\begin{align}\begin{aligned} \langle \, \cos (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\ \langle \, \sin (mt)\, , \, \sin (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\ \langle \, \sin (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for all } m \text{ and } n .\end{aligned}\end{align} \nonumber \]
By elementary calculus, for \( n=1,2,3, \ldots\) we have \( \left \langle \cos(nt), \cos(nt) \right \rangle = \pi\) and \( \left \langle \sin(nt), \sin(nt) \right \rangle = \pi\). For the constant we get
\[ \left \langle 1, 1 \right \rangle = \int_{-\pi}^{\pi} 1 \cdot 1 \, dt = 2\pi. \nonumber \]
The coefficients are therefore given by
\[\begin{align}\begin{aligned} a_n &= \frac{\left \langle f(t), \cos(nt) \right \rangle}{\left \langle \cos(nt), \cos(nt) \right \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt) \, dt , \\ b_n &= \frac{\left \langle f(t), \sin(nt) \right \rangle}{\left \langle \sin(nt), \sin(nt) \right \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt) \, dt .\end{aligned}\end{align} \nonumber \]
Compare these expressions with the formula for the vector coefficients above. The formula for \(a_n\) also works for \(n = 0\): the coefficient of the eigenfunction \(1\) is \( \frac{\left \langle f(t), 1 \right \rangle}{\left \langle 1, 1 \right \rangle} = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) \, dt = \frac{a_0}{2}\), which explains the convention of writing the constant term as \( \frac{a_0}{2}\).
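These inner products can be verified numerically as well. The following sketch (our illustration; the helper `inner` is ours and SciPy is assumed available) approximates the integrals with `scipy.integrate.quad`:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """Inner product <f, g>: the integral of f(t) g(t) over [-pi, pi]."""
    value, _ = quad(lambda t: f(t) * g(t), -np.pi, np.pi)
    return value

print(inner(lambda t: np.cos(2 * t), np.cos))  # 0:    <cos(2t), cos(t)>
print(inner(np.sin, np.cos))                   # 0:    <sin(t), cos(t)>
print(inner(np.cos, np.cos))                   # pi:   <cos(nt), cos(nt)>
print(inner(lambda t: 1.0, lambda t: 1.0))     # 2 pi: <1, 1>
```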
Consider, for example, the function \(f(t) = t\) on \([-\pi, \pi]\), extended \(2\pi-\)periodically. We will often use the result from calculus that says that the integral of an odd function over a symmetric interval is zero. Recall that an odd function is a function \( \varphi(t)\) such that \( \varphi(-t) = - \varphi(t)\). For example the functions \( t\), \( \sin t\), or (importantly for us) \( t \cos(nt)\) are all odd functions. Thus
\[ a_n=\frac{1}{\pi} \int^\pi_{-\pi} t \cos(nt)dt=0. \nonumber \]
Let us move to \( b_n\). Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval. Recall that an even function is a function \(\varphi(t)\) such that \( \varphi(-t) = \varphi(t)\). For example \( t \sin(nt)\) is even. Thus, integrating by parts,
\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} t \sin(nt) \, dt = \frac{2}{\pi} \int_{0}^{\pi} t \sin(nt) \, dt = \frac{2}{\pi} \left( \left[ \frac{-t \cos(nt)}{n} \right]_{t=0}^{\pi} + \frac{1}{n} \int_{0}^{\pi} \cos(nt) \, dt \right) = \frac{-2 \cos(n\pi)}{n} = \frac{2 (-1)^{n+1}}{n}. \nonumber \]
The series for \(f(t) = t\) is therefore
\[ \sum_{n=1}^{\infty} \frac{2 (-1)^{n+1}}{n} \sin(nt). \nonumber \]
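A quick numerical cross-check of these coefficients (again our own sketch, assuming SciPy; the helper `b_n` is ours):

```python
import numpy as np
from scipy.integrate import quad

def b_n(n):
    """Fourier sine coefficient of f(t) = t on [-pi, pi]."""
    value, _ = quad(lambda t: t * np.sin(n * t), -np.pi, np.pi)
    return value / np.pi

for n in range(1, 5):
    # Compare the integral with the closed form 2 (-1)^(n+1) / n.
    print(n, b_n(n), 2 * (-1) ** (n + 1) / n)
```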
Take the function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi. \end{cases} \nonumber \]
Extend \(f(t)\) periodically and write it as a Fourier series. This function or its variants appear often in applications, and the function is called the square wave.
The plot of the extended periodic function is given in Figure \(\PageIndex{4}\). Now we compute the coefficients. Let us start with \(a_0\):
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \, dt = \pi. \nonumber \]
Next, for \(n \geq 1\),
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \cos(nt) \, dt = \left[ \frac{\sin(nt)}{n} \right]_{t=0}^{\pi} = 0, \nonumber \]
and
\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \sin(nt) \, dt = \left[ \frac{-\cos(nt)}{n} \right]_{t=0}^{\pi} = \frac{1 - \cos(n\pi)}{n} = \begin{cases} \frac{2}{n} & \text{if } n \text{ is odd}, \\ 0 & \text{if } n \text{ is even}. \end{cases} \nonumber \]
The Fourier series of the square wave is
\[ f(t) = \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n} \sin(nt). \nonumber \]
The equation above is only an equality for such \(t\) where \(f(t)\) is continuous. That is, we do not get an equality for \(t= - \pi, 0, \pi\) and all the other discontinuities of \(f(t)\). It is not hard to see that when \(t\) is an integer multiple of \(\pi\) (which includes all the discontinuities), then every sine term vanishes and the series sums to
\[ \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n} \sin(nt) = \frac{\pi}{2}, \nonumber \]
which is the average of the two one-sided limits of \(f(t)\) at each jump. Suppose we redefine \(f(t)\) on \([-\pi, \pi]\) as
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t < 0, \\ \pi & \text{if } 0 < t < \pi, \\ \frac{\pi}{2} & \text{if } t = -\pi,\ t = 0, \text{ or } t = \pi, \end{cases} \nonumber \]
and extend periodically. The series equals this extended \(f(t)\) everywhere, including the discontinuities. We will generally not worry about changing the function values at several (finitely many) points.
We will say more about convergence in the next section. Let us, however, briefly mention an effect of the discontinuity. Let us zoom in near the discontinuity in the square wave and plot the first 100 harmonics; see Figure \(\PageIndex{6}\). While the series is a very good approximation away from the discontinuities, the error (the overshoot) near the discontinuity at \( t= \pi\) does not seem to be getting any smaller as we take more terms. This behavior is known as the Gibbs phenomenon. However, the region where the error is large does get smaller the more terms in the series we take.
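To observe the Gibbs phenomenon yourself, the following sketch (our illustration; the helper `partial_sum` is ours) evaluates partial sums of the square wave series just to the left of the jump at \(t = \pi\); the maximal overshoot above \(\pi\) stays roughly constant no matter how many harmonics are taken:

```python
import numpy as np

def partial_sum(t, N):
    """Partial sum pi/2 + sum of (2/n) sin(nt) over odd n up to N."""
    s = np.pi / 2
    for n in range(1, N + 1, 2):           # odd harmonics only
        s = s + (2 / n) * np.sin(n * t)
    return s

t = np.linspace(np.pi - 0.5, np.pi, 2000)  # just left of the jump at t = pi
for N in (25, 100, 400):
    overshoot = partial_sum(t, N).max() - np.pi
    print(N, overshoot)                    # roughly 0.28 in every case
```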
We can think of a periodic function as a “signal”: a superposition of many signals of pure frequency. For example, we could think of the square wave as a tone of a certain base frequency. This base frequency is called the fundamental frequency. The square wave is a superposition of many different pure tones of frequencies that are multiples of the fundamental frequency. In music, the higher frequencies are called the overtones. All the frequencies that appear are called the spectrum of the signal. On the other hand, a simple sine wave is only the pure tone (no overtones). The simplest way to make sound using a computer is with a square wave, and the sound is very different from a pure tone. If you have ever played video games from the 1980s or so, then you have heard what square waves sound like.