
5.1: Introduction to Fourier Series

    In this chapter we will look at trigonometric series. Previously, we saw that such series expansions occurred naturally in the solution of the heat equation and other boundary value problems. In the last chapter we saw that such functions could be viewed as a basis in an infinite dimensional vector space of functions. Given a function in that space, when will it have a representation as a trigonometric series? For what values of \(x\) will it converge? Finding such series is at the heart of Fourier, or spectral, analysis.

    There are many applications using spectral analysis. At the root of these studies is the belief that many continuous waveforms are composed of a number of harmonics. Such ideas stretch back to the Pythagorean study of the vibrations of strings, which led to their view of a world of harmony. This idea was carried further by Johannes Kepler in his harmony of the spheres approach to planetary orbits. In the 1700s others worked on the superposition theory for vibrating waves on a stretched string, starting with the wave equation and leading to the superposition of right and left traveling waves. This work was carried out by people such as John Wallis, Brook Taylor, and Jean le Rond d'Alembert.

    In 1742 d'Alembert solved the wave equation

    \(c^{2} \dfrac{\partial^{2} y}{\partial x^{2}}-\dfrac{\partial^{2} y}{\partial t^{2}}=0\),

    where \(y\) is the string height and \(c\) is the wave speed. However, his solution led him and others, like Leonhard Euler and Daniel Bernoulli, to investigate what "functions" could be the solutions of this equation. In fact, this led to a more rigorous approach to the study of analysis by first coming to grips with the concept of a function. For example, in 1749 Euler sought the solution for a plucked string, in which case the initial condition \(y(x, 0)=h(x)\) has a discontinuous derivative!

    In 1753 Daniel Bernoulli viewed the solutions as a superposition of simple vibrations, or harmonics. Such superpositions amounted to looking at solutions of the form

    \(y(x, t)=\sum_{k} a_{k} \sin \dfrac{k \pi x}{L} \cos \dfrac{k \pi c t}{L}\),

    where the string extends over the interval \([0, L]\) with fixed ends at \(x=0\) and \(x=L\). However, the initial conditions for such superpositions are

    \[y(x, 0)=\sum_{k} a_{k} \sin \dfrac{k \pi x}{L}. \nonumber \]

    It was determined that many functions could not be represented by a finite number of harmonics, even for the simply plucked string given by an initial condition of the form

    \(y(x, 0)=\left\{\begin{array}{cl}
    c x, & 0 \leq x \leq L / 2 \\
    c(L-x), & L / 2 \leq x \leq L
    \end{array}\right.\)

    Thus, the solution consists generally of an infinite series of trigonometric functions.
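    This claim is easy to probe numerically. Below is a minimal sketch (not from the text; it assumes NumPy, the illustrative values \(c=1\) and \(L=1\), and the orthogonality-based sine-series coefficient formula \(a_{k}=\frac{2}{L} \int_{0}^{L} y(x, 0) \sin \frac{k \pi x}{L} d x\), which is developed later in the chapter) showing that partial sine sums approach the plucked-string profile but that no finite sum reproduces the corner at \(x=L / 2\) exactly.

```python
import numpy as np

# Approximate the plucked-string profile by partial sine sums and watch the
# maximum error.  The values c = 1 and L = 1 are illustrative choices; the
# coefficient integral is evaluated by a simple Riemann sum on the grid.
c, L, N = 1.0, 1.0, 2000
x, dx = np.linspace(0.0, L, N, endpoint=False, retstep=True)
h = np.where(x <= L / 2, c * x, c * (L - x))        # plucked-string shape

def partial_sum(n_terms):
    """Sum the first n_terms harmonics of the sine expansion of h."""
    y = np.zeros_like(x)
    for k in range(1, n_terms + 1):
        a_k = (2.0 / L) * np.sum(h * np.sin(k * np.pi * x / L)) * dx
        y += a_k * np.sin(k * np.pi * x / L)
    return y

for n in (1, 5, 50):
    print(n, "harmonics: max error =", np.abs(partial_sum(n) - h).max())
# The error shrinks as more harmonics are added, but no finite sum of
# smooth sines reproduces the corner at x = L/2 exactly.
```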

    Such series expansions were also of importance in Joseph Fourier's solution of the heat equation. The use of such Fourier expansions became an important tool in the solution of linear partial differential equations, such as the wave equation and the heat equation. As seen in the last chapter, using the Method of Separation of Variables allows higher dimensional problems to be reduced to several one dimensional boundary value problems. However, these studies led to very important questions, which in turn opened the doors to whole fields of analysis. Some of the problems raised were

    1. What functions can be represented as the sum of trigonometric functions?
    2. How can a function with discontinuous derivatives be represented by a sum of smooth functions, such as the above sums?
    3. Do such infinite sums of trigonometric functions actually converge to the functions they represent?

    Sums over sinusoidal functions naturally occur in music and in studying sound waves. A pure note can be represented as

    \(y(t)=A \sin (2 \pi f t)\),

    where \(A\) is the amplitude, \(f\) is the frequency in hertz \((\mathrm{Hz})\), and \(t\) is time in seconds. The amplitude is related to the volume, or intensity, of the sound. The larger the amplitude, the louder the sound. In Figure 5.1 we show plots of two such tones with \(f=2 \mathrm{~Hz}\) in the top plot and \(f=5 \mathrm{~Hz}\) in the bottom one.

    Next, we consider what happens when we add several pure tones. After all, most of the sounds that we hear are in fact a combination of pure tones with

    Figure 5.1. Plots of \(y(t)=\sin (2 \pi f t)\) on \([0,5]\) for \(f=2 \mathrm{~Hz}\) and \(f=5 \mathrm{~Hz}\)

    different amplitudes and frequencies. In Figure 5.2 we see what happens when we add several sinusoids. Note that as one adds more and more tones with different characteristics, the resulting signal gets more complicated. However, we still have a function of time. In this chapter we will ask, "Given a function \(f(t)\), can we find a set of sinusoidal functions whose sum converges to \(f(t)\)?"
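    The superpositions in Figure 5.2 are easy to reproduce numerically. The sketch below (assuming NumPy; the tones are the ones used later in this section, \(2 \sin (4 \pi t)\), \(\sin (10 \pi t)\), and \(\sin (16 \pi t)\)) simply samples the pure tones and adds them.

```python
import numpy as np

# Sample the pure tones and add them.  The superposition is still just a
# single function of time, albeit a more complicated one.
t = np.linspace(0.0, 5.0, 5001)          # five seconds of signal, as in Figure 5.1

tone_2Hz = 2 * np.sin(4 * np.pi * t)     # f = 2 Hz, amplitude 2
tone_5Hz = np.sin(10 * np.pi * t)        # f = 5 Hz
tone_8Hz = np.sin(16 * np.pi * t)        # f = 8 Hz

two_tone = tone_2Hz + tone_5Hz                       # top plot of Figure 5.2
three_tone = tone_2Hz + tone_5Hz + tone_8Hz          # bottom plot of Figure 5.2
print(two_tone[:3])
# Plotting two_tone and three_tone against t reproduces the shapes in Figure 5.2.
```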

    Looking at the superpositions in Figure 5.2, we see that the sums yield functions that appear to be periodic. This is not unexpected. We recall that a periodic function is one in which the function values repeat over the domain of the function. The length of the smallest part of the domain which repeats is called the period. We can define this more precisely.

    Definition 5.1. 

    A function \(f(t)\) is said to be periodic with period \(T\) if \(f(t+T)=f(t)\) for all \(t\). The smallest such positive number \(T\) is called the period.


    For example, we consider the functions used in Figure 5.2. We began with \(y(t)=2 \sin (4 \pi t)\). Recall from your first studies of trigonometric functions that one can determine the period by dividing the coefficient of \(t\) into \(2 \pi\). In this case we have

    \[T=\dfrac{2 \pi}{4 \pi}=\dfrac{1}{2} \nonumber \]

    Looking at the top plot in Figure 5.1 we can verify this result. (You can count the full number of cycles in the graph and divide this into the total time to get a more accurate value of the period.)
    In general, if \(y(t)=A \sin (2 \pi f t)\), the period is found as

    \[T=\dfrac{2 \pi}{2 \pi f}=\dfrac{1}{f} \nonumber \]

    Figure 5.2. Superposition of several sinusoids. Top: Sum of signals with \(f=2 \mathrm{~Hz}\) and \(f=5 \mathrm{~Hz}\). Bottom: Sum of signals with \(f=2 \mathrm{~Hz}, f=5 \mathrm{~Hz}\), and \(f=8 \mathrm{~Hz}\).

    Of course, this result makes sense, as the unit of frequency, the hertz, is also defined as \(s^{-1}\), or cycles per second.
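    As a quick sanity check of \(T=1 / f\), the short sketch below (assuming NumPy; the frequency \(f=2 \mathrm{~Hz}\) is the one from the top plot of Figure 5.1) estimates the period of a sampled sinusoid from its upward zero crossings.

```python
import numpy as np

# Estimate the period of y(t) = sin(2 pi f t) from its upward zero
# crossings and compare with T = 1/f.
f = 2.0                                        # Hz
t = np.linspace(0.0, 5.0, 50001)
y = np.sin(2 * np.pi * f * t)

up = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]  # indices of upward crossings
print("measured period:", np.diff(t[up]).mean())   # approximately 0.5 s
print("1/f            :", 1.0 / f)                 # 0.5 s
```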

    Returning to the superpositions in Figure 5.2, we have that \(y(t)= \sin (10 \pi t)\) has a period of \(0.2 \mathrm{~s}\) and \(y(t)=\sin (16 \pi t)\) has a period of \(0.125 \mathrm{~s}\). The two superpositions retain the largest period of the signals added, which is \(0.5 \mathrm{~s}\).

    Our goal will be to start with a function and then determine the amplitudes of the simple sinusoids needed to sum to that function. First of all, we will see that this might involve an infinite number of such terms. Thus, we will be studying an infinite series of sinusoidal functions.

    Secondly, we will find that using just sine functions will not be enough either. This is because we can add sinusoidal functions that do not necessarily peak at the same time. We will consider two signals that originate at different times. This is similar to when your music teacher would make sections of the class sing a song like "Row, Row, Row your Boat" starting at slightly different times.

    We can easily add shifted sine functions. In Figure 5.3 we show the functions \(y(t)=2 \sin (4 \pi t)\) and \(y(t)=2 \sin (4 \pi t+7 \pi / 8)\) and their sum. Note that this shifted sine function can be written as \(y(t)=2 \sin (4 \pi(t+7 / 32))\). Thus, this corresponds to a time shift of \(-7 / 32\).
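    A one-line numerical check (assuming NumPy) confirms that the phase-shifted and time-shifted forms are the same function: \(2 \sin (4 \pi t+7 \pi / 8)=2 \sin (4 \pi(t+7 / 32))\).

```python
import numpy as np

# Verify that the phase-shifted and time-shifted forms agree.
t = np.linspace(0.0, 2.0, 1001)
phase_form = 2 * np.sin(4 * np.pi * t + 7 * np.pi / 8)
time_form = 2 * np.sin(4 * np.pi * (t + 7 / 32))
print(np.allclose(phase_form, time_form))   # True
```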

    So, we should account for shifted sine functions in our general sum. Of course, we would then need to determine the unknown time shift as well as the amplitudes of the sinusoidal functions that make up our signal, \(f(t)\). While this is one approach that some researchers use to analyze signals, there is a more common approach. This results from another reworking of the shifted function. Consider the general shifted function

    \[y(t)=A \sin (2 \pi f t+\phi). \nonumber \]

    Note that \(2 \pi f t+\phi\) is called the phase of our sine function and \(\phi\) is called the phase shift. We can use our trigonometric identity for the sine of the sum of two angles to obtain

    \[y(t)=A \sin (2 \pi f t+\phi)=A \sin (\phi) \cos (2 \pi f t)+A \cos (\phi) \sin (2 \pi f t). \nonumber \]

    Defining \(a=A \sin (\phi)\) and \(b=A \cos (\phi)\), we can rewrite this as

    \[y(t)=a \cos (2 \pi f t)+b \sin (2 \pi f t) \nonumber \]

    Thus, we see that our signal is a sum of sine and cosine functions with the same frequency and different amplitudes. If we can find \(a\) and \(b\), then we can easily determine \(A\) and \(\phi\) :

    \[A=\sqrt{a^{2}+b^{2}}, \quad \tan \phi=\dfrac{a}{b}. \nonumber \]
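    The conversion in both directions is easy to verify numerically. The sketch below (assuming NumPy; the values of \(A\), \(\phi\), and \(f\) are arbitrary examples) forms \(a=A \sin (\phi)\) and \(b=A \cos (\phi)\), checks that the sine-cosine combination reproduces the shifted sine, and then recovers \(A\) and \(\phi\) from \(a\) and \(b\).

```python
import numpy as np

# Check the amplitude-phase conversion in both directions.
A, phi, f = 1.7, 0.6, 3.0                  # arbitrary example values
a, b = A * np.sin(phi), A * np.cos(phi)

t = np.linspace(0.0, 1.0, 500)
shifted = A * np.sin(2 * np.pi * f * t + phi)
combo = a * np.cos(2 * np.pi * f * t) + b * np.sin(2 * np.pi * f * t)
print(np.allclose(shifted, combo))         # True

A_rec = np.sqrt(a**2 + b**2)               # A = sqrt(a^2 + b^2)
phi_rec = np.arctan2(a, b)                 # tan(phi) = a / b
print(np.isclose(A_rec, A), np.isclose(phi_rec, phi))   # True True
```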

    Figure 5.3. Plot of the functions \(y(t)=2 \sin (4 \pi t)\) and \(y(t)=2 \sin (4 \pi t+7 \pi / 8)\) and their sum.

    We are now in a position to state our goal in this chapter.

    Goal

    Given a signal \(f(t)\), we would like to determine its frequency content by finding out what combinations of sines and cosines of varying frequencies and amplitudes will sum to the given function. This is called Fourier Analysis.


    This page titled 5.1: Introduction to Fourier Series is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
