You surely know the Taylor expansion, with which we can approximate a function \(f(x)\) at a point \(x = x_0\) by a simpler Taylor series. The more terms we take in the Taylor series, the better the approximation \( f_{\text{taylor}} \) becomes in the neighborhood of the chosen point \(x_0\).

As you can see in Illustration 1, the Taylor series represented by \( f_{\text{taylor}} \) is a good approximation of the function \(f\) in the immediate neighborhood of \(x_0\). However, if we move further away from the point, we see that the Taylor series is not a good approximation there. Taylor expansion is thus a method with which we can approximate a function only locally.
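To see this locality concretely, here is a small Python sketch (the helper `taylor_sin` and the sample points are my own illustrative choices, not from the text): the first three nonzero terms of the Taylor series of \(\sin\) around \(x_0 = 0\) approximate the function very well near the expansion point, but fail badly far away from it.

```python
import math

def taylor_sin(x, n_terms):
    """Partial Taylor series of sin around x0 = 0: sum of (-1)^k x^(2k+1)/(2k+1)!."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

# Near the expansion point x0 = 0 the approximation is excellent ...
print(abs(taylor_sin(0.1, 3) - math.sin(0.1)))    # error of order 1e-11
# ... but far away the same polynomial is useless.
print(abs(taylor_sin(10.0, 3) - math.sin(10.0)))  # error of order 1e2
```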

But if we care about approximating a function on an entire interval, then we need a Fourier series of the function. As we will see, the Fourier series is a linear combination of simple periodic basis functions such as cosine and sine or complex exponentials, which in sum can approximate the function \(f\) within an interval. In the following we assume an interval of length \(L\).

The concept of Fourier series

We can represent a vector \( \boldsymbol{v} \) living in an \(n\)-dimensional vector space as a linear combination of basis vectors \( \boldsymbol{e}_k \) spanning the vector space:

You should know this from linear algebra! By the way, the symbol \(\boxed{+}\) is used on this website as a modern, much better alternative for the sum symbol \(\Sigma\).

Using a basis \(\{ \boldsymbol{e}_k \}\) we can represent every possible vector \( \boldsymbol{v} \) in this vector space. Here \(v_k\) are the components of the vector in the chosen basis. The components are not unique: in a different basis the same vector has different components \(v_k\).

We can also apply this concept of linear combination to infinite-dimensional vectors. A function \(f(x)\) can be interpreted as an infinite-dimensional vector, which we can represent as a linear combination. The components \(v_k\) of a finite-dimensional vector become Fourier coefficients \(\widetilde{f}_k\) when we represent a function, rather than a finite-dimensional vector, as a linear combination:

If we represent a function \(f\) as such a linear combination, we call this linear combination the Fourier series of \(f\).

Given a linear combination for a function, the basis vectors \(\boldsymbol{e}_k\) are called basis functions. In optics, the basis functions are also often called Fourier modes.

We think of the function values \(f(x_0)\), \(f(x_1)\), \(f(x_2)\) and so on, up to \(f(x_0 + L)\), as components of \(f\). Here we are cheating a bit, because the argument \(x\) is a real number and there are in principle infinitely many values between, for example, \(x_0\) and \(x_1\). But this way you can at least imagine a function as a vector with infinitely many components:

You can determine the Fourier coefficients in the same way as you determine the vector components in linear algebra. How does that work again in linear algebra? To get the \(k\)th component of a finite-dimensional vector \(\boldsymbol{v}\), we need to form the scalar product between the \(k\)th basis vector \(\boldsymbol{e}_k\) and the vector \(\boldsymbol{v}\):

We have written the scalar product a little more compactly in the last step with a sum sign. Here we sum over the index \(j\).
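As a quick reminder of this finite-dimensional picture, here is a minimal NumPy sketch (the concrete vector and the standard basis are illustrative choices): each component \(v_k\) falls out as the scalar product of the corresponding basis vector with \(\boldsymbol{v}\).

```python
import numpy as np

# Standard basis of R^3 and a vector v with components (2, -1, 5).
e = np.eye(3)  # rows are the basis vectors e_0, e_1, e_2
v = np.array([2.0, -1.0, 5.0])

# v_k = <e_k, v>: the k-th component is the scalar product with e_k.
components = [float(np.dot(e[k], v)) for k in range(3)]
print(components)  # [2.0, -1.0, 5.0]
```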

If we do not work with finite-dimensional vectors but with functions, then we have to form the scalar product between the \(k\)-th basis function and the function \(f\) to get the \(k\)-th Fourier coefficient of \(f\):

k-th Fourier coefficient as the inner product of the k-th basis vector with the function

Formula anchor$$ \begin{align} \widetilde{f}_k ~=~ \langle \boldsymbol{e}_k | f \rangle \end{align} $$

To indicate that we may be working with an infinite-dimensional vector space here, we do not call the operation 4 a scalar product but an inner product, and we write it (as physicists do) in bra-ket notation.

Let us first write out the inner product analogously to the finite-dimensional case 4, with the only difference that we interpret the components 3 of the function as function values: \(f(x_0)\), \(f(x_1)\), \(f(x_2)\), and so on. We also interpret the components of the basis functions as function values:

Inner product as the sum of the products of the function values

Formula anchor$$ \begin{align} \widetilde{f}_k ~=~ \boldsymbol{e}_k(x_0) \, f(x_0) ~+~ \boldsymbol{e}_k(x_1) \, f(x_1) ~+~ \dots ~+~ \boldsymbol{e}_k(x_0+L) \, f(x_0+L) \end{align} $$

Here we sum up to the point \(x = x_0 + L\) because, as mentioned before, a Fourier series can only represent a function within a chosen interval. But don't take this summation too seriously; for now we just want to motivate the formula for the inner product of functions.

We can compactly write down the summation over the function argument with a sum sign:

Inner product compactly noted as the sum of the products of the function values

Formula anchor$$ \begin{align} \widetilde{f}_k ~&=~ \langle \boldsymbol{e}_k | f \rangle \\\\
~&=~ \underset{x~=~x_0}{\overset{x_0+L}{\boxed{+}}} ~ \boldsymbol{e}_k(x) \, f(x) \end{align} $$

And here we have a problem: how is the summation in 7 supposed to work for a function at all? The summation index \(x\) is a continuous variable in the case of functions! That means: even between the points \(x_0\) and \(x_1\) there are theoretically infinitely many other function values, which we have simply omitted here. We can easily solve this problem: since we are dealing with a continuous summation, we simply replace the sum sign with an integral:

Formula for the k-th Fourier coefficient

Formula anchor$$ \begin{align} \widetilde{f}_k ~&=~ \langle \boldsymbol{e}_k | f \rangle \\\\
~&=~ \int_{x_0}^{x_0+L} \text{d}x \, \boldsymbol{e}^*_k(x) \, f(x) \end{align} $$

We also need to complex-conjugate the first argument (here: the basis function) of the inner product in the integral. This is necessary if we also allow complex-valued functions \(f: \mathbb{R} \rightarrow \mathbb{C} \). If we did not do this, the integral would not be an inner product in the case of complex-valued functions, because it would not satisfy the mathematical properties of an inner product.

So, hopefully now you understand how to determine the Fourier coefficients and where formula 8 comes from in the first place: you simply have to form the inner product of the function \(f\) with the basis functions, that is, calculate the integral 8.
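If you want to see the integral for a Fourier coefficient in action, here is a hedged numerical sketch in Python (the helper `fourier_coefficient`, the test function \(\sin\) and the interval \([0, 2\pi]\) are my own illustrative choices): the integral is approximated by a Riemann sum over one interval of length \(L\), with the normalized exponential basis functions.

```python
import numpy as np

def fourier_coefficient(f, n, x0=0.0, L=2*np.pi, samples=10_000):
    """Numerically evaluate f~_n = <e_n | f>: the integral over one interval of
    conj(e_n(x)) * f(x), with basis e_n(x) = exp(i*k*x)/sqrt(L), k = 2*pi*n/L."""
    x = np.linspace(x0, x0 + L, samples, endpoint=False)
    dx = L / samples
    k = 2 * np.pi * n / L
    basis = np.exp(1j * k * x) / np.sqrt(L)
    return np.sum(np.conj(basis) * f(x)) * dx  # Riemann sum approximating the integral

# For f(x) = sin(x) on [0, 2*pi], only n = +1 and n = -1 contribute:
print(fourier_coefficient(np.sin, 1))   # ~ -1.2533j  (= -i*pi/sqrt(2*pi))
print(fourier_coefficient(np.sin, 2))   # ~ 0
```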

Fourier basis

Let's move on to the basis functions. What basis functions can we use for the Fourier series 2? All those that satisfy the properties of a basis! For us to call a set of vectors, or as in our case, a set of functions a basis, they must satisfy two conditions:

If we take two basis functions, then they must be orthonormal to each other, that is, orthogonal and normalized.

The set of basis functions must be complete. In other words, they must span the space in which the functions \(f\) live. Only then are we able to represent every function \(f\) as a linear combination of these basis functions.

A typical basis used in physics is a set of complex exponential functions:

Complex exponentials as basis for the Fourier series

Formula anchor$$ \begin{align} \boldsymbol{e}_k(x) ~=~ \frac{1}{\sqrt{L}} \, \text{e}^{\mathrm{i}\, k\, x} \end{align} $$

The factor \(\frac{1}{\sqrt{L}}\) ensures that the basis functions are normalized.

For each wavenumber \(k\) you get a different basis function. Maybe you have seen a different basis for the Fourier series, such as cosine and sine. As I said, we are free to choose a basis. Here we choose complex exponential functions, because they can be written compactly, which is especially convenient for explaining the Fourier series.

The Fourier series 2 in this exponential basis would then look like this:

Fourier series in the exponential basis

Formula anchor$$ \begin{align} f ~=~ \frac{1}{\sqrt{L}} ~ \underset{k}{\boxed{+}} ~ \widetilde{f}_k \, \text{e}^{\mathrm{i}\, k\, x} \end{align} $$

Property #1: Fourier basis is orthonormal

Let's look at the first property of basis functions: Orthonormality. We can check orthonormality by taking two different basis functions, \(\boldsymbol{e}_k\) and \(\boldsymbol{e}_{k'}\), and forming the inner product, as in 8.

For the two functions to be orthonormal, their inner product must yield \(\langle \boldsymbol{e}_k | \boldsymbol{e}_{k'} \rangle = 1\) if \(k = k'\) (that is, if we take the inner product of a function with itself).

And the inner product must be zero, \(\langle \boldsymbol{e}_k | \boldsymbol{e}_{k'} \rangle = 0\), if \(k \neq k'\) (that is, if we take the inner product of two different functions).

We can combine the two cases into one equation if we use a Kronecker delta:

Here we combined the two normalization factors to \(\frac{1}{L}\) and complex-conjugated the first exponential function (hence the minus sign in the exponent).

Here you also see why the factor \( \frac{1}{\sqrt{L}} \) is necessary for the basis functions: So that we get a 1 for the inner product 15, as it should be for orthonormal vectors or functions.

For the case \(k \neq k'\), that is, for two different basis functions, the integral 14 must yield zero. Integrating an exponential function yields the exponential function again, together with a prefactor:

In order for the two exponential functions in the parentheses to be equal, we must assume periodic boundary conditions. That means the exponential function at the point \(x_0\) must equal the exponential function at the point \(x_0 + L\). Under this condition, the two terms in the parentheses cancel and the integral is zero.

As you can see, the chosen basis functions 9, together with periodic boundary conditions, are orthonormal.
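We can also check this orthonormality numerically. The following Python sketch (the interval, sample count and helper names are my own choices) approximates the inner product by a Riemann sum for the exponential basis functions with \(k = 2\pi n / L\), which is exactly the choice that enforces the periodic boundary conditions:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 10_000, endpoint=False)
dx = L / x.size

def e(n):
    """Basis function e_k(x) = exp(i*k*x)/sqrt(L) with k = 2*pi*n/L,
    so that e(n) is periodic on the interval of length L."""
    return np.exp(2j * np.pi * n * x / L) / np.sqrt(L)

def inner(a, b):
    """Inner product <a | b> approximated as a Riemann sum."""
    return np.sum(np.conj(a) * b) * dx

print(round(abs(inner(e(3), e(3))), 6))  # 1.0 (normalized)
print(round(abs(inner(e(3), e(5))), 6))  # 0.0 (orthogonal)
```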

Property #2: Fourier basis is complete

The second property that the set of functions must satisfy to be a basis is the completeness relation. With this relation we ensure that we can represent any function \(f(x)\) using the chosen basis \( \{ \boldsymbol{e}_k(x) \} \).

Insert the formula for the Fourier coefficients 8 into the Fourier series 2:

Derivation of the completeness relation for the Fourier basis

The sum over the two basis functions, together with \(f(x')\) in the integral, picks out the value of the function at \(x\), namely \(f(x)\). Exactly this behavior is exhibited by the delta function \(\delta(x-x')\).

Completeness relation for the Fourier basis

Formula anchor$$ \begin{align} \underset{k}{\boxed{+}}~ \boldsymbol{e}^*_k(x') \, \boldsymbol{e}_k(x) ~=~\delta(x-x') \end{align} $$
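The completeness relation can be made plausible numerically as well: truncating the sum over \(k\) at some \(n_{\text{max}}\) gives a function (often called the Dirichlet kernel) that becomes ever more sharply peaked at \(x = x'\) as \(n_{\text{max}}\) grows. A small Python sketch, with helper name and sample points of my own choosing:

```python
import numpy as np

L = 1.0

def partial_completeness(x, xp, n_max):
    """Truncated sum over k of conj(e_k(x')) * e_k(x) for n = -n_max..n_max,
    with k = 2*pi*n/L; it approximates the delta function delta(x - x')."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.exp(2j * np.pi * n * (x - xp) / L)) / L

# At x = x' every term equals 1/L, so the sum grows without bound (the delta peak):
print(partial_completeness(0.3, 0.3, 50).real)  # 101.0
# Away from x' the terms oscillate and largely cancel, leaving a value of order one:
print(abs(partial_completeness(0.3, 0.8, 50)))
```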

Example: Fourier series for the sawtooth function

Now you should have a solid, intuitive understanding of a Fourier series and how to theoretically calculate it for a function. Let's do a concrete example of how we can specify a Fourier series for a function.

Let's look at a sawtooth function between \(x=0\) and \(x=1\). It is defined as follows:

Since we have not inserted a concrete value for \(k\), we have thereby determined ALL Fourier coefficients at once. For a different value of \(k\) we get a different Fourier coefficient. Let's insert the Fourier coefficients into the Fourier series and combine the two exponential functions:

In the last step we used \( k = \frac{2\pi}{L} \, n \), with \( L = 1 \). Note that we sum over negative and positive \( n \) here!

With this Fourier series for the sawtooth function, we basically gained two things.

We can sum the series only up to a certain maximum value \(n_{\text{max}}\) and thus obtain an arbitrarily good approximation of the sawtooth function.

Since we have determined the Fourier coefficients, we know which \(n\) values are contained in the sawtooth function (for example, \(n=0\) is not contained). So we know which building blocks (basis functions) the sawtooth function is composed of. This "breaking down of the function into individual components" is called Fourier analysis.
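Since the article's exact sawtooth definition is not repeated here, the following Python sketch assumes the zero-mean sawtooth \(f(x) = x - \frac{1}{2}\) on \([0, 1)\), whose \(n = 0\) coefficient indeed vanishes. Its Fourier coefficients work out to \(\widetilde{f}_n = \frac{\mathrm{i}}{2\pi n}\) for \(n \neq 0\); pairing the terms \(n\) and \(-n\) gives the real partial sums used below, and the error shrinks as \(n_{\text{max}}\) grows.

```python
import numpy as np

def sawtooth_partial_sum(x, n_max):
    """Partial Fourier series of the zero-mean sawtooth f(x) = x - 1/2 on [0, 1).
    Coefficients f~_n = i/(2*pi*n) for n != 0; combining the n and -n terms
    gives the real form -sin(2*pi*n*x)/(pi*n)."""
    n = np.arange(1, n_max + 1)
    return -np.sum(np.sin(2 * np.pi * n * x) / (np.pi * n))

# Exact value at x = 0.25 is f(0.25) = -0.25; the partial sums approach it:
for n_max in (5, 50, 500):
    approx = sawtooth_partial_sum(0.25, n_max)
    print(n_max, approx, abs(approx - (-0.25)))  # error shrinks with n_max
```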

In the next lessons, we'll look at how we can use Fourier series to perform a Fourier analysis of complicated functions. Another thing we will learn is how we can use the so-called Fourier transform to approximate a function \(f\) not only on an interval but on the whole space (\(L \rightarrow \infty \)), globally.
