Trace Formulas

A trace formula is an equation that relates two kinds of data – “spectral” data related to representations (or eigenvalues of certain operators), and “geometric” data, related to integrals along “orbits” on some space.

The name “trace formula” comes from how this equation is obtained: by expanding, in two different ways, the “trace” of a certain operator R_{f} (which depends on a compactly supported “test function” f(x) on a topological group G) acting on the square-integrable functions on a compact quotient \Gamma\backslash G of G by a discrete subgroup \Gamma (these functions give a representation of G by translation).

The operator R_{f} takes a function \phi(x) on the group G, translates it (recall for example that acting on functions by translation is how we defined the representation of the group \mathbb{R} in Representation Theory and Fourier Analysis), multiplies it by the test function f(x), then integrates over the group G (the group G must be equipped with a measure called “Haar measure” to do this) to obtain a new function (R_{f}\phi)(y):

\displaystyle (R_{f}\phi)(y)=\int_{G}\phi(yx)f(x)dx

Substituting x\mapsto y^{-1}x, we can also express this as

\displaystyle (R_{f}\phi)(y)=\int_{G}\phi(x)f(y^{-1}x)dx

Let \Gamma be a discrete subgroup of G, such that the quotient \Gamma\backslash G is compact (this will turn out to be important later). Since functions \phi on \Gamma\backslash G satisfy \phi(\gamma x)=\phi(x) for all \gamma\in\Gamma, instead of integrating over all of G we may integrate over the quotient \Gamma\backslash G, folding the sum over \Gamma into the integrand as follows:

\displaystyle (R_{f}\phi)(y)=\int_{\Gamma\backslash G}\phi(x)\sum_{\gamma\in\Gamma}f(y^{-1}\gamma x)dx

The sum \sum_{\gamma\in\Gamma}f(y^{-1}\gamma x) is called the “kernel” of the operator R_{f} and is denoted by K(x,y). We have

\displaystyle (R_{f}\phi)(y)=\int_{\Gamma\backslash G}K(x,y)\phi(x)dx

So the operator R_{f} acts by integrating \phi(x) against the kernel K(x,y) over the quotient \Gamma\backslash G. Compare this with how a matrix with entries A_{mn} acts on the components v_{n} of a finite-dimensional vector:

\displaystyle v_{m}=\sum_{n}A_{mn}v_{n}

Note that we think of integrals as analogous to sums for infinite dimensions, as functions are analogous to vectors in infinite dimensions. Now we can see that the kernel K(x,y) is the analogue of the entries of some matrix!

The “trace” of a matrix is just the sum of its diagonal entries, i.e. the sum of A_{nn} for all n. Therefore, the trace of the operator defined above is the integral of K(x,x) (i.e. we set x=y) over \Gamma\backslash G.

\displaystyle \mathrm{tr}(R_{f})=\int_{\Gamma\backslash G} K(x,x)dx

Now recall that the kernel K(x,y) is given by the sum \sum_{\gamma\in\Gamma}f(y^{-1}\gamma x). Therefore the trace will be given by

\displaystyle \mathrm{tr}(R_{f})=\int_{\Gamma\backslash G}\sum_{\gamma\in\Gamma}f(x^{-1}\gamma x)dx.

Some analysis manipulations will allow us to re-express the trace as the sum

\displaystyle \mathrm{tr}(R_{f})=\sum_{\gamma\in\lbrace \Gamma\rbrace}\mathrm{vol}(\Gamma_{\gamma}\backslash G_{\gamma})\int_{G_{\gamma}\backslash G} f(x^{-1}\gamma x)dx

Here the sum runs over representatives \gamma of the conjugacy classes in \Gamma; for each such \gamma, the integral of f(x^{-1}\gamma x) is taken over the quotient G_{\gamma}\backslash G, where G_{\gamma} is the centralizer of \gamma in G (and \Gamma_{\gamma} is the centralizer of \gamma in \Gamma), and it is multiplied by a factor called the “volume” of \Gamma_{\gamma}\backslash G_{\gamma}.

The integral of f(x^{-1}\gamma x) over G_{\gamma}\backslash G is called an “orbital integral“. This expansion of the trace is going to be the “geometric side” of the trace formula.
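To see the kernel picture in action, here is a toy computation (a sketch only: we use a finite group in place of G, so integrals become sums, Haar measure becomes counting measure, and compactness is automatic; the group S_3, the subgroup, and the test function are all illustrative choices). We build the matrix of R_{f} on functions on \Gamma\backslash G straight from the definition and check that its trace equals the sum of f(x^{-1}\gamma x) over coset representatives x and \gamma\in\Gamma:

```python
from itertools import permutations
import random

# G = S_3, realized as permutation tuples; mul(a, b) is "a composed with b"
G = list(permutations(range(3)))
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a): return tuple(sorted(range(3), key=lambda i: a[i]))

Gamma = [(0, 1, 2), (1, 0, 2)]              # a subgroup of order 2

# choose representatives of the right cosets Gamma\G, indexing each coset
reps, coset_index = [], {}
for x in G:
    if x not in coset_index:
        for g in Gamma:
            coset_index[mul(g, x)] = len(reps)
        reps.append(x)

f = {x: random.randint(-5, 5) for x in G}   # arbitrary test function on G

# Matrix of R_f on Gamma-invariant functions, built straight from the
# definition (R_f phi)(y) = sum over x in G of f(y^{-1} x) phi(x),
# in the basis of indicator functions of the cosets.
n = len(reps)
A = [[0] * n for _ in range(n)]
for iy, y in enumerate(reps):
    for x in G:
        A[iy][coset_index[x]] += f[mul(inv(y), x)]
trace_matrix = sum(A[i][i] for i in range(n))

# "Geometric" expansion: sum of f(x^{-1} gamma x) over coset reps and Gamma
trace_geom = sum(f[mul(inv(x), mul(g, x))] for x in reps for g in Gamma)

assert trace_matrix == trace_geom
print("tr(R_f) =", trace_matrix)
```

In this finite setting the unfolding of the sum over G into a sum over \Gamma\backslash G is just a regrouping of finitely many terms, which is what the assertion checks.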

We consider another way to expand the trace. Recall that to define the operator R_{f} we needed to act by translation. In the case that the quotient \Gamma\backslash G is compact, as we stated earlier, this representation (let us call it R) by translation decomposes into a direct sum of irreducible representations \pi, with multiplicities m(\pi,R). So we decompose first before taking the trace: each irreducible piece contributes the trace of the operator \pi(f)=\int_{G}f(x)\pi(x)dx, counted with multiplicity m(\pi,R).

This other expansion is called the “spectral side“. Since we have now expanded the same thing, the trace, in two ways, we can equate the two expansions:

\displaystyle \sum_{\gamma\in\lbrace \Gamma\rbrace}\mathrm{vol}(\Gamma_{\gamma}\backslash G_{\gamma})\int_{G_{\gamma}\backslash G}f(x^{-1}\gamma x)dx=\sum_{\pi} m(\pi,R)\,\mathrm{tr}\bigg(\int_{G}f(x)\pi(x)dx\bigg)

This equation is what is called the “trace formula”. Let us test it out for G=\mathbb{R}, \Gamma=\mathbb{Z}, like in Representation Theory and Fourier Analysis.

On the geometric side, f(x^{-1}\gamma x)=f(\gamma), since \mathbb{R} is abelian. \mathbb{Z} is also abelian, so the conjugacy classes are just the elements of \mathbb{Z}. We have G_{\gamma}=G and \Gamma_{\gamma}=\Gamma. One can check that the volume \mathrm{vol}(\mathbb{Z}\backslash\mathbb{R}) is 1 and the orbital integral is just f(\gamma). Replacing \gamma by n for notational convenience, we see that the geometric side is just the sum of f(n) over all integers n.

Let us now look at the spectral side. Recall that the representation decomposes into irreducible representations, each appearing with multiplicity 1, given by the characters x\mapsto e^{2 \pi i k x} for integers k. We consider the operator R_{f} now.

Recall that we let our representation act, multiply by the test function f, then integrate. We broke the representation up into irreducible pieces, on which it acts by multiplication by e^{2 \pi i k x}. What do we get when we multiply f(x) by a function of the form e^{2 \pi i k x} and integrate over x?

This is just the Fourier transform of the test function f, evaluated at k! Since we have an irreducible representation for every integer k, we sum over those. So we have an equality between the sum of f(n) over the integers n, and the sum of the values of its Fourier transform at the integers k!

This is actually a classical result in Fourier analysis known as Poisson summation:

\displaystyle \sum_{n\in\mathbb{Z}}f(n)=\sum_{k\in\mathbb{Z}}\int_{\mathbb{R}} e^{2\pi i k x}f(x)dx
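We can sanity-check this numerically (an illustrative sketch: we take the Gaussian f(x)=e^{-\pi x^{2}}, which is its own Fourier transform under the convention above, and truncate both sums, which is harmless since the terms decay very fast):

```python
import math

def f(x):
    return math.exp(-math.pi * x * x)    # Gaussian test function

def f_hat(k):
    # with the convention above, exp(-pi x^2) is its own Fourier transform
    return math.exp(-math.pi * k * k)

lhs = sum(f(n) for n in range(-50, 51))
rhs = sum(f_hat(k) for k in range(-50, 51))
print(lhs, rhs)                          # both approximately 1.08643
assert abs(lhs - rhs) < 1e-12
```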

Atle Selberg famously applied the trace formula to the representation of G=\mathrm{SL}_{2}(\mathbb{R}) on functions on a double quotient H\backslash\mathrm{SL}_2(\mathbb{R})/\mathrm{SO}(2), where H is a discrete subgroup. Note that the quotient \mathrm{SL}_2(\mathbb{R})/\mathrm{SO}(2) is the upper half-plane. Selberg chose H so that the double quotient is a Riemann surface of genus g\geq 2.

Selberg used the trace formula to relate lengths of closed geodesics on this surface (which arise from the orbital integrals) to eigenvalues of the Laplacian on the surface. Note that the Laplacian already appears in our example of Poisson summation, because e^{2 \pi i k x} is an eigenfunction of the one-dimensional Laplacian d^{2}/dx^{2}.

This may be why the spectral side is called “spectral”. The trace formula is fascinating on its own, but it is also very commonly used to study representations of certain groups via more familiar representations of other groups.

To do this, note that the spectral side contains information related to representations. If we can somehow relate the geometric sides of the trace formulas of two different groups, then we can relate their spectral sides!

This is an approach to the part of representation theory known as Langlands functoriality, which studies how representations are related given that the respective groups have “Langlands duals” that are related. Relating the geometric sides involves proving difficult theorems such as “smooth transfer” and the “fundamental lemma”.

Finally, it is worth noting that the spectral side is also used to study special values of L-functions. This is inspired by the work of Hecke expressing completed L-functions as Mellin transforms of modular forms. But that is for another time!

References:

Arthur-Selberg trace formula on Wikipedia

Poisson summation formula on Wikipedia

An Introduction to the Trace Formula by James Arthur

Selberg’s trace formula: an introduction by Jens Marklof


Representation Theory and Fourier Analysis

In Some Basics of Fourier Analysis we introduced some of the basic ideas in Fourier analysis, which is ubiquitous in many parts of both pure and applied math. In this post we look at these same ideas from a different point of view, that of representation theory.

Representation theory is a way of studying group theory by turning it into linear algebra, which in many cases is more familiar to us and easier to study.

A (linear) representation is just a group homomorphism from some group G we’re interested in, to the group of linear transformations of some vector space. If the vector space has some finite dimension n, the group of its linear transformations can be expressed as the group of n \times n matrices with nonzero determinant, also known as \mathrm{GL}_n(k) (k here is the field of scalars of our vector space).

In this post, we will focus on infinite-dimensional representation theory. In other words, we will be looking at homomorphisms of a group G to the group of linear transformations of an infinite-dimensional vector space.

“Infinite-dimensional vector spaces” shouldn’t scare us – in fact many of us encounter them in basic math. Functions are examples of such. After all, vectors are merely things we can scale and add to form linear combinations. Functions satisfy that too. That being said, if we are dealing with infinity we will often need to make use of the tools of analysis. Hence functional analysis is often referred to as “infinite-dimensional linear algebra” (see also Metric, Norm, and Inner Product).

Just as a vector v has components v_i indexed by i, a function f has values f(x) indexed by x. If we are working over uncountable things, instead of summation we may use integration.

We will also focus on unitary representations in this post. This means that the linear transformations are further required to preserve a complex inner product (which takes the form of an integral) on the vector space. To facilitate this, our functions must be square-integrable.

Consider the group of real numbers \mathbb{R} (under addition). We want to use representation theory to study this group. For our purposes we want the square-integrable functions on some quotient of \mathbb{R} as our vector space. It comes with an action of \mathbb{R}, by translation. In other words, an element a of \mathbb{R} acts on our function f(x) by sending it to the new function f(x+a).

So what is this quotient of \mathbb{R} that our functions will live on? For now let us choose the integers \mathbb{Z}. The quotient \mathbb{R}/\mathbb{Z} is the circle, and functions on it are periodic functions.

To recap: We have a representation of the group \mathbb{R} (the real line under addition) as linear transformations (also called linear operators) of the vector space of square-integrable functions on the circle.

In representation theory, we will often decompose a representation into a direct sum of irreducible representations. Irreducible means it contains no “subrepresentation” on a smaller vector space. The irreducible representations are the “building blocks” of other representations, so it is quite helpful to study them.

How do we decompose our representation into irreducible representations? Consider the representation of \mathbb{R} on the vector space \mathbb{C} (the complex numbers) where a real number a acts by multiplying a complex number z by e^{2\pi i k a}, for k an integer. This representation is irreducible.

If this looks familiar, this is just the Fourier series expansion for a periodic function. So a Fourier series expansion is just an expression of the decomposition of the representation of \mathbb{R} into irreducible representations!
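We can watch this decomposition happen numerically (a sketch using the discrete Fourier transform as a finite stand-in for Fourier series; the sample function and the shift are arbitrary choices): translating a periodic function by a multiplies its k-th Fourier coefficient by the phase e^{2\pi i k a}, which is exactly the statement that each Fourier mode spans an invariant one-dimensional subspace.

```python
import numpy as np

N = 1024
x = np.arange(N) / N
f = np.exp(np.cos(2 * np.pi * x))        # any smooth periodic sample function
shift = 100                              # translation by a = shift/N

c = np.fft.fft(f) / N                    # Fourier coefficients of f
c_shifted = np.fft.fft(np.roll(f, -shift)) / N   # coefficients of f(x + a)

# translation acts on the k-th coefficient by the character e^{2 pi i k a}
k = np.arange(N)
phase = np.exp(2j * np.pi * k * shift / N)
assert np.allclose(c_shifted, phase * c)
```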

What if we chose a different vector space instead? It might have been more straightforward to represent \mathbb{R} via functions on \mathbb{R} itself instead of on the circle \mathbb{R}/\mathbb{Z}. That may be true, but in this case our decomposition into irreducibles is not countable! The irreducible representations into which this other representation decomposes are the ones where a real number a acts on \mathbb{C} by multiplication by e^{2 \pi i k a}, where k is now a real number, not necessarily an integer. So they are not indexed by a countable set.

This should also look familiar to those who know Fourier analysis: This is the Fourier transform of a square-integrable function on \mathbb{R}.

So now we can see that concepts in Fourier analysis can also be phrased in terms of representations. Important theorems like the Plancherel theorem, for example, may also be understood as an isomorphism between the representations we gave and other representations on functions of the indices. We also have the Poisson summation formula in Fourier analysis. In representation theory this is an equality obtained from calculating a trace in two ways, as a sum over representations and as a sum over conjugacy classes.
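As a small numerical illustration of the Plancherel (Parseval) identity in this setting (assuming the sawtooth f(x)=x on [0,1), whose Fourier coefficients are c_{0}=1/2 and c_{n}=i/(2\pi n) for n\neq 0), the sum of |c_{n}|^{2} should equal the integral of |f(x)|^{2} over one period, which is 1/3:

```python
import math

# |c_0|^2 plus twice the sum over n >= 1 of |c_n|^2 = 1/(4 pi^2 n^2)
lhs = 0.25 + 2 * sum(1 / (4 * math.pi**2 * n**2) for n in range(1, 100001))
rhs = 1 / 3                              # integral of |f(x)|^2 over one period
print(lhs, rhs)                          # equal to within about 1e-6
```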

Now we see how Fourier analysis is related to the infinite-dimensional representation theory of the group \mathbb{R} (one can also see this as the infinite-dimensional representation theory of the circle, i.e. the group \mathbb{R}/\mathbb{Z} – the article “Harmonic Analysis and Group Representations” by James Arthur discusses this point of view). What if we consider other groups instead, like, say, \mathrm{GL}_n(\mathbb{R}) or \mathrm{SL}_n(\mathbb{R}) (or \mathbb{R} can be replaced by other rings even)?

Things get more complicated: for example, the group may no longer be abelian. Since we used integration so much, we also need an analogue of it (a Haar measure). So we need to know a lot about both group theory and analysis, and everything in between.

These questions have been much explored for the kinds of groups called “reductive”, which are closely related to Lie groups. They include the examples of \mathrm{GL}_n(\mathbb{R}) and \mathrm{SL}_n(\mathbb{R}) earlier, as well as certain other groups we have discussed in previous posts, such as the orthogonal and unitary groups (see also Rotations in Three Dimensions). There is a theory for these groups analogous to what I have discussed in this post, and hopefully this will be discussed more in future blog posts here.

References:

Representation theory on Wikipedia

Representation of a Lie group on Wikipedia

Fourier analysis on Wikipedia

Harmonic analysis on Wikipedia

Plancherel theorem on Wikipedia

Poisson summation formula on Wikipedia

An Introduction to the Trace Formula by James Arthur

Harmonic Analysis and Group Representations by James Arthur

Hecke Operators

A Hecke operator is a certain kind of linear transformation on the space of modular forms or cusp forms (see also Modular Forms) of a certain fixed weight k. They were originally used by (and are now named after) Erich Hecke, who used them to study L-functions (see also Zeta Functions and L-Functions) and in particular to determine the conditions for an L-series \sum_{n=1}^{\infty}a_{n}n^{-s} to have an Euler product. Together with the meromorphic continuation and the functional equation, these are the important properties of the Riemann zeta function, which L-functions are supposed to generalize. Hecke’s study was inspired by the work of Bernhard Riemann on the zeta function.

An example of a Hecke operator is the one commonly denoted T_{p}, for p a prime number. To understand it conceptually, we must take the view of modular forms as functions on lattices. This is equivalent to the definition of modular forms as functions on the upper half-plane, if we recall that a lattice \Lambda can also be expressed as \mathbb{Z}+\tau\mathbb{Z} where \tau is a complex number in the upper half-plane (see also The Moduli Space of Elliptic Curves).

In this view a modular form is a function on the space of lattices on \mathbb{C} such that

  • f(\mathbb{Z}+\tau\mathbb{Z}) is holomorphic as a function on the upper half-plane
  • f(\mathbb{Z}+\tau\mathbb{Z}) is bounded as \tau goes to i\infty
  • f(\mu\Lambda)=\mu^{-k}f(\Lambda) for any nonzero complex number \mu, where k is the weight of the modular form 

Now we define the Hecke operator T_{p} by what it does to a modular form f(\Lambda) of weight k as follows:

\displaystyle T_{p}f(\Lambda)=p^{k-1}\sum_{\Lambda'\subset \Lambda}f(\Lambda')

where \Lambda' runs over the sublattices of \Lambda of index p. In other words, applying T_{p} to a modular form gives back a modular form whose value on a lattice \Lambda is the sum of the values of the original modular form on the sublattices of \Lambda of index p, times some factor that depends on the Hecke operator and the weight of the modular form.

Hecke operators are also often defined via their effect on the Fourier expansion of a modular form. Let f(\tau) be a modular form of weight k whose Fourier expansion is given by \sum_{n=0}^{\infty}a_{n}q^{n}, where we have adopted the convention q=e^{2\pi i \tau} which is common in the theory of modular forms (hence this Fourier expansion is also known as a q-expansion). Then the effect of a Hecke operator T_{p} is as follows:

\displaystyle T_{p}f(\tau)=\sum_{n=0}^{\infty}(a_{pn}+p^{k-1}a_{n/p})q^{n}

where a_{n/p}=0 when p does not divide n. To see why this follows from our first definition of the Hecke operator, we note that if our lattice is given by \mathbb{Z}+\tau\mathbb{Z}, there are p+1 sublattices of index p: There are p of these sublattices given by p\mathbb{Z}+(j+\tau)\mathbb{Z} for j ranging from 0 to p-1, and another one given by \mathbb{Z}+(p\tau)\mathbb{Z}. Let us split up the Hecke operators as follows:

\displaystyle T_{p}f(\mathbb{Z}+\tau\mathbb{Z})=p^{k-1}\sum_{j=0}^{p-1}f(p\mathbb{Z}+(j+\tau)\mathbb{Z})+p^{k-1}f(\mathbb{Z}+p\tau\mathbb{Z})=\Sigma_{1}+\Sigma_{2}

where \Sigma_{1}=p^{k-1}\sum_{j=0}^{p-1}f(p\mathbb{Z}+(j+\tau)\mathbb{Z}) and \Sigma_{2}=p^{k-1}f(\mathbb{Z}+p\tau\mathbb{Z}). Let us focus on the former first. We have

\displaystyle \Sigma_{1}=p^{k-1}\sum_{j=0}^{p-1}f(p\mathbb{Z}+(j+\tau)\mathbb{Z})

But applying the third property of modular forms above, namely that f(\mu\Lambda)=\mu^{-k}f(\Lambda) with \mu=p, we have

\displaystyle \Sigma_{1}=p^{-1}\sum_{j=0}^{p-1}f(\mathbb{Z}+((j+\tau)/p)\mathbb{Z})

Now the arguments of the modular forms being summed are written in the usual way, except that instead of \tau we have (j+\tau)/p, so we expand them as Fourier series:

\displaystyle \Sigma_{1}=p^{-1}\sum_{j=0}^{p-1}\sum_{n=0}^{\infty}a_{n}e^{2\pi i n((j+\tau)/p)}

We can switch the summations since one of them is finite

\displaystyle \Sigma_{1}=p^{-1}\sum_{n=0}^{\infty}\sum_{j=0}^{p-1}a_{n}e^{2\pi i n((j+\tau)/p)}

The inner sum over j is zero unless p divides n, in which case it is equal to p, canceling the factor of p^{-1}. Writing n=pm and then renaming m back to n, this gives us

\displaystyle \Sigma_{1}=\sum_{n=0}^{\infty}a_{pn}q^{n}

where again q=e^{2\pi i \tau}. Now consider \Sigma_{2}. We have

\displaystyle \Sigma_{2}=p^{k-1}f(\mathbb{Z}+p\tau\mathbb{Z})

Expanding the right hand side into a Fourier series, we have

\displaystyle \Sigma_{2}=p^{k-1}\sum_{n}a_{n}e^{2\pi i n p\tau}

Reindexing by replacing pn with n (recall that a_{n/p}=0 when p does not divide n), we have

\displaystyle \Sigma_{2}=p^{k-1}\sum_{n}a_{n/p}q^{n}

and adding together \Sigma_{1} and \Sigma_{2} gives us our result.
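The coefficient formula is straightforward to turn into code. Here is a minimal sketch (the function name hecke_tp and the handling of the truncation are our own choices): given the truncated q-expansion a[0], a[1], ... of a weight-k form, it returns the truncated expansion of T_{p}f.

```python
def hecke_tp(a, p, k):
    """Apply T_p to the truncated q-expansion a[0], a[1], ... of a
    weight-k modular form, returning the (shorter) expansion of T_p f."""
    bound = len(a) // p                  # b_n needs a_{pn}, so n < len(a)/p
    b = []
    for n in range(bound):
        term = a[p * n]                  # the a_{pn} part
        if n % p == 0:                   # the p^{k-1} a_{n/p} part,
            term += p ** (k - 1) * a[n // p]  # which is 0 if p does not divide n
        b.append(term)
    return b
```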

The Hecke operators can be defined not only for prime numbers but for all natural numbers, and any two Hecke operators T_{m} and T_{n} commute with each other. They preserve the weight of a modular form, and take cusp forms to cusp forms (this can be seen via their effect on the Fourier expansion). We can also define Hecke operators for modular forms with level structure, but this is more complicated, with some subtleties when the index n of the Hecke operator T_{n} shares a common factor with the level.

If a cusp form f is an eigenvector for a Hecke operator T_{n}, and it is normalized, i.e. its Fourier coefficient a_{1} is equal to 1, then the corresponding eigenvalue of the Hecke operator T_{n} on f is precisely the Fourier coefficient a_{n}.

Now the Hecke operators satisfy the following multiplicativity properties:

  • T_{m}T_{n}=T_{mn} for m and n mutually prime
  • T_{p^{n}}T_{p}=T_{p^{n+1}}+p^{k-1}T_{p^{n-1}} for p prime

Suppose we have an L-series \sum_{n}a_{n}n^{-s}. This L-series will have an Euler product if and only if the coefficients a_{n} satisfy the following:

  • a_{m}a_{n}=a_{mn} for m and n mutually prime
  • a_{p^{n}}a_{p}=a_{p^{n+1}}+p^{k-1}a_{p^{n-1}} for p prime

Given that the Fourier coefficients of a normalized Hecke eigenform (a normalized cusp form that is a simultaneous eigenvector for all the Hecke operators) are the eigenvalues of the Hecke operators, we see that the L-series of a normalized Hecke eigenform has an Euler product.
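We can verify these relations numerically on a concrete eigenform (an illustrative sketch; the truncation order is arbitrary): the discriminant form \Delta=q\prod_{n\geq 1}(1-q^{n})^{24} is a normalized cusp form of weight 12, and its Fourier coefficients, the values of the Ramanujan tau function, should satisfy both conditions with k=12.

```python
# q-expansion of Delta = q * prod_{n>=1} (1 - q^n)^24, truncated at q^30
N = 30
poly = [0] * (N + 1)
poly[0] = 1
for n in range(1, N + 1):
    for _ in range(24):
        for i in range(N, n - 1, -1):    # multiply in place by (1 - q^n)
            poly[i] -= poly[i - n]
tau = [0] + poly[:N]                     # tau[n] = coefficient of q^n in Delta
k = 12                                   # weight of Delta

# a_m a_n = a_{mn} for coprime m and n
assert tau[2] * tau[3] == tau[6] and tau[2] * tau[5] == tau[10]

# a_{p^n} a_p = a_{p^{n+1}} + p^{k-1} a_{p^{n-1}}, checked here for n = 1
for p in (2, 3, 5):
    assert tau[p] * tau[p] == tau[p * p] + p ** (k - 1) * tau[1]

# Delta is an eigenform: the coefficient formula for T_2 gives tau(2)
# times the original expansion
t2 = []
for n in range(15):
    term = tau[2 * n]
    if n % 2 == 0:
        term += 2 ** (k - 1) * tau[n // 2]
    t2.append(term)
assert t2 == [tau[2] * tau[n] for n in range(15)]
print("tau(2) =", tau[2])                # prints -24
```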

In addition to the Hecke operators T_{n}, there are also other closely related operators such as the diamond operator \langle n\rangle and another operator denoted U_{p}. These and more on Hecke operators, such as other ways to define them with double coset operators or Hecke correspondences will hopefully be discussed in future posts.

References:

Hecke Operator on Wikipedia

Modular Forms by Andrew Snowden

Congruences between Modular Forms by Frank Calegari

A First Course in Modular Forms by Fred Diamond and Jerry Shurman

Advanced Topics in the Arithmetic of Elliptic Curves by Joseph H. Silverman

Some Basics of Fourier Analysis

Why do we study sine and cosine waves so much? Most waves, like most water waves and most sound waves, do not resemble sine and cosine waves at all (we will henceforth refer to sine and cosine waves as sinusoidal waves).

Well, it turns out that while most waves are not sinusoidal waves, all of them are actually combinations of sinusoidal waves of different sizes and frequencies. Hence we can understand much about essentially any wave simply by studying sinusoidal waves. This idea that any wave is a combination of multiple sinusoidal waves is part of the branch of mathematics called Fourier analysis.

Here’s a suggestion for an experiment from the book Vibrations and Waves by A.P. French: If you speak into the strings of a piano (I believe one of the pedals has to be held down first), the strings will vibrate, and since each string corresponds to a sine wave of a certain frequency, this will give you the breakdown of the sine wave components that make up your voice. If a string vibrates more strongly than the others, it means there’s a bigger part of that frequency in your voice, i.e. that sine wave component has a bigger amplitude.

More technically, we can express these concepts in the following manner. Let f(x) be a function that is integrable over some interval from x_{0} to x_{0}+P (for a wave, we can take P to be the “period” over which the wave repeats itself). Then over this interval the function can be expressed as the sum of sine and cosine waves of different sizes and frequencies, as follows:

\displaystyle f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\bigg(a_{n}\text{cos}\bigg(\frac{2\pi nx}{P}\bigg)+b_{n}\text{sin}\bigg(\frac{2\pi nx}{P}\bigg)\bigg)

This expression is called the Fourier series expansion of the function f(x). The coefficient \frac{a_{0}}{2} is the “level” around which the waves oscillate; the other coefficients a_{n} and b_{n} give the amplitude, or the “size”, of the respective waves, which complete n cycles over the period P. Of course, the bigger n is, the “faster” these waves oscillate.
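As a concrete example (a sketch; the square wave and the truncation point are our own choices), a square wave of period 1 alternating between +1 and -1 has a_{n}=0 for all n and b_{n}=4/(\pi n) for odd n, and truncating the series already gives a good approximation away from the jumps:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)
target = np.sign(np.sin(2 * np.pi * x))  # the square wave itself

# partial Fourier sum using the odd harmonics up to n = 399
partial = sum(4 / (np.pi * n) * np.sin(2 * np.pi * n * x)
              for n in range(1, 400, 2))

# away from the jumps at x = 0, 1/2, 1 the truncated series is already close
interior = (np.abs(x - 0.5) > 0.05) & (x > 0.05) & (x < 0.95)
print(np.max(np.abs(partial[interior] - target[interior])))
# prints a small number: the series converges pointwise away from the jumps
```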

Now given a function f(x) that satisfies the condition given earlier, how do we know what sine and cosine waves make it up? For this we must know what the coefficients a_{n} and b_{n} are.

In order to solve for a_{n} and b_{n}, we will make use of the property of the sine and cosine functions called orthogonality (the rest of the post will make heavy use of the language of calculus, therefore the reader might want to look at An Intuitive Introduction to Calculus):

\displaystyle \frac{1}{\pi}\int_{x_{0}}^{x_{0}+2\pi}\text{cos}(mx)\text{cos}(nx)dx=0    if m\neq n

\displaystyle \frac{1}{\pi}\int_{x_{0}}^{x_{0}+2\pi}\text{cos}(mx)\text{cos}(nx)dx=1    if m=n

\displaystyle \frac{1}{\pi}\int_{x_{0}}^{x_{0}+2\pi}\text{sin}(mx)\text{sin}(nx)dx=0    if m\neq n

\displaystyle \frac{1}{\pi}\int_{x_{0}}^{x_{0}+2\pi}\text{sin}(mx)\text{sin}( nx)dx=1    if m=n

\displaystyle \frac{1}{\pi}\int_{x_{0}}^{x_{0}+2\pi}\text{cos}(mx)\text{sin}(nx)dx=0    for all m,n

What this means is that when a sine or cosine function is not properly “paired” then its integral over an interval equal to its period will always be zero. It will only give a nonzero value if it is properly paired, and we can “rescale” this value to make it equal to 1.

Now we can look at the following expression:

\displaystyle \frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{cos}(\frac{2\pi x}{P})dx

Knowing that the function f(x) has a Fourier series expansion as above, we now have

\displaystyle \frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{cos}\bigg(\frac{2\pi x}{P}\bigg)dx=\frac{2}{P}\int_{x_{0}}^{x_{0}+P}\bigg(\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\bigg(a_{n}\text{cos}\bigg(\frac{2\pi nx}{P}\bigg)+b_{n}\text{sin}\bigg(\frac{2\pi nx}{P}\bigg)\bigg)\bigg)\text{cos}\bigg(\frac{2\pi x}{P}\bigg)dx.

But we know that integrals involving the cosine function will always be zero unless it is properly paired; therefore it will be zero for all terms of the infinite series except for one, in which case it will yield (the constants are all there to properly scale the result)

\displaystyle \frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{cos}\bigg(\frac{2\pi x}{P}\bigg)dx=\frac{2}{P}\int_{x_{0}}^{x_{0}+P}a_{1}\text{cos}\bigg(\frac{2\pi x}{P}\bigg)\text{cos}\bigg(\frac{2\pi x}{P}\bigg)dx

\displaystyle \frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{cos}(\frac{2\pi x}{P})dx=a_{1}.

We have therefore used the orthogonality property of the cosine function to “filter” a single frequency component out of the many that make up our function.

Next we might use \text{cos}(\frac{4\pi x}{P}) instead of \text{cos}(\frac{2\pi x}{P}). This will give us

\displaystyle \frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{cos}(\frac{4\pi x}{P})dx=a_{2}.

We can continue the procedure to solve for the coefficients a_{3}, a_{4}, and so on, and we can replace the cosine function by the sine function to solve for the coefficients b_{1}, b_{2}, and so on. Of course, the coefficient a_{0} can also be obtained by using \text{cos}(0)=1.

In summary, we can solve for the coefficients using the following formulas:

\displaystyle a_{n}=\frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{cos}(\frac{2\pi nx}{P})dx

\displaystyle b_{n}=\frac{2}{P}\int_{x_{0}}^{x_{0}+P}f(x)\text{sin}(\frac{2\pi nx}{P})dx
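These formulas are easy to check numerically (a sketch, assuming the sawtooth f(x)=x on [0,1) with P=1 and x_{0}=0, for which a_{n}=0 and b_{n}=-1/(\pi n)):

```python
import numpy as np

P = 1.0
N = 20000
dx = P / N
x = (np.arange(N) + 0.5) * dx            # midpoint grid on one period
fx = x                                   # the sawtooth f(x) = x on [0, 1)

for n in (1, 2, 3):
    a_n = (2 / P) * np.sum(fx * np.cos(2 * np.pi * n * x / P)) * dx
    b_n = (2 / P) * np.sum(fx * np.sin(2 * np.pi * n * x / P)) * dx
    # a_n should be ~0 and b_n should match -1/(pi n)
    print(n, round(a_n, 6), round(b_n, 6), round(-1 / (np.pi * n), 6))
```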

Now that we have shown how a function can be “broken down” or “decomposed” into a (possibly infinite) sum of sine and cosine waves of different amplitudes and frequencies, we now revisit the relationship between the sine and cosine functions and the exponential function (see “The Most Important Function in Mathematics”) in order to obtain yet another expression for the Fourier series. We recall that, combining the concepts of the exponential function and complex numbers, we have the beautiful and important equation

\displaystyle e^{ix}=\text{cos}(x)+i\text{sin}(x)

which can also be expressed in the following forms:

\displaystyle \text{cos}(x)=\frac{e^{ix}+e^{-ix}}{2}

\displaystyle \text{sin}(x)=\frac{e^{ix}-e^{-ix}}{2i}.

Using these expressions, we can rewrite the Fourier series of a function in a more “shorthand” form:

\displaystyle f(x)=\sum_{n=-\infty}^{\infty}c_{n}e^{\frac{2\pi i nx}{P}}

where

\displaystyle c_{n}=\frac{1}{P}\int_{x_{0}}^{x_{0}+P}f(x)e^{-\frac{2\pi i nx}{P}}dx.

Finally, we discuss more concepts related to the process we used in solving for the coefficients a_{n}, b_{n}, and c_{n}. As we have already discussed, these coefficients express “how much” of the waves with frequency equal to n are in the function f(x). We can now abstract this idea to define the Fourier transform \hat{f}(k) of a function f(x) as follows:

\displaystyle \hat{f}(k)=\int_{-\infty}^{\infty}f(x)e^{-2\pi i kx}dx

There are of course versions of the Fourier transform that use the sine and cosine functions instead of the exponential function, but the form written above is more common in the literature. Roughly, the Fourier transform \hat{f}(k) also expresses “how much” of the waves with frequency equal to k are in the function f(x). The difference lies in the interval over which we are integrating; however, we may consider the formula for obtaining the coefficients of the Fourier series as taking the Fourier transform of a single cycle of a periodic function, with its value set to 0 outside of the interval occupied by the cycle, and with variables appropriately rescaled.

The Fourier transform has an “inverse”, which allows us to recover f(x) from \hat{f}(k):

\displaystyle f(x)=\int_{-\infty}^{\infty}\hat{f}(k)e^{2\pi i kx}dk.
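As a rough numerical check of this inversion (illustrative only: we again use the Gaussian f(x)=e^{-\pi x^{2}} and truncate both integrals to [-10,10], outside of which the integrands are negligible):

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
f = np.exp(-np.pi * xs**2)               # Gaussian test function

ks = xs                                  # reuse the same grid for k
# forward transform: hat{f}(k) = integral of f(x) e^{-2 pi i k x} dx
f_hat = np.array([np.sum(f * np.exp(-2j * np.pi * k * xs)) * dx for k in ks])

# inverse transform, evaluated at the single point x = 0.5
x0 = 0.5
recovered = np.sum(f_hat * np.exp(2j * np.pi * ks * x0)) * dx
print(recovered.real, np.exp(-np.pi * x0**2))   # both approximately 0.45594
```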

Fourier analysis, aside from being an interesting subject in itself, has many applications not only in other branches of mathematics but also in the natural sciences and in engineering. For example, in physics, the Heisenberg uncertainty principle of quantum mechanics (see More Quantum Mechanics: Wavefunctions and Operators) comes from the result in Fourier analysis that the more a function is “localized” around a small area, the more its Fourier transform will be spread out over all of space, and vice-versa. Since the probability amplitudes for the position and the momentum are related to each other as the Fourier transform and inverse Fourier transform of each other (a result of the de Broglie relations), this manifests in the famous principle that the more we know about the position, the less we know about the momentum, and vice-versa.

Fourier analysis can even be used to explain the distinctive “distorted” sound of electric guitars in rock and heavy metal music. Usually, plucking a guitar string produces a sound wave which is roughly sinusoidal. For electric guitars, the sound is amplified using transistors; however, there is a limit to how much amplification can be done, and at a certain point (technically, this is when the transistor is operating outside of the “linear region”), the sound wave looks like a sine function with its peaks and troughs “clipped”. In Fourier analysis this corresponds to an addition of higher-frequency components, and this results in the distinctive sound of that genre of music.

Yet another application of Fourier analysis, and in fact its original application, is the study of differential equations. The mathematician Joseph Fourier, after whom Fourier analysis is named, developed the techniques we have discussed in this post in order to study the differential equation expressing the flow of heat in a material. It so happens that difficult calculations, for example differentiation, involving a function correspond to easier ones, such as simple multiplication, involving its Fourier transform. Therefore it is a common technique to convert a difficult problem to a simple one using the Fourier transform, and after the problem has been solved, we use the inverse Fourier transform to get the solution to the original problem.

Despite the crude simplifications we have assumed in order to discuss Fourier analysis in this post, the reader should know that it remains a deep and interesting subject in modern mathematics. A more general and more advanced form of the subject is called harmonic analysis, and it is one of the areas where there is much research, both on its own, and in connection to other subjects.

References:

Fourier Analysis on Wikipedia

Fourier Series on Wikipedia

Fourier Transform on Wikipedia

Harmonic Analysis on Wikipedia

Vibrations and Waves by A.P. French

Fourier Analysis: An Introduction by Elias M. Stein and Rami Shakarchi