Hecke Operators

A Hecke operator is a certain kind of linear transformation on the space of modular forms or cusp forms (see also Modular Forms) of a fixed weight k. They were originally introduced by (and are now named after) Erich Hecke, who used them to study L-functions (see also Zeta Functions and L-Functions) and, in particular, to determine the conditions for an L-series \sum_{n=1}^{\infty}a_{n}n^{-s} to have an Euler product. Together with the meromorphic continuation and the functional equation, these are the important properties of the Riemann zeta function, of which L-functions are supposed to be generalizations. Hecke's study was inspired by the work of Bernhard Riemann on the zeta function.

An example of a Hecke operator is the one commonly denoted T_{p}, for p a prime number. To understand it conceptually, we must take the view of modular forms as functions on lattices. This is equivalent to the definition of modular forms as functions on the upper half-plane, if we recall that, up to scaling by a nonzero complex number, a lattice \Lambda can always be expressed as \mathbb{Z}+\tau\mathbb{Z}, where \tau is a complex number in the upper half-plane (see also The Moduli Space of Elliptic Curves).

In this view, a modular form is a function f on the space of lattices in \mathbb{C} such that

  • f(\mathbb{Z}+\tau\mathbb{Z}) is holomorphic as a function on the upper half-plane
  • f(\mathbb{Z}+\tau\mathbb{Z}) is bounded as \tau goes to i\infty
  • f(\mu\Lambda)=\mu^{-k}f(\Lambda) for any nonzero complex number \mu, where k is the weight of the modular form 

Now we define the Hecke operator T_{p} by what it does to a modular form f(\Lambda) of weight k as follows:

\displaystyle T_{p}f(\Lambda)=p^{k-1}\sum_{\Lambda'\subset \Lambda}f(\Lambda')

where \Lambda' runs over the sublattices of \Lambda of index p. In other words, applying T_{p} to a modular form gives back a modular form whose value on a lattice \Lambda is the sum of the values of the original modular form on the sublattices of \Lambda  of index p, times some factor that depends on the Hecke operator and the weight of the modular form.

Hecke operators are also often defined via their effect on the Fourier expansion of a modular form. Let f(\tau) be a modular form of weight k whose Fourier expansion is given by \sum_{n=0}^{\infty}a_{n}q^{n}, where we have adopted the convention q=e^{2\pi i \tau} which is common in the theory of modular forms (hence this Fourier expansion is also known as a q-expansion). Then the effect of the Hecke operator T_{p} is as follows:

\displaystyle T_{p}f(\tau)=\sum_{n=0}^{\infty}(a_{pn}+p^{k-1}a_{n/p})q^{n}

where a_{n/p}=0 when p does not divide n. To see why this follows from our first definition of the Hecke operator, we note that if our lattice is given by \mathbb{Z}+\tau\mathbb{Z}, there are p+1 sublattices of index p: there are p of them given by p\mathbb{Z}+(j+\tau)\mathbb{Z} for j ranging from 0 to p-1, and one more given by \mathbb{Z}+p\tau\mathbb{Z}. Let us split up the Hecke operator as follows:

\displaystyle T_{p}f(\mathbb{Z}+\tau\mathbb{Z})=p^{k-1}\sum_{j=0}^{p-1}f(p\mathbb{Z}+(j+\tau)\mathbb{Z})+p^{k-1}f(\mathbb{Z}+p\tau\mathbb{Z})=\Sigma_{1}+\Sigma_{2}

where \Sigma_{1}=p^{k-1}\sum_{j=0}^{p-1}f(p\mathbb{Z}+(j+\tau)\mathbb{Z}) and \Sigma_{2}=p^{k-1}f(\mathbb{Z}+p\tau\mathbb{Z}). Let us focus on the former first. We have

\displaystyle \Sigma_{1}=p^{k-1}\sum_{j=0}^{p-1}f(p\mathbb{Z}+(j+\tau)\mathbb{Z})

But applying the third property of modular forms above, namely that f(\mu\Lambda)=\mu^{-k}f(\Lambda), with \mu=p and \Lambda=\mathbb{Z}+((j+\tau)/p)\mathbb{Z} (so that \mu\Lambda=p\mathbb{Z}+(j+\tau)\mathbb{Z}), we have

\displaystyle \Sigma_{1}=p^{-1}\sum_{j=0}^{p-1}f(\mathbb{Z}+((j+\tau)/p)\mathbb{Z})

Now the arguments of the modular forms being summed are of the usual form, except that instead of \tau we have (j+\tau)/p, so we expand them as Fourier series

\displaystyle \Sigma_{1}=p^{-1}\sum_{j=0}^{p-1}\sum_{n=0}^{\infty}a_{n}e^{2\pi i n((j+\tau)/p)}

We can interchange the two summations, since one of them is finite

\displaystyle \Sigma_{1}=p^{-1}\sum_{n=0}^{\infty}\sum_{j=0}^{p-1}a_{n}e^{2\pi i n((j+\tau)/p)}
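
To evaluate the inner sum over j, note that e^{2\pi i n((j+\tau)/p)}=e^{2\pi i n\tau/p}e^{2\pi i nj/p}, so what we need is the standard evaluation of a sum over p-th roots of unity:

\displaystyle \sum_{j=0}^{p-1}e^{2\pi i nj/p}=\begin{cases}p & \text{if }p\mid n\\ 0 & \text{otherwise}\end{cases}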

Hence the inner sum over j is zero unless p divides n, in which case it is equal to p, canceling the factor of p^{-1} in front. Writing n=pm for the surviving terms, so that e^{2\pi i n\tau/p}=e^{2\pi i m\tau}, and then relabeling m as n, this gives us

\displaystyle \Sigma_{1}=\sum_{n=0}^{\infty}a_{pn}q^{n}

where again q=e^{2\pi i \tau}. Now consider \Sigma_{2}. We have

\displaystyle \Sigma_{2}=p^{k-1}f(\mathbb{Z}+p\tau\mathbb{Z})

Expanding the right hand side into a Fourier series, we have

\displaystyle \Sigma_{2}=p^{k-1}\sum_{n}a_{n}e^{2\pi i n p\tau}

Reindexing (replacing n by n/p, where as before a_{n/p}=0 when p does not divide n), we have

\displaystyle \Sigma_{2}=p^{k-1}\sum_{n}a_{n/p}q^{n}

and adding together \Sigma_{1} and \Sigma_{2} gives us our result.

The Hecke operators can be defined not only for prime numbers but for all natural numbers, and any two Hecke operators T_{m} and T_{n} commute with each other. They preserve the weight of a modular form and take cusp forms to cusp forms (this can be seen via their effect on the Fourier expansion). We can also define Hecke operators for modular forms with level structure, but this is more complicated and has some subtleties when n, the index of the Hecke operator T_{n}, shares a common factor with the level.

If a cusp form f is an eigenvector for a Hecke operator T_{n}, and it is normalized, i.e. its Fourier coefficient a_{1} is equal to 1, then the corresponding eigenvalue of the Hecke operator T_{n} on f is precisely the Fourier coefficient a_{n}.
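
To make this concrete, here is a short Python sketch (just an illustration, with helper names chosen for this post) that computes the first several q-expansion coefficients of the discriminant cusp form \Delta=q\prod_{n\geq 1}(1-q^{n})^{24}, which is a normalized Hecke eigenform of weight 12, applies T_{2} using the coefficient formula above, and checks that the result is a_{2} times the original q-expansion (up to the truncation):

```python
# A minimal sketch illustrating the coefficient formula for T_p and the
# eigenform property, using Delta = q * prod_{n>=1} (1 - q^n)^24,
# a normalized Hecke eigenform of weight 12.

N = 50  # number of q-expansion coefficients to keep (truncation)

def delta_coefficients(N):
    """Coefficients a_0, ..., a_{N-1} of Delta = q * prod (1 - q^n)^24, truncated."""
    series = [0] * N
    series[0] = 1  # start with the constant series 1
    for n in range(1, N):
        for _ in range(24):
            # multiply the truncated series by (1 - q^n)
            new = series[:]
            for i in range(N - n):
                new[i + n] -= series[i]
            series = new
    return [0] + series[:N - 1]  # multiply by q (shift coefficients up by one)

def hecke_Tp(coeffs, p, k):
    """Coefficients of T_p f, where f has weight k and q-expansion coefficients `coeffs`.
    Uses (T_p f)_n = a_{pn} + p^{k-1} a_{n/p}, with a_{n/p} = 0 unless p divides n."""
    M = len(coeffs) // p  # only this many coefficients of T_p f are determined by the truncation
    out = []
    for n in range(M):
        term = coeffs[p * n]
        if n % p == 0:
            term += p ** (k - 1) * coeffs[n // p]
        out.append(term)
    return out

a = delta_coefficients(N)
T2a = hecke_Tp(a, p=2, k=12)

# Since Delta is a normalized eigenform, T_2 Delta should equal a_2 * Delta,
# i.e. each coefficient of T_2 Delta should be a_2 times the corresponding a_n.
print(a[1], a[2], a[3])                                     # 1 -24 252 (Ramanujan tau values)
print(all(T2a[n] == a[2] * a[n] for n in range(len(T2a))))  # True
```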

Now the Hecke operators satisfy the following multiplicativity properties:

  • T_{m}T_{n}=T_{mn} for m and n mutually prime
  • T_{p^{n}}T_{p}=T_{p^{n+1}}+p^{k-1}T_{p^{n-1}} for p prime (a simple consequence is worked out below)
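
For instance, taking n=1 in the second relation, and noting that T_{1} is just the identity, we get

\displaystyle T_{p}T_{p}=T_{p^{2}}+p^{k-1}T_{1},\quad\text{i.e.}\quad T_{p^{2}}=T_{p}^{2}-p^{k-1}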

Suppose we have an L-series \sum_{n}a_{n}n^{-s}. This L-series will have an Euler product (of the specific form written down below) if and only if the coefficients a_{n} satisfy the following:

  • a_{m}a_{n}=a_{mn} for m and n mutually prime
  • a_{p^{n}}a_{p}=a_{p^{n+1}}+p^{k-1}a_{p^{n-1}} for p prime

Given that the Fourier coefficients of a normalized Hecke eigenform (a normalized cusp form that is a simultaneous eigenvector for all the Hecke operators) are the eigenvalues of the Hecke operators, we see that the L-series of a normalized Hecke eigenform has an Euler product.
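
Explicitly, for a normalized Hecke eigenform f=\sum_{n=1}^{\infty}a_{n}q^{n} of weight k, these relations translate into the Euler product

\displaystyle \sum_{n=1}^{\infty}a_{n}n^{-s}=\prod_{p}\frac{1}{1-a_{p}p^{-s}+p^{k-1-2s}}

where the product runs over all prime numbers p.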

In addition to the Hecke operators T_{n}, there are also other closely related operators, such as the diamond operator \langle n\rangle and another operator denoted U_{p}. These, and more about Hecke operators, such as other ways to define them via double coset operators or Hecke correspondences, will hopefully be discussed in future posts.

References:

Hecke Operator on Wikipedia

Modular Forms by Andrew Snowden

Congruences between Modular Forms by Frank Calegari

A First Course in Modular Forms by Fred Diamond and Jerry Shurman

Advanced Topics in the Arithmetic of Elliptic Curves by Joseph H. Silverman

Metric, Norm, and Inner Product

In Vector Spaces, Modules, and Linear Algebra, we defined vector spaces as sets closed under addition and scalar multiplication (in this case the scalars are the elements of a field; if they are elements of a ring which is not a field, we have not a vector space but a module). We have seen since then that the study of vector spaces, linear algebra, is very useful, interesting, and ubiquitous in mathematics.

In this post we discuss vector spaces with some additional structure, which will give them a topology (Basics of Topology and Continuous Functions), giving rise to topological vector spaces. This also leads to the branch of mathematics called functional analysis, which has applications to subjects such as quantum mechanics, aside from being an interesting subject in itself. Two of the important objects of study in functional analysis that we will introduce by the end of this post are Banach spaces and Hilbert spaces.

I. Metric

We start with the concept of a metric. We have to get two things out of the way. First, this is not the same as the metric tensor in differential geometry, although it also gives us a notion of a “distance”. Second, the concept of a metric is not limited to vector spaces, unlike the other two concepts we will discuss in this post. It is actually something that we can put on any set to define a topology, called the metric topology.

As we discussed in Basics of Topology and Continuous Functions, we may think of a topology as an “arrangement”. The notion of “distance” provided by the metric gives us an intuitive such arrangement. We will make this concrete shortly, but first we give the technical definition of the metric. We quote from the book Topology by James R. Munkres:

A metric on a set X is a function

\displaystyle d: X\times X\rightarrow \mathbb{R}

having the following properties:

1) d(x, y)\geq 0 for all x,y \in X; equality holds if and only if x=y.

2) d(x,y)=d(y,x) for all x,y \in X.

3) (Triangle inequality) d(x,y)+d(y,z)\geq d(x,z), for all x,y,z \in X.

We quote from the same book another important definition:

Given a metric d on X, the number d(x, y) is often called the distance between x and y in the metric d. Given \epsilon >0, consider the set

\displaystyle B_{d}(x,\epsilon)=\{y|d(x,y)<\epsilon\}

of all points y whose distance from x is less than \epsilon. It is called the \epsilon-ball centered at x. Sometimes we omit the metric d from the notation and write this ball simply as B(x,\epsilon) when no confusion will arise.

Finally, once more from the same book, we have the definition of the metric topology:

If d is a metric on the set X, then the collection of all \epsilon-balls B_{d}(x,\epsilon), for x\in X and \epsilon>0, is a basis for a topology on X, called the metric topology induced by d.

We recall that a basis of a topology is a collection of open sets such that every other open set can be described as a union of elements of this collection. A set with a specific metric that makes it into a topological space with the metric topology is called a metric space.

An example of a metric on the set \mathbb{R}^{n} is given by the ordinary “distance formula”:

\displaystyle d(x,y)=\sqrt{\sum_{i=1}^{n}(x_{i}-y_{i})^{2}}

Note: We have followed the notation of the book of Munkres, which may be different from the usual notation. Here x and y are two different points on \mathbb{R}^{n}, and x_{i} and y_{i} are their respective coordinates.

The above metric is not the only one possible however. There are many others. For instance, we may simply put

\displaystyle d(x,y)=0 \quad\text{if}\quad x=y

\displaystyle d(x,y)=1 \quad\text{if}\quad x\neq y.

This is called the discrete metric, and one may check that it satisfies the definition of a metric. One may think of it as something that simply specifies the distance from a point to itself as “near”, and the distance to any other point that is not itself as “far”. There is also the taxicab metric, given by the following formula:

\displaystyle d(x,y)=\sum_{i=1}^{n}|x_{i}-y_{i}|

One way to think of the taxicab metric, which reflects the origins of the name, is that it is the “distance” important to taxi drivers (needed to calculate the fare) in a certain city with perpendicular roads. The ordinary distance formula is not very helpful since one needs to stay on the roads – therefore, for example, if one needs to go from point x to point y which are on opposite corners of a square, the distance traversed is not equal to the length of the diagonal, but is instead equal to the length of two sides. Again, one may check that the taxicab metric satisfies the definition of a metric.
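
To make the comparison concrete, here is a short Python sketch (just an illustration, with function names chosen for this post) computing all three metrics we have discussed for the "opposite corners of a square" situation described above:

```python
# A minimal sketch comparing the three metrics on R^n discussed above:
# the ordinary (Euclidean) distance, the discrete metric, and the taxicab metric.
import math

def euclidean(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def discrete(x, y):
    return 0 if x == y else 1

def taxicab(x, y):
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

# Opposite corners of the unit square: the diagonal vs. "two sides"
x, y = (0.0, 0.0), (1.0, 1.0)
print(euclidean(x, y))  # 1.414... (length of the diagonal)
print(taxicab(x, y))    # 2.0     (length of two sides of the square)
print(discrete(x, y))   # 1       ("far", since x != y)
```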

II. Norm

Now we move on to vector spaces (we will consider in this post only vector spaces over the real or complex numbers), and some mathematical concepts that we can associate with them, as suggested in the beginning of this post. Being a set closed under addition and scalar multiplication is already a useful concept, as we have seen, but we can still add on some ideas that make them even more interesting. The notion of a metric that we discussed earlier will show up repeatedly throughout this discussion.

We first discuss the notion of a norm, which gives us a notion of a “magnitude” of a vector. We quote from the book Introductory Functional Analysis with Applications by Erwin Kreyszig for the definition:

A norm on a (real or complex) vector space X is a real valued function on X whose value at an x\in X is denoted by

\displaystyle \|x\|    (read “norm of x“)

and which has the properties

(N1) \|x\|\geq 0

(N2) \|x\|=0\iff x=0

(N3) \|\alpha x\|=|\alpha|\|x\|

(N4) \|x+y\|\leq\|x\|+\|y\|    (triangle inequality)

here x and y are arbitrary vectors in X and \alpha is any scalar.

A vector space with a specified norm is called a normed space.

A norm automatically provides a vector space with a metric; in other words, a normed space is always a metric space. The metric is given in terms of the norm by the following equation:

\displaystyle d(x,y)=\|x-y\|

However, not all metrics come from a norm. An example is the discrete metric, which satisfies the properties of a metric but is not induced by any norm, as we can see as follows.
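
A metric that comes from a norm inherits the scaling property (N3): for any scalar \alpha,

\displaystyle d(\alpha x,\alpha y)=\|\alpha x-\alpha y\|=|\alpha|\,\|x-y\|=|\alpha|\,d(x,y)

whereas the discrete metric gives d(\alpha x,\alpha y)=1=d(x,y) whenever x\neq y and \alpha\neq 0, which is incompatible with this scaling property (take, say, \alpha=2). So the discrete metric on a nonzero vector space cannot be induced by a norm.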

III. Inner Product

Next we discuss the inner product. The inner product gives us a notion of “orthogonality”, a concept which we already saw in action in Some Basics of Fourier Analysis. Intuitively, when two vectors are “orthogonal”, they are “perpendicular” in some sense. However, our geometric intuition may not be as useful when we are discussing, say, the infinite-dimensional vector space whose elements are functions. For this we need a more abstract notion of orthogonality, which is embodied by the inner product. Again, for the technical definition we quote from the book of Kreyszig:

With every pair of vectors x and y there is associated a scalar which is written

\displaystyle \langle x,y\rangle

and is called the inner product of x and y, such that for all vectors x, y, z and scalars \alpha we have

(IPl) \langle x+y,z\rangle=\langle x,z\rangle+\langle y,z\rangle

(IP2) \langle \alpha x,y\rangle=\alpha\langle x,y\rangle

(IP3) \langle x,y\rangle=\overline{\langle y,x\rangle}

(IP4) \langle x,x\rangle\geq 0,    \langle x,x\rangle=0 \iff x=0

A vector space with a specified inner product is called an inner product space.

One of the most basic examples, in the case of a finite-dimensional vector space, is given by the following procedure. Let x and y be elements (vectors) of some n-dimensional real vector space X, with respective components x_{1}, x_{2},...,x_{n} and y_{1},y_{2},...,y_{n} in some basis. Then we can set

\displaystyle \langle x,y\rangle=x_{1}y_{1}+x_{2}y_{2}+...+x_{n}y_{n}

This is the familiar “dot product” taught in introductory university-level mathematics courses.

Let us now see how the inner product gives us a notion of “orthogonality”. To make things even easier to visualize, let us set n=2, so that we are dealing with vectors (which we can now think of as quantities with magnitude and direction) in the plane. A unit vector x pointing “east” has components x_{1}=1, x_{2}=0, while a unit vector y pointing “north” has components y_{1}=0, y_{2}=1. These two vectors are perpendicular, or orthogonal. Computing the inner product we discussed earlier, we have

\displaystyle \langle x,y\rangle=(1)(0)+(0)(1)=0.

We say, therefore, that two vectors are orthogonal when their inner product is zero. As we have mentioned earlier, we can extend this to cases where our geometric intuition may no longer be as useful to us. For example, consider the infinite-dimensional vector space of (real-valued) functions which are “square integrable” over some interval (if we square them and integrate over this interval, we obtain a finite answer), say [0,1]. We set our inner product to be

\displaystyle \langle f,g\rangle=\int_{0}^{1}f(x)g(x)dx.

As an example, let f(x)=\text{cos}(2\pi x) and g(x)=\text{sin}(2\pi x). We say that these functions are “orthogonal”, although it is hard to visualize what this means geometrically. But if we take the inner product, we will see that

\displaystyle \int_{0}^{1}\text{cos}(2\pi x)\text{sin}(2\pi x)dx=0.

Hence we see that \text{cos}(2\pi x) and \text{sin}(2\pi x) are orthogonal. Similarly, we have

\displaystyle \int_{0}^{1}\text{cos}(2\pi x)\text{cos}(4\pi x)dx=0

and \text{cos}(2\pi x) and \text{cos}(4\pi x) are also orthogonal. We have discussed this in more detail in Some Basics of Fourier Analysis. We have also seen in that post that orthogonality plays a big role in the subject of Fourier analysis.
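
For readers who prefer a computational check, here is a short Python sketch (again just an illustration) that approximates these inner products with a simple Riemann sum:

```python
# A minimal sketch: numerically checking the orthogonality relations above by
# approximating the inner product <f, g> = integral of f(x)g(x) over [0, 1].
import math

def inner_product(f, g, num_points=10000):
    """Approximate the integral of f*g over [0, 1] by a Riemann sum."""
    dx = 1.0 / num_points
    return sum(f(i * dx) * g(i * dx) for i in range(num_points)) * dx

cos2 = lambda x: math.cos(2 * math.pi * x)
sin2 = lambda x: math.sin(2 * math.pi * x)
cos4 = lambda x: math.cos(4 * math.pi * x)

print(inner_product(cos2, sin2))  # approximately 0: orthogonal
print(inner_product(cos2, cos4))  # approximately 0: orthogonal
print(inner_product(cos2, cos2))  # approximately 0.5: cos(2*pi*x) is not orthogonal to itself
```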

Just as a norm always induces a metric, an inner product also induces a norm, and by extension also a metric. In other words, an inner product space is also a normed space, and also a metric space. The norm is given in terms of the inner product by the following expression:

\displaystyle \|x\|=\sqrt{\langle x,x\rangle}

Just as with the norm and the metric, although an inner product always induces a norm, not every norm is induced by an inner product.
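
In fact, there is a well-known criterion for this, which we state here without proof: a norm is induced by an inner product if and only if it satisfies the parallelogram law

\displaystyle \|x+y\|^{2}+\|x-y\|^{2}=2\|x\|^{2}+2\|y\|^{2}

for all vectors x and y. The “taxicab norm” \|x\|=\sum_{i=1}^{n}|x_{i}| on \mathbb{R}^{n}, for instance, fails this (take x and y to be two different standard basis vectors), and so does not come from any inner product.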

IV. Banach Spaces and Hilbert Spaces

There is one more concept I want to discuss in this post. In Valuations and Completions, we discussed Cauchy sequences and completions. Those concepts still carry over here, because they are actually part of the study of metric spaces (in fact, the absolute values associated to the valuations discussed in that post give metrics on the fields that were discussed, showing how the concepts of metric and metric spaces make an appearance even in number theory). If every Cauchy sequence in a metric space X converges to an element of X, then we say that X is a complete metric space.

Since normed spaces and inner product spaces are also metric spaces, the notion of a complete metric space still makes sense, and we have special names for them. A normed space which is also a complete metric space is called a Banach space, while an inner product space which is also a complete metric space is called a Hilbert space. Finite-dimensional vector spaces (over the real or complex numbers) are always complete, and therefore we only really need the distinction when we are dealing with infinite dimensional vector spaces.

Banach spaces and Hilbert spaces are important in quantum mechanics. We recall in Some Basics of Quantum Mechanics that the possible states of a system in quantum mechanics form a vector space. However, more is true – they actually form a Hilbert space, and the states that we can observe “classically” are orthogonal to each other. The Dirac “bra-ket” notation that we have discussed makes use of the inner product to express probabilities.

Meanwhile, Banach spaces often arise when studying operators, which correspond to observables such as position and momentum. Of course the states form Banach spaces too, since all Hilbert spaces are Banach spaces, but there is much motivation to study the Banach spaces formed by the operators as well, instead of just those formed by the states. This is an important aspect of the more mathematically involved treatments of quantum mechanics.

References:

Topological Vector Space on Wikipedia

Functional Analysis on Wikipedia

Metric on Wikipedia

Norm on Wikipedia

Inner Product Space on Wikipedia

Complete Metric Space on Wikipedia

Banach Space on Wikipedia

Hilbert Space on Wikipedia

A Functional Analysis Primer on Bahcemizi Yetistermeliyiz

Topology by James R. Munkres

Introductory Functional Analysis with Applications by Erwin Kreyszig

Real Analysis by Halsey Royden