Some Useful Links: Knots in Physics and Number Theory

In modern times, “knots” have been important objects of study in mathematics. These “knots” are akin to the ones we encounter in ordinary life, except that they don’t have loose ends. For a better idea of what I mean, consider the following picture of what is known as a “trefoil knot”:

[Figure: a trefoil knot]

More technically, a knot is defined as the embedding of a circle in 3-dimensional space. For more details on the theory of knots, the reader is referred to the following Wikipedia pages:

Knot on Wikipedia

Knot Theory on Wikipedia

One of the reasons why knots have become such a major part of modern mathematical research is because of the work of mathematical physicists such as Edward Witten, who has related them to the Feynman path integral in quantum mechanics (see Lagrangians and Hamiltonians).

Witten, who is very famous for his work on string theory (see An Intuitive Introduction to String Theory and (Homological) Mirror Symmetry) and for being the first, and so far only, physicist to win the prestigious Fields medal, himself explains the relationship between knot theory and quantum mechanics in the following article:

Knots and Quantum Theory by Edward Witten

But knots have also appeared in other branches of mathematics. For example, in number theory, the result in etale cohomology known as Artin-Verdier duality states that the integers are similar to a 3-dimensional object in some sense. In particular, because it has a trivial etale fundamental group (which is kind of an algebraic analogue of the fundamental group discussed in Homotopy Theory and Covering Spaces), it is similar to a 3-sphere (recall the common but somewhat confusing notation that the ordinary sphere we encounter in everyday life is called the 2-sphere, while a circle is also called the 1-sphere).

Note: The fact that a closed 3-dimensional space with a trivial fundamental group is a 3-sphere is the content of a very famous conjecture known as the Poincare conjecture, proved by Grigori Perelman in the early 2000’s.  Perelman refused the million-dollar prize that was supposed to be his reward, as well as the Fields medal.

The prime numbers, because their associated finite fields have exactly one cover for every positive integer (just as the circle does), are like circles, and recalling the definition of knots mentioned above, are therefore like knots on this 3-sphere. This analogy, originally developed by David Mumford and Barry Mazur, is better explained in the following post by Lieven le Bruyn on his blog neverendingbooks:

What is the Knot Associated to a Prime on neverendingbooks

Finally, given what we have discussed, could it be that knot theory can “tie together” (pun intended) physics and number theory? This is the motivation behind the new subject called “arithmetic Chern-Simons theory” which is introduced in the following paper by Minhyong Kim:

Arithmetic Chern-Simons Theory I by Minhyong Kim

Of course, it must also be clarified that this is not the only way by which physics and number theory are related. It is merely another way, a new and not yet thoroughly explored one, by which the unity of mathematics manifests itself via its many different branches helping one another.

Some Useful Links: Quantum Gravity Seminar by John Baez

I have not been able to make posts tackling physics in a while, since I have lately been focusing my efforts on some purely mathematical stuff which I’m trying very hard to understand. Hence my last few posts have been quite focused mostly on algebraic geometry and category theory. Such might perhaps be the trend in the coming days, although of course I still want to make more posts on physics at some point.

Of course, the “purely mathematical” stuff I’ve been posting about is still very much related to physics. For instance, in this post I’m going to link to a webpage collecting notes from seminars by mathematical physicist John Baez on the subject of quantum gravity – and much of it involves concepts from subjects like category theory and algebraic topology (for more on the basics of these subjects from this blog, see Category Theory, Homotopy Theory, and Homology and Cohomology).

Here’s the link:

Seminar by John Baez

As Baez himself says on the page, however, quantum gravity is not the only subject tackled in his seminars. Other subjects include topological quantum field theory, quantization, and gauge theory, among many others.

John Baez also has lots of other useful stuff on his website. One of the earliest mathematics and mathematical physics blogs on the internet is This Week’s Finds in Mathematical Physics, which goes back all the way to 1993, and is one of the inspirations for this blog:

This Week’s Finds in Mathematical Physics by John Baez

Many of the posts on This Week’s Finds in Mathematical Physics show the countless fruitful, productive, and beautiful interactions between mathematics and physics. This is also one of the main goals of this blog – reflected even by the posts which have been focused on mostly “purely mathematical” stuff.

Metric, Norm, and Inner Product

In Vector Spaces, Modules, and Linear Algebra, we defined vector spaces as sets closed under addition and scalar multiplication (in this case the scalars are the elements of a field; if they are elements of a ring which is not a field, we have not a vector space but a module). We have seen since then that the study of vector spaces, linear algebra, is very useful, interesting, and ubiquitous in mathematics.

In this post we discuss vector spaces with some more additional structure – which will give them a topology (Basics of Topology and Continuous Functions), giving rise to topological vector spaces. This also leads to the branch of mathematics called functional analysis, which has applications to subjects such as quantum mechanics, aside from being an interesting subject in itself. Two of the important objects of study in functional analysis that we will introduce by the end of this post are Banach spaces and Hilbert spaces.

I. Metric

We start with the concept of a metric. We have to get two things out of the way. First, this is not the same as the metric tensor in differential geometry, although it also gives us a notion of a “distance”. Second, the concept of metric is not limited to vector spaces only, unlike the other two concepts we will discuss in this post. It is actually something that we can put on a set to define a topology, called the metric topology.

As we discussed in Basics of Topology and Continuous Functions, we may think of a topology as an “arrangement”. The notion of “distance” provided by the metric gives us an intuitive such arrangement. We will make this concrete shortly, but first we give the technical definition of the metric. We quote from the book Topology by James R. Munkres:

A metric on a set X is a function

\displaystyle d: X\times X\rightarrow \mathbb{R}

having the following properties:

1) d(x, y)\geq 0 for all x,y \in X; equality holds if and only if x=y.

2) d(x,y)=d(y,x) for all x,y \in X.

3) (Triangle inequality) d(x,y)+d(y,z)\geq d(x,z), for all x,y,z \in X.

We quote from the same book another important definition:

Given a metric d on X, the number d(x, y) is often called the distance between x and y in the metric d. Given \epsilon >0, consider the set

\displaystyle B_{d}(x,\epsilon)=\{y|d(x,y)<\epsilon\}

of all points y whose distance from x is less than \epsilon. It is called the \epsilon-ball centered at x. Sometimes we omit the metric d from the notation and write this ball simply as B(x,\epsilon) when no confusion will arise.

Finally, once more from the same book, we have the definition of the metric topology:

If d is a metric on the set X, then the collection of all \epsilon-balls B_{d}(x,\epsilon), for x\in X and \epsilon>0, is a basis for a topology on X, called the metric topology induced by d.

We recall that the basis of a topology is a collection of open sets such that every other open set can be described as a union of the elements of this collection. A set with a specific metric that makes it into a topological space with the metric topology is called a metric space.

An example of a metric on the set \mathbb{R}^{n} is given by the ordinary “distance formula”:

\displaystyle d(x,y)=\sqrt{\sum_{i=1}^{n}(x_{i}-y_{i})^{2}}

Note: We have followed the notation of the book of Munkres, which may be different from the usual notation. Here x and y are two different points on \mathbb{R}^{n}, and x_{i} and y_{i} are their respective coordinates.

The above metric is not the only one possible however. There are many others. For instance, we may simply put

\displaystyle d(x,y)=0 if \displaystyle x=y

\displaystyle d(x,y)=1 if \displaystyle x\neq y.

This is called the discrete metric, and one may check that it satisfies the definition of a metric. One may think of it as something that simply specifies the distance from a point to itself as “near”, and the distance to any other point that is not itself as “far”. There is also the taxicab metric, given by the following formula:

\displaystyle d(x,y)=\sum_{i=1}^{n}|x_{i}-y_{i}|

One way to think of the taxicab metric, which reflects the origins of the name, is that it is the “distance” important to taxi drivers (needed to calculate the fare) in a certain city with perpendicular roads. The ordinary distance formula is not very helpful since one needs to stay on the roads – therefore, for example, if one needs to go from point x to point y which are on opposite corners of a square, the distance traversed is not equal to the length of the diagonal, but is instead equal to the length of two sides. Again, one may check that the taxicab metric satisfies the definition of a metric.
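
To make these definitions concrete, here is a small Python sketch (the point values are made up for illustration) that computes the Euclidean, taxicab, and discrete metrics on the plane and checks the triangle inequality for the “opposite corners of a square” picture above.

```python
import math

def euclidean(x, y):
    # the ordinary "distance formula"
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def taxicab(x, y):
    # the sum of the absolute differences of the coordinates
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def discrete(x, y):
    # 0 if the points coincide, 1 otherwise
    return 0 if x == y else 1

# x and y are opposite corners of a unit square, z is a corner in between
x, y, z = (0, 0), (1, 1), (1, 0)

for d in (euclidean, taxicab, discrete):
    print(d.__name__, d(x, y), "triangle inequality:", d(x, y) <= d(x, z) + d(z, y))
```

The taxicab distance between the opposite corners comes out as 2 (the length of two sides), while the Euclidean distance is the length of the diagonal, \sqrt{2}.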

II. Norm

Now we move on to vector spaces (we will consider in this post only vector spaces over the real or complex numbers), and some mathematical concepts that we can associate with them, as suggested in the beginning of this post. Being a set closed under addition and scalar multiplication is already a useful concept, as we have seen, but we can still add on some ideas that would make them even more interesting. The notion of metric that we have discussed earlier will show up repeatedly over this discussion.

We first discuss the notion of a norm, which gives us a notion of a “magnitude” of a vector. We quote from the book Introductory Functional Analysis with Applications by Erwin Kreyszig for the definition:

A norm on a (real or complex) vector space X is a real valued function on X whose value at an x\in X is denoted by

\displaystyle \|x\|    (read “norm of x“)

and which has the properties

(N1) \|x\|\geq 0

(N2) \|x\|=0\iff x=0

(N3) \|\alpha x\|=|\alpha|\|x\|

(N4) \|x+y\|\leq\|x\|+\|y\|    (triangle inequality)

here x and y are arbitrary vectors in X and \alpha is any scalar.

A vector space with a specified norm is called a normed space.

A norm automatically provides a vector space with a metric; in other words, a normed space is always a metric space. The metric is given in terms of the norm by the following equation:

\displaystyle d(x,y)=\|x-y\|

However, not all metrics come from a norm. An example is the discrete metric: a metric that comes from a norm must satisfy d(\alpha x,\alpha y)=|\alpha|d(x,y) (a consequence of property (N3) above), and the discrete metric clearly violates this, since scaling two distinct points by \alpha=2 leaves their distance equal to 1 instead of doubling it.

III. Inner Product

Next we discuss the inner product. The inner product gives us a notion of “orthogonality”, a concept which we already saw in action in Some Basics of Fourier Analysis. Intuitively, when two vectors are “orthogonal”, they are “perpendicular” in some sense. However, our geometric intuition may not be as useful when we are discussing, say, the infinite-dimensional vector space whose elements are functions. For this we need a more abstract notion of orthogonality, which is embodied by the inner product. Again, for the technical definition we quote from the book of Kreyszig:

With every pair of vectors x and y there is associated a scalar which is written

\displaystyle \langle x,y\rangle

and is called the inner product of x and y, such that for all vectors x, y, z and scalars \alpha we have

(IPl) \langle x+y,z\rangle=\langle x,z\rangle+\langle y,z\rangle

(IP2) \langle \alpha x,y\rangle=\alpha\langle x,y\rangle

(IP3) \langle x,y\rangle=\overline{\langle y,x\rangle}

(IP4) \langle x,x\rangle\geq 0,    \langle x,x\rangle=0 \iff x=0

A vector space with a specified inner product is called an inner product space.

One of the most basic examples, in the case of a finite-dimensional vector space, is given by the following procedure. Let x and y be elements (vectors) of some n-dimensional real vector space X, with respective components x_{1}, x_{2},...,x_{n} and y_{1},y_{2},...,y_{n} in some basis. Then we can set

\displaystyle \langle x,y\rangle=x_{1}y_{1}+x_{2}y_{2}+...+x_{n}y_{n}

This is the familiar “dot product” taught in introductory university-level mathematics courses.

Let us now see how the inner product gives us a notion of “orthogonality”. To make things even easier to visualize, let us set n=2, so that we are dealing with vectors (which we can now think of as quantities with magnitude and direction) in the plane. A unit vector x pointing “east” has components x_{1}=1, x_{2}=0, while a unit vector y pointing “north” has components y_{1}=0, y_{2}=1. These two vectors are perpendicular, or orthogonal. Computing the inner product we discussed earlier, we have

\displaystyle \langle x,y\rangle=(1)(0)+(0)(1)=0.

We say, therefore, that two vectors are orthogonal when their inner product is zero. As we have mentioned earlier, we can extend this to cases where our geometric intuition may no longer be as useful to us. For example, consider the infinite dimensional vector space of (real-valued) functions which are “square integrable” over some interval (if we square them and integrate over this interval, we have a finite answer), say [0,1]. We set our inner product to be

\displaystyle \int_{0}^{1}f(x)g(x)dx.

As an example, let f(x)=\text{cos}(2\pi x) and g(x)=\text{sin}(2\pi x). We say that these functions are “orthogonal”, but it is hard to imagine in what way. But if we take the inner product, we will see that

\displaystyle \int_{0}^{1}\text{cos}(2\pi x)\text{sin}(2\pi x)dx=0.

Hence we see that \text{cos}(2\pi x) and \text{sin}(2\pi x) are orthogonal. Similarly, we have

\displaystyle \int_{0}^{1}\text{cos}(2\pi x)\text{cos}(4\pi x)dx=0

and \text{cos}(2\pi x) and \text{cos}(4\pi x) are also orthogonal. We have discussed this in more detail in Some Basics of Fourier Analysis. We have also seen in that post that orthogonality plays a big role in the subject of Fourier analysis.
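
These integrals are easy to check numerically. The following Python sketch approximates the inner product defined above by a simple Riemann sum (the grid size is arbitrary, chosen only to make the approximation good; exact integration would of course give exactly zero).

```python
import numpy as np

# sample the interval [0, 1) finely and approximate the integral by a Riemann sum
x, dx = np.linspace(0.0, 1.0, 100000, endpoint=False, retstep=True)

def inner(f, g):
    # the inner product <f, g> = integral over [0, 1] of f(x) g(x) dx
    return np.sum(f(x) * g(x)) * dx

cos2 = lambda u: np.cos(2 * np.pi * u)
sin2 = lambda u: np.sin(2 * np.pi * u)
cos4 = lambda u: np.cos(4 * np.pi * u)

print(inner(cos2, sin2))  # approximately 0: the two functions are orthogonal
print(inner(cos2, cos4))  # approximately 0: also orthogonal
print(inner(cos2, cos2))  # approximately 0.5: a nonzero function is never orthogonal to itself
```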

Just as a norm always induces a metric, an inner product also induces a norm, and by extension also a metric. In other words, an inner product space is also a normed space, and also a metric space. The norm is given in terms of the inner product by the following expression:

\displaystyle \|x\|=\sqrt{\langle x,x\rangle}

Just as with the norm and the metric, although an inner product always induces a norm, not every norm is induced by an inner product.

IV. Banach Spaces and Hilbert Spaces

There is one more concept I want to discuss in this post. In Valuations and Completions, we discussed Cauchy sequences and completions. Those concepts still carry on here, because they are actually part of the study of metric spaces (in fact, the valuations discussed in that post actually serve as a metric on the fields that were discussed, showing how in number theory the concept of metric and metric spaces still make an appearance). If every Cauchy sequence in a metric space X converges to an element in X, then we say that X is a complete metric space.

Since normed spaces and inner product spaces are also metric spaces, the notion of a complete metric space still makes sense, and we have special names for them. A normed space which is also a complete metric space is called a Banach space, while an inner product space which is also a complete metric space is called a Hilbert space. Finite-dimensional vector spaces (over the real or complex numbers) are always complete, and therefore we only really need the distinction when we are dealing with infinite dimensional vector spaces.

Banach spaces and Hilbert spaces are important in quantum mechanics. We recall in Some Basics of Quantum Mechanics that the possible states of a system in quantum mechanics form a vector space. However, more is true – they actually form a Hilbert space, and the states that we can observe “classically” are orthogonal to each other. The Dirac “bra-ket” notation that we have discussed makes use of the inner product to express probabilities.

Meanwhile, Banach spaces often arise when studying operators, which correspond to observables such as position and momentum. Of course the states form Banach spaces too, since all Hilbert spaces are Banach spaces, but there is much motivation to study the Banach spaces formed by the operators as well instead of just that formed by the states. This is an important aspect of the more mathematically involved treatments of quantum mechanics.

References:

Topological Vector Space on Wikipedia

Functional Analysis on Wikipedia

Metric on Wikipedia

Norm on Wikipedia

Inner Product Space on Wikipedia

Complete Metric Space on Wikipedia

Banach Space on Wikipedia

Hilbert Space on Wikipedia

A Functional Analysis Primer on Bahcemizi Yetistermeliyiz

Topology by James R. Munkres

Introductory Functional Analysis with Applications by Erwin Kreyszig

Real Analysis by Halsey Royden

Some Basics of (Quantum) Electrodynamics

There are only four fundamental forces as far as we know, and every force that we know of can ultimately be considered as manifestations of these four. These four are electromagnetism, the weak nuclear force, the strong nuclear force, and gravity. Among them, the one we are most familiar with is electromagnetism, both in terms of our everyday experience (where it is somewhat on par with gravity) and in terms of our physical theories (where our understanding of electrodynamics is far ahead of our understanding of the other three forces, including, and especially, gravity).

Electromagnetism is dominant in everyday life because the weak and strong nuclear forces have a very short range, and because gravity is very weak. Now gravity doesn’t seem weak at all, especially if we have experienced falling on our face at some point in our lives. But that’s only because the “source” of this gravity, our planet, is very large. But imagine a small pocket-sized magnet lifting, say an iron nail, against the force exerted by the Earth’s gravity. This shows how much stronger the electromagnetic force is compared to gravity. Maybe we should be thankful that gravity is not on the same level of strength, or falling on our face would be so much more painful.

It is also important to note that atoms, which make up everyday matter, are themselves made up of charged particles – electrons and protons (there are also neutrons, which are uncharged). Electromagnetism therefore plays an important part, not only in keeping the “parts” of an atom together, but also in “joining” different atoms together to form molecules, and other larger structures like crystals. It might be gravity that keeps our planet together, but for less massive objects like a house, or a car, or a human body, it is electromagnetism that keeps them from falling apart.

Aside from electromagnetism being the one fundamental force we are most familiar with, another reason to study it is that it is the “template” for our understanding of the rest of the fundamental forces, including gravity. In Einstein’s general theory of relativity, gravity is the curvature of spacetime; it appears that this gives it a nature different from the other fundamental forces. But even then, the expression for this curvature, in terms of the Riemann curvature tensor, is very similar in form to the equation for the electromagnetic fields in terms of the field strength tensor.

The electromagnetic fields, which we shall divide into the electric field and the magnetic field, are vector fields (see Vector Fields, Vector Bundles, and Fiber Bundles), which means that they have a value (both magnitude and direction) at every point in space. A charged particle in an electric or magnetic field (or both) will experience a force according to the Lorentz force law:

\displaystyle F_{x}=q(E_{x}+v_{y}B_{z}-v_{z}B_{y})

\displaystyle F_{y}=q(E_{y}+v_{z}B_{x}-v_{x}B_{z})

\displaystyle F_{z}=q(E_{z}+v_{x}B_{y}-v_{y}B_{x})

where F_{x}, F_{y}, and F_{z} are the three components of the force, in the x, y, and z direction, respectively; E_{x}, E_{y}, and E_{z} are the three components of the electric field;  B_{x}, B_{y}, B_{z} are the three components of the magnetic field; v_{x}, v_{y}, v_{z} are the three components of the velocity of the particle, and q is its charge. Newton’s second law (see My Favorite Equation in Physics) gives us the motion of an object given the force acting on it (and its mass), so together with the Lorentz force law, we can determine the motion of charged particles in electric and magnetic fields.

The Lorentz force law is extremely important in electrodynamics and we will keep the following point in mind throughout this discussion:

The Lorentz force law tells us how charges move under the influence of electric and magnetic fields.
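
As a small illustration, here is a Python sketch of the Lorentz force law in the component form written above (the charge, fields, and velocity values are made up, and units are left unspecified); the three components are exactly those of q(\mathbf{E}+\mathbf{v}\times\mathbf{B}).

```python
import numpy as np

def lorentz_force(q, E, B, v):
    # the component formulas above are exactly the components of q * (E + v x B)
    return q * (E + np.cross(v, B))

# made-up values: unit charge, no electric field, magnetic field along z,
# particle moving along x
q = 1.0
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])

print(lorentz_force(q, E, B, v))  # [ 0. -1.  0.]: the force is perpendicular to the velocity
```

Feeding this force into Newton’s second law and integrating over time would then trace out the familiar circular motion of a charged particle in a uniform magnetic field.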

Instead of discussing electrodynamics in terms of these fields, however, we will instead focus on the electric and magnetic potentials, which together form what is called the four-potential and are related to the fields in terms of the following equations:

\displaystyle E_{x}=-\frac{1}{c}\frac{\partial A_{x}}{\partial t}-\frac{\partial A_{t}}{\partial x}

\displaystyle E_{y}=-\frac{1}{c}\frac{\partial A_{y}}{\partial t}-\frac{\partial A_{t}}{\partial y}

\displaystyle E_{z}=-\frac{1}{c}\frac{\partial A_{z}}{\partial t}-\frac{\partial A_{t}}{\partial z}

\displaystyle B_{x}=\frac{\partial A_{z}}{\partial y}-\frac{\partial A_{y}}{\partial z}

\displaystyle B_{y}=\frac{\partial A_{x}}{\partial z}-\frac{\partial A_{z}}{\partial x}

\displaystyle B_{z}=\frac{\partial A_{y}}{\partial x}-\frac{\partial A_{x}}{\partial y}
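
To see these expressions at work, here is a short sympy sketch with a made-up four-potential: only A_{x} is nonzero, and it is a plane wave traveling in the z direction (the symbols k and \omega are purely illustrative parameters, not anything fixed by the theory).

```python
import sympy as sp

t, x, y, z, c, k, w = sp.symbols('t x y z c k omega', positive=True)

# a made-up four-potential: only A_x is nonzero, a plane wave traveling along z
A_t, A_y, A_z = sp.Integer(0), sp.Integer(0), sp.Integer(0)
A_x = sp.cos(k * z - w * t)

# two of the field components, computed from the expressions above
E_x = -sp.diff(A_x, t) / c - sp.diff(A_t, x)
B_y = sp.diff(A_x, z) - sp.diff(A_z, x)

print(E_x)  # -omega*sin(k*z - omega*t)/c
print(B_y)  # -k*sin(k*z - omega*t)
```

The result is an electric field along x and a magnetic field along y, both perpendicular to the direction of travel, which is the basic picture of an electromagnetic wave.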

The values of the potentials, as functions of space and time, are related to the distribution of charges and currents by the very famous set of equations called Maxwell’s equations:

\displaystyle -\frac{\partial^{2} A_{t}}{\partial x^{2}}-\frac{\partial^{2} A_{t}}{\partial y^{2}}-\frac{\partial^{2} A_{t}}{\partial z^{2}}-\frac{\partial^{2} A_{x}}{\partial t\partial x}-\frac{\partial^{2} A_{y}}{\partial t\partial y}-\frac{\partial^{2} A_{z}}{\partial t\partial z}=\frac{4\pi}{c}J_{t}

\displaystyle \frac{1}{c^{2}}\frac{\partial^{2} A_{x}}{\partial t^{2}}-\frac{\partial^{2} A_{x}}{\partial y^{2}}-\frac{\partial^{2} A_{x}}{\partial z^{2}}+\frac{1}{c}\frac{\partial^{2} A_{t}}{\partial x\partial t}+\frac{\partial^{2} A_{y}}{\partial x\partial y}+\frac{\partial^{2} A_{z}}{\partial x\partial z}=\frac{4\pi}{c}J_{x}

\displaystyle -\frac{\partial^{2} A_{y}}{\partial x^{2}}+\frac{1}{c^{2}}\frac{\partial^{2} A_{y}}{\partial t^{2}}-\frac{\partial^{2} A_{y}}{\partial z^{2}}+\frac{\partial^{2} A_{x}}{\partial y\partial x}+\frac{1}{c}\frac{\partial^{2} A_{t}}{\partial t\partial y}+\frac{\partial^{2} A_{z}}{\partial y\partial z}=\frac{4\pi}{c}J_{y}

\displaystyle -\frac{\partial^{2} A_{z}}{\partial x^{2}}-\frac{\partial^{2} A_{z}}{\partial y^{2}}+\frac{1}{c^{2}}\frac{\partial^{2} A_{z}}{\partial t^{2}}+\frac{\partial^{2} A_{x}}{\partial z\partial x}+\frac{\partial^{2} A_{y}}{\partial z\partial y}+\frac{1}{c}\frac{\partial^{2} A_{t}}{\partial z\partial t}=\frac{4\pi}{c}J_{z}

Some readers may be more familiar with Maxwell’s equations written in terms of the electric and magnetic fields; in that case, they have individual names: Gauss’ law, Gauss’ law for magnetism, Faraday’s law, and Ampere’s law (with Maxwell’s addition). When written down in terms of the fields, they can offer more physical intuition – for instance, Gauss’ law for magnetism tells us that the magnetic field has no “divergence”, and is always “solenoidal”. However, we leave this approach to the references for the moment, and focus on the potentials, which will be more useful for us when we relate our discussion to quantum mechanics later on. We will, however, always remind ourselves of the following important point:

Maxwell’s equations tell us the configuration and evolution of the electric and magnetic fields (possibly via the potentials) under the influence of sources (charge and current distributions).

There is one catch (an extremely interesting one) that comes about when dealing with potentials instead of fields. It is called gauge freedom, and is one of the foundations of modern particle physics. However, we will not discuss it in this post. Our equations will remain correct, so the reader need not worry; gauge freedom is not a constraint, but is instead a kind of “symmetry” that will have some very interesting consequences. This concept is left to the references for now, however it is hoped that it will at some time be discussed in this blog.

The way we have written down Maxwell’s equations is rather messy. However, we can introduce some notation to write them in a more elegant form. We use what is known as tensor notation; however, we will not discuss the concept of tensors in full here. We will just note that because the formula for the spacetime interval contains a sign different from the others, we need two different types of indices for our vectors. The so-called contravariant vectors will be indexed by a superscript, while the so-called covariant vectors will be indexed by a subscript. “Raising” and “lowering” these indices will involve a change in sign for some quantities; we will indicate them explicitly here.

Let x^{0}=ct, x^{1}=x, x^{2}=y, x^{3}=z. Then we will adopt the following notation:

\displaystyle \partial_{\mu}=\frac{\partial}{\partial x^{\mu}}

\displaystyle \partial^{\mu}=\frac{\partial}{\partial x^{\mu}} for \mu=0

\displaystyle \partial^{\mu}=-\frac{\partial}{\partial x^{\mu}} for \mu\neq 0

Let A^{0}=A_{t}, A^{1}=A_{x}, A^{2}=A_{y}, A^{3}=A_{z}. Then Maxwell’s equations can be written as

\displaystyle \sum_{\mu=0}^{3}\partial_{\mu}(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu})=\frac{4\pi}{c}J^{\nu}.

We now introduce the so-called Einstein summation convention. Note that the summation is performed over the index that is repeated, and that one of these indices is a superscript and the other is a subscript. Albert Einstein noticed that almost all summations in his calculations happen in this way, so he adopted the convention that instead of explicitly writing out the summation sign, repeated indices (one superscript and one subscript) would instead indicate that a summation should be performed. Like most modern references, we adopt this notation, and only explicitly say so when there is an exception. This allows us to write Maxwell’s equations as

\displaystyle \partial_{\mu}(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu})=\frac{4\pi}{c}J^{\nu}.
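
Incidentally, summing over a repeated upper-lower pair of indices is precisely the kind of contraction that numpy’s einsum function expresses, which may help readers who like to think computationally. In the sketch below the arrays are random placeholders, not physical fields: d stands in for a quantity with one index and F for a quantity with two indices.

```python
import numpy as np

d = np.random.rand(4)      # placeholder for a quantity d_mu with one index
F = np.random.rand(4, 4)   # placeholder for a quantity F^{mu nu} with two indices

# the repeated index mu is summed over, leaving the free index nu
explicit = np.array([sum(d[mu] * F[mu, nu] for mu in range(4)) for nu in range(4)])

# the same contraction, written in "summation convention" style
contracted = np.einsum('m,mn->n', d, F)

print(np.allclose(explicit, contracted))  # True
```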

We can also use the Einstein summation convention to rewrite other important expressions in physics in more compact form. In particular, it allows us to rewrite the Dirac equation (see Some Basics of Relativistic Quantum Field Theory) as follows:

\displaystyle i\hbar\gamma^{\mu}\partial_{\mu}\psi-mc\psi=0

We now go to the quantum realm and discuss the equations of motion of quantum electrodynamics. Let A_{0}=A_{t}, A_{1}=-A_{x}, A_{2}=-A_{y}, A_{3}=-A_{z}. These equations are given by

\displaystyle i\hbar\gamma^{\mu}\partial_{\mu}\psi-mc\psi=\frac{q}{c}\gamma^{\mu}A_{\mu}\psi

\displaystyle \partial_{\mu}(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu})=4\pi q\bar{\psi}\gamma^{\nu}\psi

What do these two equations mean?

The first equation looks like the Dirac equation, except that on the right hand side we have a term with both the “potential” (which we now call the Maxwell field, or the Maxwell field operator), the Dirac “wave function” for a particle such as an electron (which, as we have discussed in Some Basics of Relativistic Quantum Field Theory, is actually the Dirac field operator which operates on the “vacuum” state to describe a state with a single electron), as well as the charge. It describes the “motion” of the Dirac field under the influence of the Maxwell field. Hence, this is the quantum mechanical version of the Lorentz force law.

The second equation is none other than our shorthand version of Maxwell equations, and on the right hand side is an explicit expression for the current in terms of the Dirac field and some constants. The symbol \bar{\psi} refers to the “adjoint” of the Dirac field; actually the Dirac field itself has components, although, because of the way it transforms under rotations, we usually do not refer to it as a vector. Hence it can be written as a column matrix (see Matrices), and has a “transpose” which is a row matrix; the “adjoint” is given by the “conjugate transpose” which is a row matrix where all the entries are the complex conjugates of the transpose of the Dirac field.

In general relativity there is this quote, from the physicist John Archibald Wheeler: “Spacetime tells matter how to move; matter tells spacetime how to curve”. One can perhaps think of electrodynamics, whether classical or quantum, in a similar way. Fields tell charges and currents how to move, charges and currents tell fields how they are supposed to be “shaped”. And this is succinctly summarized by the Lorentz force law and Maxwell’s equations, again whether in its classical or quantum version.

As we have seen in Lagrangians and Hamiltonians, the equations of motion are not the only way we can express a physical theory. We can also use the language of Lagrangians and Hamiltonians. In particular, an important quantity in quantum mechanics that involves the Lagrangian and Hamiltonian is the probability amplitude. In order to calculate the probability amplitude, the physicist Richard Feynman developed a method involving the now famous Feynman diagrams, which can be thought of as expanding the exponential function (see “The Most Important Function in Mathematics”) in the expression for the probability amplitude and expressing the different terms using diagrams. Just as we have associated the Dirac field to electrons, the Maxwell field is similarly associated to photons. Expressions involving the Dirac field and the Maxwell field can be thought of as electrons “emitting” or “absorbing” photons, or electrons and positrons (the antimatter counterpart of electrons) annihilating each other and creating a photon. The calculated probability amplitudes can then be used to obtain quantities that can be compared to results obtained from experiment, in order to verify the theory.

References:

Lorentz Force on Wikipedia

Electromagnetic Four-Potential on Wikipedia

Maxwell’s Equations on Wikipedia

Quantum Electrodynamics on Wikipedia

Featured Image Produced by CERN

The Douglas Robb Memorial Lectures by Richard Feynman

QED: The Strange Theory of Light and Matter by Richard Feynman

Introduction to Electrodynamics by David J. Griffiths

Introduction to Elementary Particle Physics by David J. Griffiths

Quantum Field Theory by Fritz Mandl and Graham Shaw

Introduction to Quantum Field Theory by Michael Peskin and Daniel V. Schroeder

Some Basics of Relativistic Quantum Field Theory

So far, on this blog, we have introduced the two great pillars of modern physics, relativity (see From Pythagoras to Einstein) and quantum mechanics (see Some Basics of Quantum Mechanics and More Quantum Mechanics: Wavefunctions and Operators). Although a complete unification between these two pillars is yet to be achieved, there already exists such a unified theory in the special case when gravity is weak, i.e. spacetime is flat. This unification of relativity (in this case special relativity) and quantum mechanics is called relativistic quantum field theory, and we discuss the basic concepts of it in this post.

In From Pythagoras to Einstein, we introduced the formula at the heart of Einstein’s theory of relativity. It is very important to modern physics and is worth writing here again:

\displaystyle -(c\Delta t)^2+(\Delta x)^2+(\Delta y)^2+(\Delta z)^2=(\Delta s)^2

This holds only for flat spacetime, however, even in general relativity, where spacetime may be curved, a “local” version still holds:

\displaystyle -(cdt)^2+(dx)^2+(dy)^2+(dz)^2=(ds)^2

The notation comes from calculus (see An Intuitive Introduction to Calculus), and means that this equation holds when the quantities involved are very small.

In this post, however, we shall consider flat spacetime only. Aside from holding “locally”, the flat-spacetime formula is, as far as we know, a very good approximation in regions where gravity is not very strong (like on our planet), where spacetime is pretty much actually flat.

We recall how we obtained the important equation above; we made an analogy with the distance between two objects in 3D space, and noted how this distance does not change with translation and rotation; if we are using different coordinate systems, we may disagree about the coordinates of the two objects, but even then we will always agree on the distance between them. This distance is therefore “invariant”. But we live not only in a 3D space but in a 4D spacetime, and instead of an invariant distance we have an invariant spacetime interval.

But even in nonrelativistic mechanics, the distance is not the only “invariant”. We have the concept of velocity of an object. Again, if we are positioned and oriented differently in space, we may disagree about the velocity of the object; for me it may be going to the right and forward, away from me, while for you it may be in front of you and going straight towards you. However, we will always agree about the magnitude of this velocity, also called its speed.

The quantity we call the momentum is related to the velocity of the object; in fact for simple cases it is simply the mass of the object multiplied by the velocity. Once again, two observers may disagree about the momentum, since it involves direction; however they will always agree about the magnitude of the momentum. This magnitude is therefore also invariant.

The velocity, and by extension the momentum, has three components, one for each dimension of space. We write them as v_{x}, v_{y}, and v_{z} for the velocity and p_{x}, p_{y}, and p_{z} for the momentum.

What we want now is a 4D version of the momentum. Three of its components will be the components we already know of, p_{x}, p_{y}, and p_{z}. So we just need its “time” component, and the “magnitude” of this momentum is going to be an invariant.

It turns out that the equation we are looking for is the following (note the similarity of its form to the equation for the spacetime interval):

\displaystyle -\frac{E^{2}}{c^{2}}+p_{x}^{2}+p_{y}^{2}+p_{z}^{2}=-m^{2}c^{2}

The quantity m is the invariant we are looking for (The factors of c are just constants anyway), and it is called the “rest mass” of the object. As an effect of the unity of spacetime, the mass of an object as seen by an observer actually changes depending on its motion with respect to the observer; however, by definition, the rest mass is the mass of an object as seen by the observer when it is not moving with respect to the observer, therefore, it is an invariant. The quantity E stands for the energy.

Also, when the object is not moving with respect to us, we see no momentum in the x, y, or z direction, and the equation becomes E=mc^{2}, which is the very famous mass-energy equivalence which was published by Albert Einstein during his “miracle year” in 1905.

We now move on to quantum mechanics. In quantum mechanics our observables, such as the position, momentum, and energy, correspond to self-adjoint operators (see More Quantum Mechanics: Wavefunctions and Operators), whose eigenvalues are the values that we obtain when we perform a measurement of the observable corresponding to the operator.

The “momentum operator” (to avoid confusion between ordinary quantities and operators, we will introduce here the “hat” symbol on our operators) corresponding to the x component of the momentum is given by

\displaystyle \hat{p_{x}}=-i\hbar\frac{\partial}{\partial x}

The eigenvalue equation means that when we measure the x component of the momentum of a quantum system in the state represented by the wave function \psi(x,y,z,t), which is an eigenvector of the momentum operator, then the measurement will yield the value p_{x}, where p_{x} is the eigenvalue corresponding to \psi(x,y,z,t) (see Eigenvalues and Eigenvectors), i.e.

\displaystyle -i\hbar\frac{\partial \psi(x,y,z,t)}{\partial x}=p_{x}\psi(x,y,z,t)

Analogues exist of course for the y and z components of the momentum.
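
As a quick check of the eigenvalue equation, here is a sympy sketch (one dimension only, with a made-up plane-wave wave function and normalization ignored) showing that e^{ipx/\hbar} is indeed an eigenvector of the momentum operator with eigenvalue p.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', positive=True)

# a made-up plane wave with definite momentum p (normalization ignored)
psi = sp.exp(sp.I * p * x / hbar)

# apply the momentum operator -i*hbar*d/dx to the wave function
p_hat_psi = -sp.I * hbar * sp.diff(psi, x)

# the result is p times the original wave function, so psi is an eigenvector
print(sp.simplify(p_hat_psi / psi))  # p
```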

Meanwhile, we also have an energy operator given by

\displaystyle \hat{E}=i\hbar\frac{\partial}{\partial t}

To obtain a quantum version of the important equation above relating the energy, momentum, and the mass, we need to replace the relevant quantities by the corresponding operators acting on the wave function. Therefore, from

\displaystyle -\frac{E^{2}}{c^{2}}+p_{x}^{2}+p_{y}^{2}+p_{z}^{2}=-m^{2}c^{2}

we obtain an equation in terms of operators

\displaystyle -\frac{\hat{E}^{2}}{c^{2}}+\hat{p}_{x}^{2}+\hat{p}_{y}^{2}+\hat{p}_{z}^{2}=-m^{2}c^{2}

or explicitly, with the wavefunction,

\displaystyle \frac{\hbar^{2}}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}}-\hbar^{2}\frac{\partial^{2}\psi}{\partial x^{2}}-\hbar^{2}\frac{\partial^{2}\psi}{\partial y^{2}}-\hbar^{2}\frac{\partial^{2}\psi}{\partial z^{2}}=-m^{2}c^{2}\psi.

This equation is called the Klein-Gordon equation.
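
As a consistency check, here is a sympy sketch (restricted to one space dimension, with a made-up plane-wave wave function) showing that such a wave satisfies the Klein-Gordon equation precisely when E and p obey the energy-momentum relation we started from.

```python
import sympy as sp

t, x, E, p, m, c, hbar = sp.symbols('t x E p m c hbar', positive=True)

# a made-up plane wave moving in the x direction with energy E and momentum p
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# the left-hand side of the Klein-Gordon equation, keeping only the t and x terms
lhs = (hbar**2 / c**2) * sp.diff(psi, t, 2) - hbar**2 * sp.diff(psi, x, 2)

# the equation holds when this expression vanishes,
# i.e. exactly when E^2 = p^2 c^2 + m^2 c^4
print(sp.simplify(lhs / psi + m**2 * c**2))  # p**2 + m**2*c**2 - E**2/c**2
```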

The Klein-Gordon equation is a second-order differential equation. It can be “factored” in order to obtain two first-order differential equations, both of which are called the Dirac equation.

We elaborate more on what we mean by “factoring”. Suppose we have a quantity which can be written as a^{2}-b^{2}. From basic high school algebra, we know that we can “factor” it as (a+b)(a-b). Now suppose we have p_{x}=p_{y}=p_{z}=0. We can then write the Klein-Gordon equation as

\frac{E^{2}}{c^{2}}-m^{2}c^{2}=0

which factors into

(\frac{E}{c}-mc)(\frac{E}{c}+mc)=0

or

\frac{E}{c}-mc=0

\frac{E}{c}+mc=0

These are the kinds of equations that we want. However, the case where the momentum is nonzero complicates things. The solution of the physicist Paul Dirac was to introduce matrices (see Matrices) as coefficients. These matrices (there are four of them) are 4\times 4 matrices with complex coefficients, and are explicitly written down as follows:

\displaystyle \gamma^{0}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{array}\right)

\displaystyle \gamma^{1}=\left(\begin{array}{cccc}0&0&0&1\\ 0&0&1&0\\0&-1&0&0\\-1&0&0&0\end{array}\right)

\displaystyle \gamma^{2}=\left(\begin{array}{cccc}0&0&0&-i\\ 0&0&i&0\\0&i&0&0\\-i&0&0&0\end{array}\right)

\displaystyle \gamma^{3}=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&-1\\-1&0&0&0\\0&1&0&0\end{array}\right).

Using the laws of matrix multiplication, one can verify the following properties of these matrices (usually called gamma matrices):

(\gamma^{0})^{2}=1

(\gamma^{1})^{2}=(\gamma^{2})^{2}=(\gamma^{3})^{2}=-1

\gamma^{\mu}\gamma^{\nu}=-\gamma^{\nu}\gamma^{\mu} for \mu\neq\nu.
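
These properties can be checked by hand, but a short numpy sketch makes the verification painless; the matrices below are entered exactly as written above.

```python
import numpy as np

g0 = np.diag([1, 1, -1, -1]).astype(complex)
g1 = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]], dtype=complex)
g2 = np.array([[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]])
g3 = np.array([[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]], dtype=complex)

gammas = [g0, g1, g2, g3]
identity = np.eye(4)

# (gamma^0)^2 = +1 and (gamma^1)^2 = (gamma^2)^2 = (gamma^3)^2 = -1 (times the identity)
print(np.allclose(g0 @ g0, identity))
print(all(np.allclose(g @ g, -identity) for g in (g1, g2, g3)))

# gamma^mu gamma^nu = -gamma^nu gamma^mu whenever mu is not equal to nu
print(all(np.allclose(gammas[m] @ gammas[n], -gammas[n] @ gammas[m])
          for m in range(4) for n in range(4) if m != n))
```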

With the help of these properties, we can now factor the Klein-Gordon equation as follows:

\displaystyle \frac{\hat{E}^{2}}{c^{2}}-\hat{p}_{x}^{2}-\hat{p}_{y}^{2}-\hat{p}_{z}^{2}-m^{2}c^{2}=0

\displaystyle (\gamma^{0}\frac{\hat{E}}{c}-\gamma^{1}\hat{p}_{x}-\gamma^{2}\hat{p}_{y}-\gamma^{3}\hat{p}_{z}+mc)(\gamma^{0}\frac{\hat{E}}{c}-\gamma^{1}\hat{p}_{x}-\gamma^{2}\hat{p}_{y}-\gamma^{3}\hat{p}_{z}-mc)=0

\displaystyle \gamma^{0}\frac{\hat{E}}{c}-\gamma^{1}\hat{p}_{x}-\gamma^{2}\hat{p}_{y}-\gamma^{3}\hat{p}_{z}+mc=0

\displaystyle \gamma^{0}\frac{\hat{E}}{c}-\gamma^{1}\hat{p}_{x}-\gamma^{2}\hat{p}_{y}-\gamma^{3}\hat{p}_{z}-mc=0

Both of the last two equations are known as the Dirac equation, although for purposes of convention, we usually use the last one. Writing the operators and the wave function explicitly, this is

\displaystyle i\hbar\gamma^{0}\frac{\partial\psi}{c\partial t}+i\hbar\gamma^{1}\frac{\partial\psi}{\partial x}+i\hbar\gamma^{2}\frac{\partial\psi}{\partial y}+i\hbar\gamma^{3}\frac{\partial\psi}{\partial z}-mc\psi=0

We now have the Klein-Gordon equation and the Dirac equation, both of which are important in relativistic quantum field theory. In particular, the Klein-Gordon equation is used for “scalar” fields while the Dirac equation is used for “spinor” fields. This is related to how they “transform” under rotations (which, in relativity, includes “boosts” – rotations that involve both space and time). A detailed discussion of these concepts will be left to the references for now and will perhaps be tackled in future posts.

We will, however, mention one more important (and interesting) phenomenon in relativistic quantum mechanics. The equation E=mc^{2} allows for the “creation” of particle-antiparticle pairs out of seemingly nothing! Even when there seems to be “not enough energy”, there exists an “energy-time uncertainty principle”, which allows such particle-antiparticle pairs to exist, even for only a very short time. This phenomenon of “creation” (and the related phenomenon of “annihilation”) means we cannot take the number of particles in our system to be fixed.

With this, we need to modify our language to be able to describe a system with varying numbers of particles. We will still use the language of linear algebra, but we will define our “states” differently. In earlier posts in the blog, where we only dealt with a single particle, the “state” of the particle simply gave us information about the position. In the relativistic case (and in other cases where there are varying numbers of particles – for instance, when the system “gains” or “loses” particles from the environment), the number (and kind) of particles needs to be taken into account.

We will do this as follows. We first define a state with no particles, which we shall call the “vacuum”. We write it as |0\rangle. Recall that an operator is a function from state vectors to state vectors, hence, an operator acting on a state is another state. We now define a new kind of operator, called the “field” operator \psi, such that the state with a single particle of a certain type, which would have been given by  the wave function \psi in the old language, is now described by the state vector \psi|0\rangle.

Important note: The symbol \psi no longer refers to a state vector, but an operator! The state vector is \psi|0\rangle.

The Klein-Gordon and the Dirac equations still hold of course (otherwise we wouldn’t even have bothered to write them here). It is just important to take note that the symbol \psi now refers to an operator and not a state vector. We might as well write it as \hat{\psi}, but this is usually not done in the literature since we will not use \psi for anything else other than to refer to the field operator. Further, if we have a state with several particles, we can write \psi\phi...\theta|0\rangle. This new language is called second quantization, which does not mean “quantize for a second time”, but rather a second version of quantization, since the first version did not have the means to deal with varying numbers of particles.

We have barely scratched the surface of relativistic quantum field theory in this post. Even though much has been made about the quest to unify quantum mechanics and general relativity, there is so much that also needs to be studied in relativistic quantum field theory, and still many questions that need to be answered. Still, relativistic quantum field theory has had many impressive successes – one striking example is the theoretical formulation of the so-called Higgs mechanism, and its experimental verification almost half a century later. The success of relativistic quantum field theory also gives us a guide on how to formulate new theories of physics in the same way that F=ma guided the development of the very theories that eventually replaced it.

The reader is encouraged to supplement what little exposition has been provided in this post by reading the references. The books are listed in increasing order of sophistication, so it is perhaps best to read them in that order too, although The Road to Reality: A Complete Guide to the Laws of the Universe by Roger Penrose is a high-level popular exposition and not a textbook, so it is perhaps best read in tandem with Introduction to Elementary Particles by David J. Griffiths, which is a textbook, although it does have special relativity and basic quantum mechanics as prerequisites. One may check the references listed in the blog posts discussing these respective subjects.

References:

Quantum Field Theory on Wikipedia

Klein-Gordon Equation on Wikipedia

Dirac Equation on Wikipedia

Second Quantization on Wikipedia

Featured Image Produced by CERN

The Road to Reality: A Complete Guide to the Laws of the Universe by Roger Penrose

Introduction to Elementary Particles by David J. Griffiths

Quantum Field Theory by Fritz Mandl and Graham Shaw

Introduction to Quantum Field Theory by Michael Peskin and Daniel V. Schroeder

Lagrangians and Hamiltonians

We discussed the Lagrangian and Hamiltonian formulations of physics in My Favorite Equation in Physics, in our discussion of the historical development of classical physics right before the dawn of the revolutionary ideas of relativity and quantum mechanics at the turn of the 20th century. In this post we discuss them further, and more importantly, we provide some examples.

In order to discuss Lagrangians and Hamiltonians we first need to discuss the concept of energy. Energy is a rather abstract concept, but it can perhaps best be described as a certain conserved quantity – historically, this was how energy was thought of, and the motivation for its development under Rene Descartes and Gottfried Wilhelm Leibniz.

Consider for example, a stone at some height h above the ground. From this we can compute a quantity called the potential energy (which we will symbolize by V), which is going to be, in our case, given by

\displaystyle V=mgh

where m is the mass of the stone and g is the acceleration due to gravity, which close to the surface of the earth can be considered a constant roughly equal to 9.81 meters per second per second.

As the stone is dropped from that height, it starts to pick up speed. As its height decreases, its potential energy will also decrease. However, it will gain an increase in a certain quantity called the kinetic energy, which we will write as T and define as

\displaystyle T=\frac{1}{2}mv^{2}

where v is the magnitude of the velocity. In our case, since we are considering only motion in one dimension, this is simply given by the speed of the stone. At any point in the motion of the stone, however, the sum of the potential energy and the kinetic energy, called the total mechanical energy, stays at the same value. This is because the amount by which the potential energy decreases is the same as the amount by which the kinetic energy increases.

The expression for kinetic energy remains the same for any nonrelativistic system. The expression for the potential energy depends on the system, however, and is related to the force as follows:

\displaystyle F=-\frac{dV}{dx}.

We now give the definition of the quantity called the Lagrangian (denoted by L). It is simply given by

\displaystyle L=T-V.

There is a related quantity to the Lagrangian, called the action (denoted by S). It is defined as

\displaystyle S=\int_{t_{1}}^{t_{2}}L dt.

For a single particle, the Lagrangian depends on the position and the velocity of the particle. More generally, it will depend on the so-called “configuration” of the system, as well as the “rate of change” of this configuration. We will represent these variables by q and \dot{q} respectively (the “dot” notation is the one developed by Isaac Newton to represent the derivative with respect to time; in the notation of Leibniz, which we have used up to now, this is also written as \frac{dq}{dt}).

To explicitly show this dependence, we write the Lagrangian as L(q,\dot{q}). Therefore we shall write the action as follows:

\displaystyle S=\int_{t_{1}}^{t_{2}}L(q,\dot{q}) dt.

The Lagrangian formulation is important because it allows us to make a connection with Fermat’s principle in optics, which is the following statement:

Light always moves in such a way that it minimizes its time of travel.

Essentially, the Lagrangian formulation allows us to restate the good old Newton’s second law of motion as follows:

An object always moves in such a way that it minimizes its action.

In order to make calculations out of this “principle”, we have to make use of the branch of mathematics called the calculus of variations, which was specifically developed to deal with problems such as these. The calculations are fairly involved, but we will end up with the so-called Euler-Lagrange equations:

\displaystyle \frac{\partial L}{\partial q}-\frac{d}{dt}\frac{\partial L}{\partial\dot{q}}=0

We are using the notation \frac{d}{dt}\frac{\partial L}{\partial\dot{q}} instead of the otherwise cumbersome notation \frac{d\frac{\partial L}{\partial\dot{q}}}{dt}. It is very common notation in physics to write \frac{d}{dt} to refer to the derivative “operator” (see also More Quantum Mechanics: Wavefunctions and Operators).

For a nonrelativistic system, the Euler-Lagrange equations are merely a restatement of Newton’s second law; in fact, we can plug in the expressions for the Lagrangian, the kinetic energy, and the potential energy we wrote down earlier and end up exactly with F=ma.
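
As a concrete check of that statement, here is a short sympy sketch for the falling stone of the earlier example, using sympy’s built-in euler_equations helper with the Lagrangian L=\frac{1}{2}m\dot{h}^{2}-mgh, where the height h(t) plays the role of the configuration variable.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, g = sp.symbols('t m g', positive=True)
h = sp.Function('h')(t)          # the height of the stone as a function of time

T = sp.Rational(1, 2) * m * sp.diff(h, t) ** 2   # kinetic energy
V = m * g * h                                     # potential energy
L = T - V                                         # the Lagrangian

# the Euler-Lagrange equation for this Lagrangian; it is equivalent to
# m*h''(t) = -m*g, which is just F = ma with F = -dV/dh
print(euler_equations(L, h, t))
```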

Why then, go to all the trouble of formulating this new language, just to express something that we are already familiar with? Well, aside from the “aesthetically pleasing” connection with the very elegant Fermat’s principle, there are also numerous advantages to using the Lagrangian formulation. For instance, it exposes the symmetries of the system, as well as its conserved quantities (both of which are very important in modern physics). Also, the configuration is not always simply just the position, which means that it can be used to describe systems more complicated than just a single particle. Using the concept of a Lagrangian density, it can also describe fields like the electromagnetic field.

We make a mention of the role of the Lagrangian formulation  in quantum mechanics. The probability that a system will be found in a certain state (which we write as |\phi\rangle) at time t_{2}, given that it was in a state |\psi\rangle at time t_{1}, is given by (see More Quantum Mechanics: Wavefunctions and Operators)

\displaystyle |\langle\phi|e^{-iH(t_{2}-t_{1})}|\psi\rangle|^{2}

where H is the Hamiltonian (more on this later). The quantity

\displaystyle \langle\phi|e^{-iH(t_{2}-t_{1})}|\psi\rangle

is called the transition amplitude and can be expressed in terms of the Feynman path integral

\displaystyle \int e^{iS}Dq.

This is not an ordinary integral, as may be inferred from the different notation using Dq instead of dq. What this means is that we sum the quantity inside the integral, e^{iS}, over all “paths” taken by our system. This has the rather mind blowing interpretation that in going from one point to another, a particle takes all paths. One of the best places to learn more about this concept is in the book QED: The Strange Theory of Light and Matter by Richard Feynman. This book is adapted from Feynman’s lectures at the University of Auckland, videos of which are freely and legally available online (see the references below).

We now discuss the Hamiltonian. The Hamiltonian is defined in terms of the Lagrangian L by first defining the conjugate momentum p:

\displaystyle p=\frac{\partial L}{\partial\dot{q}}.

Then the Hamiltonian H is given by the formula

\displaystyle H=p\dot{q}-L.

In contrast to the Lagrangian, which is a function of q and \dot{q}, the Hamiltonian is expressed as a function of q and p. For many basic examples the Hamiltonian is simply the total mechanical energy, with the kinetic energy T now written in terms of p instead of \dot{q} as follows:

\displaystyle T=\frac{p^{2}}{2m}.

The advantage of the Hamiltonian  formulation is that it shows how the state of the system “evolves” over time. This is given by Hamilton’s equations:

\displaystyle \dot{q}=\frac{\partial H}{\partial p}

\displaystyle \dot{p}=-\frac{\partial H}{\partial q}

These are differential equations which can be solved to know the value of q and p at any instant of time t. One can visualize this better by imagining a “phase space” whose coordinates are q and p. The state of the system is then given by a point in this phase space, and this point “moves” across the phase space according to Hamilton’s equations.
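
To illustrate this “evolution” picture, here is a minimal Python sketch that steps Hamilton’s equations forward in time for the falling stone, with H=p^{2}/2m+mgq (simple Euler steps with made-up numerical values; a serious computation would use a better integrator).

```python
# Hamilton's equations for the falling stone: H = p^2/(2m) + m*g*q
m, g = 1.0, 9.81           # mass and gravitational acceleration (made-up units: kg, m/s^2)
q, p = 10.0, 0.0           # start at a height of 10 meters, at rest
dt = 0.001                 # time step in seconds

for _ in range(1000):      # integrate for one second
    q_dot = p / m          # dq/dt =  dH/dp
    p_dot = -m * g         # dp/dt = -dH/dq
    q, p = q + q_dot * dt, p + p_dot * dt

print(q, p)  # roughly 5.1 and -9.81: the stone has fallen about 4.9 meters in one second
```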

The Lagrangian and Hamiltonian formulations of classical mechanics may be easily generalized to more than one dimension. We will therefore have several different coordinates q_{i} for the configuration; for the simplest examples, these may refer to the Cartesian coordinates of 3-dimensional space, i.e. q_{1}=x, q_{2}=y, q_{3}=z. We summarize the important formulas here:

\displaystyle \frac{\partial L}{\partial q_{i}}-\frac{d}{dt}\frac{\partial L}{\partial\dot{q_{i}}}=0

\displaystyle H=\sum_{i}p_{i}\dot{q_{i}}-L

\displaystyle \dot{q_{i}}=\frac{\partial H}{\partial p_{i}}

\displaystyle \dot{p_{i}}=-\frac{\partial H}{\partial q_{i}}

In quantum mechanics, the Hamiltonian formulation still plays an important role. As described in More Quantum Mechanics: Wavefunctions and Operators, the Schrodinger equation describes the time evolution of the state of a quantum system in terms of the Hamiltonian. However, in quantum mechanics the Hamiltonian is not just a quantity but an operator, whose eigenvalues usually correspond to the observable values of the energy of the system.

In most modern publications discussing modern physics, the Lagrangian and Hamiltonian formulations are used, in particular for their various advantages. Although we have limited this discussion to nonrelativistic mechanics, in relativity both formulations are still very important. The equations of general relativity, also known as Einstein’s equations, may be obtained by minimizing the Einstein-Hilbert action. Meanwhile, there also exists a Hamiltonian formulation of general relativity called the Arnowitt-Deser-Misner formalism. Even the proposed candidates for a theory of quantum gravity, string theory and loop quantum gravity, make use of these formulations (the Lagrangian formulation seems to be more dominant in string theory, while the Hamiltonian formulation is more dominant in loop quantum gravity). It is therefore vital that anyone interested in learning about modern physics be at least comfortable in the use of this language.

References:

Lagrangian Mechanics on Wikipedia

Hamiltonian Mechanics on Wikipedia

Path Integral Formulation on Wikipedia

The Douglas Robb Memorial Lectures by Richard Feynman

QED: The Strange Theory of Light and Matter by Richard Feynman

Mechanics by Lev Landau and Evgeny Lifshitz

Classical Mechanics by Herbert Goldstein

More Quantum Mechanics: Wavefunctions and Operators

In Some Basics of Quantum Mechanics, we explained the role of vector spaces (which we first discussed in Vector Spaces, Modules, and Linear Algebra) in quantum mechanics. Linear transformations, which are functions between vector spaces, would naturally be expected to also play an important role in quantum mechanics. In particular, we would like to focus on the linear transformations from a vector space to itself. In this context, they are also referred to as linear operators.

But first, we explore a little bit more the role of infinite-dimensional vector spaces in quantum mechanics. In Some Basics of Quantum Mechanics, we limited our discussion to “two-state” systems, which are also referred to as “qubits”. We can imagine a system with more “classical states”. For example, consider a row of seats in a movie theater. One can sit in the leftmost chair, the second chair from the left, the third chair from the left, and so on. But if it was a quantum system, one can sit in all chairs simultaneously, at least until one is “measured”, in which case one will be found sitting in one seat only, and the probability of being found in a certain seat is the “absolute square” of the probability amplitude, which is the coefficient of the component of the “state vector” corresponding to that seat.

The number of “classical states” of the system previously discussed is the number of chairs in the row. But if we consider, for example, just “space”, and a system composed of a single particle in this space, whose classical state is specified by the position of the particle, then the number of states of the system is infinite, even if we only consider one dimension. It can be here, there, a meter from here, 0.1 meters from here, and so on. Even if the particle is constrained to, say, a one meter interval, there is still an infinite number of positions it could be in, since there are infinitely many numbers between 0 and 1. Hence the need for infinite-dimensional vector spaces.

As we have explained in Eigenvalues and Eigenvectors, sets of functions can provide us with an example of an infinite-dimensional vector space. We now elaborate on why functions are well suited to describing a quantum system like the one above. Let’s say, for example, that the particle is constrained to be on the interval from 0 to 1. For every point on the interval, there is a corresponding value of the probability amplitude. This is exactly the definition of a function from the interval [0,1] to the set of complex numbers \mathbb{C}. We would also have to normalize later on, although normalization for infinite-dimensional vector spaces is somewhat different, involving the notion of an integral (see An Intuitive Introduction to Calculus). Relatedly, the absolute square of the probability amplitude is then not a probability, but a probability density.
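
(To make this concrete, here is a small Python sketch, my own illustration rather than part of the original post, in which the wave function on [0,1] is represented by its values on a grid, normalized using a numerical integral, and then used to compute the probability of finding the particle in a subinterval; the particular function chosen is entirely hypothetical.)

import numpy as np

# The wave function of a particle on [0, 1], represented by its values on a grid
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
psi = np.sin(np.pi * x) * np.exp(2j * np.pi * x)   # a hypothetical complex-valued function

# Normalize so that the integral of |psi|^2 over [0, 1] equals 1
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# |psi|^2 is now a probability density, not a probability; the probability of
# finding the particle in, say, [0.4, 0.6] is the integral of the density there
mask = (x >= 0.4) & (x <= 0.6)
print(np.sum(np.abs(psi[mask])**2) * dx)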

The function that we have described is called the wave function. It is also a vector, an element of an infinite-dimensional vector space. It is most often written using the symbol \psi(x), reflecting its nature as a function. However, since it is also a vector, we can also still use Dirac notation and write it as |\psi\rangle. The wave function is responsible for the so-called wave-particle duality of quantum mechanics, as demonstrated in the famous double-slit experiment.

We have noted in My Favorite Equation in Physics that in classical mechanics the state of a one-particle system is given by the position and momentum of that particle, while in quantum mechanics the wave function is enough. How can this be, since the wave function only contains information about the position? Well, actually the wave function also contains information about the momentum – this is because of the so-called de Broglie relations, which relate the momentum of a particle in quantum mechanics to its wavelength as a wave.

Actually, the wave function is a function, and does not always have to look like what we normally think of as a wave. But whatever the shape of the wave function, even if it does not look like a wave, it is always a combination of different waves. This statement is part of the branch of mathematics called Fourier analysis. The wavelengths of the different waves are related to the momentum of the corresponding particle, and we should note that like the position, they are also in quantum superposition.

There is one thing to note about this. Suppose our wave function really is a single wave (technically, a sinusoidal wave). This wave gives us information about where we are likely to find the particle if we make a measurement – it is most likely to be found near the “peaks” and the “troughs” of the wave. But there are many “peaks” and “troughs” in the wave, so it is difficult to determine where the particle will be when we measure it. On the other hand, since the wave function is composed of only one wave, with a single wavelength, we can easily determine what the momentum is.

We can also put several different waves together, resulting in a function that is “peaked” only at one place. This means there is only one place where the particle is most likely to be. But since we have combined different waves together, there will not be a single wavelength, hence, the momentum cannot be determined easily! To summarize, if we know more about the position, we know less about the momentum – and if we know more about the momentum, we know less about the position. This observation leads to the very famous Heisenberg uncertainty principle.
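
(The following Python sketch, which is my own illustration and not part of the original discussion, makes this trade-off quantitative for a Gaussian wave packet: the spread in position and the spread in wave number are computed with a discrete Fourier transform, and their product comes out close to the lower bound of 1/2 that appears in the uncertainty principle when we work in units where \hbar=1. All numbers here are arbitrary choices.)

import numpy as np

# A Gaussian wave packet on a grid, with hbar = 1
N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
sigma, k0 = 1.0, 5.0
psi = np.exp(-x**2 / (4*sigma**2)) * np.exp(1j * k0 * x)

# Spread in position, computed from the probability density |psi|^2
prob_x = np.abs(psi)**2
prob_x /= prob_x.sum()
mean_x = np.sum(x * prob_x)
spread_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# Spread in wave number, computed from the Fourier transform of psi
k = 2*np.pi * np.fft.fftfreq(N, d=dx)
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= prob_k.sum()
mean_k = np.sum(k * prob_k)
spread_k = np.sqrt(np.sum((k - mean_k)**2 * prob_k))

print(spread_x * spread_k)   # close to 1/2, the smallest value the uncertainty principle allows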

The many technicalities of the wave function we leave to the references for now, and proceed to the role of linear transformations, or linear operators, in quantum mechanics. We have already encountered one special role of certain kinds of linear transformations in Eigenvalues and Eigenvectors. Observables are represented by self-adjoint operators. A self-adjoint operator A is a linear operator that satisfies the condition

\displaystyle \langle Au|v\rangle=\langle u|Av\rangle

for a vector |v\rangle and linear functional \langle u| corresponding to the vector |u\rangle. The notation |Av\rangle refers to the image of the vector |v\rangle under the linear transformation A, while \langle Au| refers to the linear functional corresponding to the vector |Au\rangle, which is the image of |u\rangle under A. The role of linear functionals in quantum mechanics was discussed in Some Basics of Quantum Mechanics.
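
(In finite dimensions, a self-adjoint operator is simply a matrix equal to its own conjugate transpose, and the defining condition above can be checked numerically. The matrices and vectors in the following Python sketch are randomly generated and purely illustrative.)

import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = B + B.conj().T                 # a self-adjoint operator: A equals its own conjugate transpose

u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

lhs = np.vdot(A @ u, v)            # <Au|v>; np.vdot conjugates its first argument
rhs = np.vdot(u, A @ v)            # <u|Av>
print(np.isclose(lhs, rhs))        # True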

There is, for example, an operator corresponding to the position, another corresponding to the momentum, another corresponding to the energy, and so on. If we measure any of these observables for a certain quantum system in the state |\psi\rangle, we are certain to obtain one of the eigenvalues of that observable, with the probability of obtaining the eigenvalue \lambda_{n} given by

\displaystyle |\langle \psi_{n}|\psi\rangle|^{2}

where \langle \psi_{n}| is the linear functional corresponding to the vector |\psi_{n}\rangle , which is the eigenvector corresponding to the eigenvalue \lambda_{n}. For systems like our particle in space, whose states form an infinite-dimensional vector space, the quantity above gives the probability density instead of the probability. After measurement, the state of the system “collapses” to the state given by the vector |\psi_{n}\rangle.
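
(Here is a finite-dimensional Python sketch of this recipe, using a hypothetical 3-by-3 self-adjoint matrix as the observable; the probabilities |\langle \psi_{n}|\psi\rangle|^{2} are computed from its eigenvectors and sum to 1.)

import numpy as np

# A hypothetical observable on a 3-state system: any self-adjoint matrix will do
A = np.array([[1.0,   0.5j, 0.0],
              [-0.5j, 2.0,  0.0],
              [0.0,   0.0,  3.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)      # the columns of eigenvectors are the |psi_n>

psi = np.array([1.0, 1.0j, 1.0])
psi = psi / np.linalg.norm(psi)                     # normalize the state |psi>

probabilities = np.abs(eigenvectors.conj().T @ psi)**2   # |<psi_n|psi>|^2 for each eigenvalue
print(eigenvalues, probabilities, probabilities.sum())   # the probabilities sum to 1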

Another very important kind of linear operator in quantum mechanics is a unitary operator. Unitary operators are kind of like the orthogonal matrices that represent rotations (see Rotating and Reflecting Vectors Using Matrices); in fact an orthogonal matrix is a special kind of unitary operator. We note that the orthogonal matrices had the special property that they preserved the “magnitude” of vectors; unitary operators are the same, except that they are more general, since the coefficients of vectors (the scalars) in this context are complex.

More technically, a unitary operator is a linear operator U that satisfies the following condition:

\displaystyle \langle u|v\rangle=\langle Uu|Uv\rangle

with the same conventions as earlier. What this means is that the probability of finding the system in the state given by the vector |u\rangle after measurement, given that it was in the state |v\rangle before measurement, remains the same if we rotate the system – or perform other “operations” represented by unitary operators, such as letting time pass (time evolution), or “translating” the system to a different location.
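
(As a quick check, here is a Python sketch, again with arbitrary illustrative numbers, showing that a rotation matrix, viewed as a unitary operator on a qubit, preserves inner products and hence the probabilities computed from them.)

import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],      # an orthogonal (hence unitary) "rotation"
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

rng = np.random.default_rng(1)
u = rng.normal(size=2) + 1j * rng.normal(size=2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)

print(np.isclose(np.vdot(u, v), np.vdot(U @ u, U @ v)))   # <u|v> = <Uu|Uv>: True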

So now we know that in quantum mechanics observables correspond to self-adjoint operators, and the “operations” of rotation, translation, and time evolution correspond to unitary operators. We might as well give a passing mention to one of the most beautiful laws of physics, Noether’s theorem, which states that the familiar “conservation laws” of physics (conservation of linear momentum, conservation of angular momentum, and conservation of energy) arise because the laws of physics do not change with translation, rotation, or time evolution. So Noether’s theorem in some way connects some of our “observables” and our “operations”.

We now revisit one of the “guiding questions” of physics, which we stated in My Favorite Equation in Physics:

“Given the state of a system at a particular time, in what state will it be at some other time?”

For classical mechanics, we can obtain the answer by solving the differential equation F=ma (Newton’s second law of motion). In quantum mechanics, we have instead the Schrodinger equation, which is the “F=ma” of the quantum realm. The Schrodinger equation can be written in the form

\displaystyle i\hbar\frac{d}{dt}|\psi(t)\rangle=H|\psi(t)\rangle

where i=\sqrt{-1} as usual, \hbar is a constant called the reduced Planck’s constant (its value is around 1.054571800\times 10^{-34} Joule-seconds), and H is a linear operator called the Hamiltonian. The Hamiltonian is a self-adjoint operator and in many cases corresponds to the energy observable. In the case where the Hamiltonian is time-independent, this differential equation can be solved directly to obtain the equation

\displaystyle |\psi(t)\rangle=e^{-\frac{i}{\hbar}Ht}|\psi(0)\rangle.

Since H is a linear operator, e^{-\frac{i}{\hbar}Ht} is also a linear operator (actually a unitary operator) and is the explicit form of the time evolution operator. For a Hamiltonian with time-dependence, one must use other methods to obtain the time evolution operator, such as making use of the so-called interaction picture or Dirac picture. But in any case, it is the Schrodinger equation, and the time evolution operator we can obtain from it, that provides us with the answer to the “guiding question” we asked above.
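
(To see the time evolution operator in action, here is a Python sketch for a qubit with a made-up time-independent Hamiltonian; the matrix exponential e^{-\frac{i}{\hbar}Ht} is computed with scipy, and one can check that it preserves the norm of the state, as a unitary operator should. We work in units where \hbar=1.)

import numpy as np
from scipy.linalg import expm

hbar = 1.0                                  # work in units where hbar = 1
H = np.array([[1.0, 0.5],                   # a made-up time-independent Hamiltonian for a qubit
              [0.5, -1.0]], dtype=complex)

psi0 = np.array([1.0, 0.0], dtype=complex)  # the initial state |psi(0)>
t = 2.0
U = expm(-1j * H * t / hbar)                # the time evolution operator exp(-iHt/hbar)
psi_t = U @ psi0                            # |psi(t)>

print(np.abs(psi_t)**2)                            # probabilities of the two basis states at time t
print(np.isclose(np.linalg.norm(psi_t), 1.0))      # U is unitary, so the norm is preserved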

References:

Wave Function on Wikipedia

Matter Wave on Wikipedia

Uncertainty Principle on Wikipedia

Self-Adjoint Operator on Wikipedia

Unitary Operator on Wikipedia

Noether’s Theorem on Wikipedia

Schrodinger Equation on Wikipedia

Introduction to Quantum Mechanics by David J. Griffiths

Modern Quantum Mechanics by Jun John Sakurai

Quantum Mechanics by Eugen Merzbacher

Some Basics of Quantum Mechanics

In My Favorite Equation in Physics we discussed a little bit of classical mechanics, the prevailing view of physics from the time of Galileo Galilei up to the start of the 20th century. Keeping in mind the ideas we introduced in that post, we now move on to one of the most groundbreaking ideas in the history of physics since that time (along with Einstein’s theory of relativity, which we have also discussed a little bit of in From Pythagoras to Einstein), the theory of quantum mechanics (also known as quantum physics).

We recall one of the “guiding” questions of physics that we mentioned in My Favorite Equation in Physics:

“Given the state of a system at a particular time, in what state will it be at some other time?”

This emphasizes the importance of the concept of “states” in physics. We recall that the state of a system (for simplicity, we consider a system made up of only one object whose internal structure we ignore – it may be a stone, a wooden block, a planet – but we may refer to this object as a “particle”) in classical mechanics is given by its position and velocity (or alternatively its position and momentum).

A system consisting of a single particle, whose state is specified by its position and velocity, or its position and momentum, might just be the simplest system that we can study in classical mechanics. But in this post, discussing quantum mechanics, we will start with something even simpler.

Consider a light switch. It can be in a “state” of “on” or “off”. Or perhaps we might consider a coin. This coin can be in a “state” of “heads” or “tails”. We consider a similar system for reasons of simplicity. In real life, there also exist such systems with two states, and they are being studied, for example, in cutting-edge research on quantum computing. In the context of quantum mechanics, such systems are called “qubits“, which is short for “quantum bits”.

Now an ordinary light switch may only be in a state of “on” or “off”, and an ordinary coin may be in a state of “heads” or “tails”, but we cannot have a state that is some sort of “combination” of these states. It would be unthinkable in our daily life. But a quantum mechanical system which can be in any of two states may also be in some combination of these states! This is the idea at the very heart of quantum mechanics, and it is called the principle of quantum superposition. Its basic statement can be expressed as follows:

If a system can exist in any number of classical states, it can also exist in any linear combination of these states.

This means that the space of states of a quantum mechanical system forms a vector space. The concept of vector spaces, and the branch of mathematics that studies them, called linear algebra, can be found in Vector Spaces, Modules, and Linear Algebra. Linear algebra (and its infinite-dimensional variant, called functional analysis) is the language of quantum mechanics. We have to mention that the field of “scalars” of this vector space is the set of complex numbers \mathbb{C}.
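
(Concretely, we may represent the two classical states as basis vectors of a two-dimensional complex vector space, as in the following short Python sketch; the particular coefficients are arbitrary, and we deal with normalization later in this post.)

import numpy as np

# The two classical states as basis vectors of a two-dimensional complex vector space
on  = np.array([1.0, 0.0], dtype=complex)
off = np.array([0.0, 1.0], dtype=complex)

# By the principle of quantum superposition, any linear combination is also a possible state
psi = 3.0 * on + (2.0 - 1.0j) * off
print(psi)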

There is one more mathematical procedure that we have to apply to these states, called “normalization“, which we will learn about later on in this post. First we have to explain what it means if we have a state that is in a “linear combination” of other states.

We write our quantum state in the so-called “Dirac notation”. Consider the “quantum light switch” we described earlier (in real life, we would have something like an electron in “spin up” or “spin down” states, or perhaps a photon in “horizontally polarized” or “vertically polarized” states). We write the “on” state as

|\text{on}\rangle

and the “off” state as

|\text{off}\rangle

The principle of quantum superposition states that we may have a state such as

|\text{on}\rangle+|\text{off}\rangle

This state is made up of equal parts “on” and “off”. Quantum-mechanically, such a state may exist, but when we, classical beings that we are (in the sense that we are very big), interact with or make measurements of this system, we only ever find it either in the state “on” or the state “off”, and never in a state that is a linear combination of both. What then does it mean for it to be in a state that is a linear combination of both “on” and “off”, if we can never even find it in such a state?

If a system is in the state |\text{on}\rangle+|\text{off}\rangle before we make our measurement, then there are equal chances that we will find it in the “on” state or in the “off” state after measurement. We do not know beforehand whether we will get an “on” state or “off” state, which implies that there is a certain kind of “randomness” involved in quantum-mechanical systems.

It is at this point that we reflect on the nature of randomness. Let us consider a process we would ordinarily suppose to be “random”, for example the flipping of a coin, or the throwing of a die. We consider these processes random because we do not know all the factors at play, but if we had all the information, such as the force of the flip or the throw, the air resistance and its effects, and so on, and we make all the necessary calculations, at least “theoretically” we would be able to predict the result. Such a process is not really random; we only consider it random because we lack a certain knowledge that if we only possessed, we could use in order to determine the result with absolute certainty.

The “randomness” in quantum mechanics involves no such knowledge; we could know everything that is possible for us to know about the system, and yet, we could never predict with absolute certainty whether we would get an “on” or an “off” state when we make a measurement on the system. We might perhaps say that this randomness is “truly random”. All we can conclude, from our knowledge that the state of the system before measurement is |\text{on}\rangle+|\text{off}\rangle, is that there are equal chances of finding it in the “on” or “off” state after measurement.

If the state of the system before measurement is |\text{on}\rangle, then after measurement it will also be in the state |\text{on}\rangle. If we had some state like 1000|\text{on}\rangle+5|\text{off}\rangle before measurement, then there will be a much greater chance that it will be in the state |\text{on}\rangle after measurement, although there is still a small chance that it will be in the state |\text{off}\rangle.

We now introduce the concept of normalization. We have seen that the “coefficients” of the components of our “state vector” correspond to certain probabilities, although we have not been very explicit as to how these coefficients are related to the probabilities. We have a well-developed mathematical language to deal with probabilities. When an event is absolutely certain to occur, for instance, we say that the event has a probability of 100%, or that it has a probability of 1. We want to use this language in our study of quantum mechanics.

We discussed in Matrices the concept of a linear functional, which assigns a real or complex number (or more generally an element of the field of scalars) to any vector. For vectors expressed as column matrices, the linear functionals were expressed as row matrices. In Dirac notation, we also call our “state vectors”, such as |\text{on}\rangle and |\text{off}\rangle, “kets”, and we will have special linear functionals \langle \text{on}| and \langle \text{off}|, called “bras” (the words “bra” and “ket” come from the word “bracket”, with the fate of the letter “c” unknown; this notation was developed by the physicist Paul Dirac, who made great contributions to the development of quantum mechanics).

The linear functional \langle \text{on}| assigns to any “state vector” representing the state of the system before measurement a certain number which when squared (or rather “absolute squared” for complex numbers) gives the probability that it will be found in the state |\text{on}\rangle after measurement. We have said earlier that if the system is known to be in the state |\text{on}\rangle before the measurement, then after the measurement the system will also be in the state |\text{on}\rangle. In other words, given that the system is in the state |\text{on}\rangle before measurement, the probability of finding it in the state |\text{on}\rangle after measurement is equal to 1. We express this explicitly as

|\langle \text{on}|\text{on}\rangle|^{2}=1

From this observation, we make the requirement that

\langle \psi |\psi \rangle=1

for any state |\psi\rangle. This will lead to the requirement that if we have the state C_{1}|\text{on}\rangle+C_{2}|\text{off}\rangle, the coefficients C_{1} and C_{2} must satisfy the equation

|C_{1}|^{2}+|C_{2}|^{2}=1

or

C_{1}^{*}C_{1}+C_{2}^{*}C_{2}=1

The second expression is to remind the reader that these coefficients are complex. Since we express probabilities as real numbers, it is necessary that we use the “absolute square” of these coefficients, given by multiplying each coefficient by its complex conjugate.

So, in order to express the state where there are equal chances of finding the system in the state |\text{on}\rangle or in the state |\text{off}\rangle after measurement, we do not write it anymore as |\text{on}\rangle+|\text{off}\rangle, but instead as

\frac{1}{\sqrt{2}}|\text{on}\rangle+\frac{1}{\sqrt{2}}|\text{off}\rangle

The factors of \frac{1}{\sqrt{2}} are there to make our notation agree with the notation in use in the mathematical theory of probabilities, where an event which is certain has a probability of 1. They are called “normalizing factors”, and this process is what is known as normalization.
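
(As a small numerical illustration of my own, not in the original post, here is how this normalization procedure looks for the state 1000|\text{on}\rangle+5|\text{off}\rangle mentioned earlier.)

import numpy as np

# Normalizing the state 1000|on> + 5|off>
C = np.array([1000.0, 5.0], dtype=complex)
C = C / np.sqrt(np.sum(np.abs(C)**2))       # divide by sqrt(|C_1|^2 + |C_2|^2)
print(np.abs(C)**2, np.sum(np.abs(C)**2))   # the probabilities of "on" and "off", summing to 1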

We may ask, therefore, what is the probability of finding our system in the state |\text{on}\rangle after measurement, given that before the measurement it was in the state \frac{1}{\sqrt{2}}|\text{on}\rangle+\frac{1}{\sqrt{2}}|\text{off}\rangle. We already know the answer; since there are equal chances of finding it in the state |\text{on}\rangle or |\text{off}\rangle, then we should have a 50% probability of finding it in the state |\text{on}\rangle after measurement, or that this result has probability 0.5. Nevertheless, we show how we use Dirac notation and normalization to compute this probability:

|\langle \text{on}|(\frac{1}{\sqrt{2}}|\text{on}\rangle+\frac{1}{\sqrt{2}}|\text{off}\rangle)|^{2}

|\langle \text{on}|\frac{1}{\sqrt{2}}|\text{on}\rangle+\langle \text{on}|\frac{1}{\sqrt{2}}|\text{off}\rangle|^{2}

|\frac{1}{\sqrt{2}}\langle \text{on}|\text{on}\rangle+\frac{1}{\sqrt{2}}\langle \text{on}|\text{off}\rangle|^{2}

We know that \langle \text{on}|\text{on}\rangle=1, and that \langle \text{on}|\text{off}\rangle=0, which leads to

|\frac{1}{\sqrt{2}}|^{2}

\frac{1}{2}

as we expected. We have used the “linear” property of the linear functionals here, emphasizing once again how important the language of linear algebra is to describing quantum mechanics.
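
(The same computation can be done numerically: the bra \langle \text{on}| acts as the conjugate transpose of the ket |\text{on}\rangle, so the probability amplitude is just an inner product. This is only an illustrative Python sketch, not something from the original post.)

import numpy as np

on  = np.array([1.0, 0.0], dtype=complex)
off = np.array([0.0, 1.0], dtype=complex)
psi = (on + off) / np.sqrt(2)          # the normalized state above

amplitude = np.vdot(on, psi)           # <on|psi>; np.vdot conjugates its first argument
print(np.abs(amplitude)**2)            # 0.5, as computed above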

For now that’s about it for this post. We have glossed over many aspects of quantum mechanics in favor of introducing and emphasizing how linear algebra is used as the foundation for its language; and the reason why linear algebra is chosen is because it fits with the principle at the very heart of quantum mechanics, the principle of quantum superposition.

So much of the notorious “weirdness” of quantum mechanics comes from the principle of quantum superposition, and this “weirdness” has found many applications both in explaining why our world is the way it is, and also in improving our quality of life through technological inventions such as semiconductor electronics.

I’ll make an important clarification at this point; we do not really “measure” the state of the system. What we really measure are “observables” which tell us something about the state of the system. These observables are represented by linear transformations, but to understand them better we need the concept of eigenvectors and eigenvalues, which I have not yet discussed in this blog, and did not want to discuss too much in this particular post. In the future perhaps we will discuss it; for now the reader is directed to the references listed at the end of the post. What we have discussed here, the probability of finding the system in a certain state after measurement given that it is in some other state before measurement, is related to the phenomenon known as “collapse“.

Also, despite the fact that we have only tackled two-state (or qubit) systems in this post, it is not too difficult to generalize, at least conceptually, to systems with more states, or even systems with an infinite number of states. The case where the states are given by the position of a particle leads to the famous wave-particle duality. The reader is encouraged once again to read about it in the references below, and at the same time try to think about how one should generalize what we have discussed in here to that case. Such cases will hopefully be tackled in future posts.

(Side remark: I had originally intended to cover quite a bit of ground in at least the basics of quantum mechanics in this post; but before I noticed it had already become quite a hefty post. I have not even gotten to the Schrodinger equation. Well, hopefully I can make more posts on this subject in the future. There’s so much one can make a post about when it comes to quantum mechanics.)

References:

Quantum Mechanics on Wikipedia

Quantum Superposition on Wikipedia

Bra-Ket Notation on Wikipedia

Wave Function Collapse on Wikipedia

Parallel Universes #1 – Basic Copenhagen Quantum Mechanics at Passion for STEM

If You’re Losing Your Grip on Quantum Mechanics, You May Want to Read Up on This at quant-ph/math-ph

The Feynman Lectures on Physics by Richard P. Feynman

Introduction to Quantum Mechanics by David J. Griffiths

Modern Quantum Mechanics by Jun John Sakurai