# Differentiable Manifolds Revisited

In many posts on this blog, such as Geometry on Curved Spaces and Connection and Curvature in Riemannian Geometry, we have discussed the subject of differential geometry, usually in the context of physics. We have discussed what is probably its most famous application to date, as the mathematical framework of general relativity, which in turn is the foundation of modern day astrophysics. We have also seen its other applications to gauge theory in particle physics, and in describing the phase space, whose points correspond to the “states” (described by the positions and momenta of particles) of a physical system in the Hamiltonian formulation of classical mechanics.

In this post, similar to what we have done in Varieties and Schemes Revisited for the subject of algebraic geometry, we take on the objects of study of differential geometry in more technical terms. These objects correspond to our everyday intuition, but we must develop some technical language in order to treat them “rigorously”, and also to be able to generalize them into other interesting objects. As we give the technical definitions, we will also discuss the intuitive inspiration for these definitions.

Just as varieties and schemes are the main objects of study in algebraic geometry (that is until the ideas discussed in Grothendieck’s Relative Point of View were formulated), in differential geometry the main objects of study are the differentiable manifolds. Before we give the technical definition, we first discuss the intuitive idea of a manifold.

A manifold is some kind of space that “locally” looks like Euclidean space $\mathbb{R}^{n}$. $1$-dimensional Euclidean space is just the line $\mathbb{R}$, $2$-dimensional Euclidean space is the plane $\mathbb{R}^{2}$, and so on. Obviously, Euclidean space itself is a manifold, but we want to look at more interesting examples, i.e. spaces that “locally” look like Euclidean space but “globally” are very different from it.

As an example, consider the surface of the Earth. “Locally”, that is, on small regions, the surface of the Earth appears flat. However, “globally”, we know that it is actually round.

Another way to think about things is that any small region on the surface of the Earth can be put on a flat map (possibly with some distortion of distances). However, there is no flat map that can include every point on the surface of the Earth while continuing to make sense. The best we can do is use several maps with some overlaps between them, transitioning between different maps when we change the regions we are looking at. We want these overlaps and transitions to make sense in some way.

In differential geometry, what we want is to be able to do calculus on these more general manifolds the way we can do calculus on the line, on the plane, and so on. In order to do this, we require that the “transitions” alluded to in the previous paragraph are given by differentiable functions.

Summarizing the above discussion in technical terms, an $n$-dimensional differentiable manifold is a topological space $X$ together with homeomorphisms $\varphi_{\alpha}$ from open subsets $U_{\alpha}$ covering $X$ onto open subsets of $\mathbb{R}^{n}$, such that each composition $\varphi_{\alpha}\circ\varphi_{\beta}^{-1}$ is a differentiable function on $\varphi_{\beta}(U_{\alpha}\cap U_{\beta})\subset\mathbb{R}^{n}$.

Following the analogy with maps we discussed earlier, the pair $\{U_{\alpha}, \varphi_{\alpha}\}$ is called a chart, and the collection of all these charts that cover the manifold is called an atlas. The map $\varphi_{\alpha}\circ\varphi_{\beta}^{-1}|_{\varphi_{\beta}(U_{\alpha}\cap U_{\beta})}$ is called a transition map.
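To make transition maps concrete, here is a small numerical sketch in Python. The chart formulas below (stereographic projections of the unit circle from its north and south poles) are a standard choice for illustration, not something taken from the discussion above; on the overlap, the transition map works out to be the differentiable function $u\mapsto 1/u$.

```python
# Two stereographic-projection charts on the unit circle S^1: phi_N projects
# from the north pole (0, 1), phi_S from the south pole (0, -1). Each is a
# homeomorphism from the circle minus one pole onto the real line R.
def phi_N(x, y):
    return x / (1 - y)   # defined away from (0, 1)

def phi_S(x, y):
    return x / (1 + y)   # defined away from (0, -1)

def phi_N_inverse(u):
    # Inverse of phi_N: sends u in R back to a point on the circle.
    return (2 * u / (u**2 + 1), (u**2 - 1) / (u**2 + 1))

# The transition map phi_S o phi_N^{-1} on the overlap (u != 0) should be
# the smooth function u -> 1/u; we check this numerically.
for u in [0.5, -2.0, 3.7]:
    x, y = phi_N_inverse(u)
    assert abs(x**2 + y**2 - 1) < 1e-12        # the point lies on S^1
    assert abs(phi_S(x, y) - 1 / u) < 1e-12    # transition map is u -> 1/u
```

Since $u\mapsto 1/u$ is differentiable away from $u=0$ (which is excluded from the overlap), these two charts make the circle a differentiable manifold.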

Now that we have defined what a manifold technically is, we discuss some related concepts, in particular the objects that “live” on our manifold. Perhaps the most basic of these objects are the functions on the manifold; however, we won’t discuss the functions themselves too much since there are not that many new concepts regarding them.

Instead, we will use one of the most useful concepts when it comes to discussing objects that “live” on manifolds – fiber bundles (see Vector Fields, Vector Bundles, and Fiber Bundles). A fiber bundle is given by a topological space $E$ with a projection $\pi$ from $E$ to a base space $B$, with the requirement that every point of $B$ has a neighborhood $U$ such that $\pi^{-1}(U)$ is homeomorphic to the product space $U\times F$, where $F$ is the fiber, defined as $\pi^{-1}(x)$ for any point $x$ of $B$. When the fiber $F$ is also a vector space, we refer to $E$ as a vector bundle. In differential geometry, we also require that the relevant maps be diffeomorphisms, i.e. differentiable bijections with differentiable inverses.

One of the most important kinds of vector bundles in differential geometry is the tangent bundle, which can be thought of as the collection of all the tangent spaces of a manifold, one for every point of the manifold. We have already made use of these concepts in Geometry on Curved Spaces, and Connection and Curvature in Riemannian Geometry. We needed them, for example, to discuss the notion of parallel transport and the covariant derivative in Riemannian geometry. We will now discuss these concepts more technically.

Let $\mathcal{O}_{p}$ be the ring of real-valued differentiable functions defined in a neighborhood of a point $p$ in a differentiable manifold $X$. We define the real tangent space at $p$, written $T_{\mathbb{R},p}(X)$, to be the vector space of $p$-centered $\mathbb{R}$-linear derivations, which are $\mathbb{R}$-linear maps $D: \mathcal{O}_{p}\rightarrow\mathbb{R}$ satisfying Leibniz’s rule $D(fg)=f(p)Dg+g(p)Df$. Any such derivation $D$ can be written in the following form:

$\displaystyle D=\sum_{i}a_{i}\frac{\partial}{\partial x_{i}}\bigg\rvert_{p}$

This means that the operators $\frac{\partial}{\partial x_{i}}\big\rvert_{p}$ form a basis for the real tangent space at $p$. It might be a little jarring to see “differential operators” serving as a basis for a vector space, but it might perhaps be helpful to think of tangent vectors as giving “how fast” functions on the manifold are changing at a certain point. See the following picture:

The manifold is $M$, and its tangent space at the point $x$ is $T_{x}M$. One of the tangent vectors, $v$, is shown. The parametrized curve $\gamma(t)$ is often used to define the tangent vector, although that is not the approach we have given here (it may be found in the references, and is closely related to the definition we have given).
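The curve-based point of view just mentioned can be sketched numerically. In the Python snippet below (the curve and functions are merely illustrative choices), we approximate $D(f)=\frac{d}{dt}f(\gamma(t))\big\rvert_{t=0}$ by finite differences, and check that it agrees with the coordinate formula and satisfies Leibniz’s rule:

```python
import math

# A parametrized curve gamma(t) = (cos t, sin t) through p = gamma(0) = (1, 0);
# its velocity at t = 0 is (0, 1), and it acts on functions as a derivation
# D(f) = d/dt f(gamma(t)) |_{t=0}.
def gamma(t):
    return (math.cos(t), math.sin(t))

def D(f, h=1e-6):
    # Central-difference approximation to d/dt f(gamma(t)) at t = 0.
    return (f(*gamma(h)) - f(*gamma(-h))) / (2 * h)

f = lambda x, y: x**2 * y + y**3
g = lambda x, y: x + 2 * y
p = gamma(0.0)

# Coordinate form D = sum_i a_i d/dx_i |_p with (a_1, a_2) = gamma'(0) = (0, 1):
# here D(f) should equal df/dy at (1, 0), which is x^2 + 3y^2 = 1.
assert abs(D(f) - 1.0) < 1e-6

# Leibniz's rule: D(fg) = f(p) D(g) + g(p) D(f).
fg = lambda x, y: f(x, y) * g(x, y)
assert abs(D(fg) - (f(*p) * D(g) + g(*p) * D(f))) < 1e-5
```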

Another concept that we will need is the concept of $1$-forms. A $1$-form on a particular point on the manifold takes a single tangent vector (an element of the tangent space at that particular point) as an input and gives a number as an output. Just as we have the notion of tangent vectors, tangent spaces, and tangent bundles, we also have the “dual” notion of $1$-forms, cotangent spaces, and cotangent bundles, and just as the basis of the tangent vectors are given by $\frac{\partial}{\partial x_{i}}$, we also have a basis of $1$-forms given by $dx_{i}$.
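In coordinates, a $1$-form at a point is determined by its components in the dual basis $dx_{i}$, and pairing it with a tangent vector is just a dot product of components. A minimal Python sketch (the function and point below are illustrative choices):

```python
# Represent a tangent vector at a point by its components (a_1, a_2) in the
# basis (d/dx, d/dy), and a 1-form by its components in the dual basis
# (dx, dy); the pairing is then just the dot product of components.
def pair(one_form, vector):
    return sum(c * a for c, a in zip(one_form, vector))

# The dual-basis relation dx_i(d/dx_j) = delta_ij:
assert pair((1, 0), (1, 0)) == 1 and pair((1, 0), (0, 1)) == 0

# The differential df = (df/dx) dx + (df/dy) dy of f(x, y) = x^2 y at the
# point p = (1, 2) has components (df/dx, df/dy) = (2xy, x^2) = (4, 1);
# evaluated on a vector it gives the directional derivative of f at p.
df_at_p = (4, 1)
v = (3, 5)
assert pair(df_at_p, v) == 17   # 4*3 + 1*5
```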

Aside from $1$-forms, we also have mathematical objects that take two elements of the tangent space at a point (i.e. two tangent vectors at that point) as inputs and give a number as an output.

An example that we have already discussed in this blog is the metric tensor, which we refer to sometimes as simply the metric (calling it the metric tensor, however, helps prevent confusion as there are many different concepts in mathematics also referred to as a metric). We have been thinking of the metric tensor as expressing the “infinitesimal distance formula” at a certain point on the manifold.

The metric tensor is defined as a symmetric, nondegenerate, bilinear form. “Symmetric” means that we can interchange the two inputs (the tangent vectors) and get the same output. “Nondegenerate” means that, holding one of the inputs fixed and letting the other vary, having an output of zero for all the varying inputs means that the fixed input must be zero. “Bilinear form” means that it is linear in either input – it respects addition of vectors and multiplication by scalars. If we hold one input fixed, it is then a linear transformation of the other input.

In the case of our previous discussions on Riemannian geometry, the metric tensor gives a positive real number whenever both inputs are the same nonzero tangent vector, expressing the (squared) infinitesimal distance. Hence, a metric tensor on a differentiable manifold which is positive-definite in this sense is called a Riemannian metric. A manifold with a Riemannian metric is of course called a Riemannian manifold.

In general relativity, the spacetime interval, unlike the distance, may not necessarily be positive. More technically, spacetime in general relativity is an example of a pseudo-Riemannian (or semi-Riemannian) manifold, which does not require the metric to be positive-definite (more specifically it is a Lorentzian manifold – we will leave the details of these definitions to the references for now). As we have seen though, many concepts from the study of Riemannian manifolds carry over to the pseudo-Riemannian case.
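A quick computational sketch of these definitions (the vectors below are arbitrary illustrative choices): in coordinates, the metric tensor at a point is a symmetric matrix; the Euclidean metric is positive-definite, while the Minkowski (Lorentzian) metric assigns either sign to $g(v,v)$:

```python
import numpy as np

# The metric tensor at a point, in coordinates, is a symmetric matrix G;
# its action on two tangent vectors v, w is g(v, w) = v^T G w.
def g(G, v, w):
    return v @ G @ w

euclid = np.eye(2)                   # a Riemannian metric on R^2
minkowski = np.diag([-1.0, 1.0])     # a Lorentzian metric (1+1 dimensions)

v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0])

for G in (euclid, minkowski):
    assert g(G, v, w) == g(G, w, v)   # symmetric
    assert np.linalg.det(G) != 0      # nondegenerate

# Riemannian: g(v, v) > 0 for every nonzero v.
assert g(euclid, v, v) > 0
# Lorentzian: g(v, v) can take either sign ("timelike" vs "spacelike").
timelike, spacelike = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert g(minkowski, timelike, timelike) < 0 < g(minkowski, spacelike, spacelike)
```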

Another example of these kinds of objects are the differential forms (see Differential Forms). One important example of these objects is the symplectic form in symplectic geometry (see An Intuitive Introduction to String Theory and (Homological) Mirror Symmetry), which is used as the mathematical framework of the Hamiltonian formulation of classical mechanics. Just as the metric tensor is related to the “infinitesimal distance”, the symplectic form is related to the “infinitesimal area”.

As an example of the symplectic form, the “phase space” in the Hamiltonian formulation of classical mechanics is made up of points which correspond to a “state” of a system as given by the position and momentum of its particles. For the simple case of one particle constrained to move in a line, the symplectic form (written $\omega$) is given by

$\displaystyle \omega=dq\wedge dp$

where $q$ is the position and $p$ is the momentum, serving as the coordinates of the phase space (by the way, the phase space is itself already the cotangent bundle of the configuration space, the space whose points are the different “configurations” of the system, which we can think of as a generalization of the concept of position).
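A minimal sketch of this formula: in the coordinates $(q,p)$, the symplectic form $\omega=dq\wedge dp$ evaluated on two tangent vectors is the signed area of the parallelogram they span, and it is antisymmetric and nondegenerate (the sample vectors below are arbitrary):

```python
# The symplectic form omega = dq ^ dp on the phase space of a particle on a
# line, evaluated on two tangent vectors v = (v_q, v_p), w = (w_q, w_p):
def omega(v, w):
    # (dq ^ dp)(v, w) = dq(v) dp(w) - dq(w) dp(v): the signed area of the
    # parallelogram spanned by v and w in the (q, p)-plane.
    return v[0] * w[1] - w[0] * v[1]

v, w = (1.0, 2.0), (3.0, 5.0)
assert omega(v, w) == -omega(w, v)   # antisymmetry (it is a 2-form)
assert omega(v, v) == 0
assert omega(v, w) == 1*5 - 3*2      # the signed area
```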

Technically, the symplectic form is defined as a closed, nondegenerate $2$-form. By “$2$-form”, we mean that it is a differential form, obeying the properties we gave in Differential Forms, such as antisymmetry. The notion of a differential form being “closed”, also already discussed in the same blog post, means that its exterior derivative is zero. “Nondegenerate” of course was already defined in the preceding paragraphs. The symplectic form is also a bilinear form, although this is a property of all $2$-forms, considered as functions of two tangent vectors at some point on the manifold. More generally, all differential forms are examples of multilinear forms. A manifold with a symplectic form is called a symplectic manifold.

There is still so much more to differential geometry, but for now, we have at least accomplished the task of defining some of its most basic concepts in a more technical manner. The language we have discussed here is important to deeper discussions of differential geometry.

References:

Differential Geometry on Wikipedia

Differentiable Manifold on Wikipedia

Tangent Space on Wikipedia

Tangent Bundle on Wikipedia

Cotangent Space on Wikipedia

Cotangent Bundle on Wikipedia

Riemannian Manifold on Wikipedia

Pseudo-Riemannian Manifold on Wikipedia

Symplectic Manifold on Wikipedia

Differential Geometry of Curves and Surfaces by Manfredo P. do Carmo

Differential Geometry: Bundles, Connections, Metrics and Curvature by Clifford Henry Taubes

Foundations of Differential Geometry by Shoshichi Kobayashi and Katsumi Nomizu

Geometry, Topology, and Physics by Mikio Nakahara

# Grothendieck’s Relative Point of View

In Varieties and Schemes Revisited we defined the notion of schemes, a far-reaching generalization inspired by the concept of varieties, which are essentially kinds of “shapes” defined by polynomials in some way. However, the definition of schemes was but one of many innovations in algebraic geometry developed by the mathematician Alexander Grothendieck. In this post, we discuss another of these innovations, the so-called “relative point of view”, in which the focus is not just on schemes in isolation, but schemes relative to (with a morphism to) some “base scheme”.

Let $S$ be a scheme. A scheme over $S$, or an $S$-scheme, is a scheme $X$ with a morphism $f:X\rightarrow S$ called the structural morphism. If $Y$ is another $S$-scheme with structural morphism $g:Y\rightarrow S$, a morphism of $S$-schemes is a morphism $u:X\rightarrow Y$ such that $f=g\circ u$.

If the scheme $S$ is the spectrum of some ring $R$, we may also refer to $X$ above as a scheme over $R$. Every ring has a morphism from the ring of ordinary integers $\mathbb{Z}$, and every scheme therefore has a morphism to the scheme $\text{Spec}(\mathbb{Z})$, so we may think of all schemes as schemes over $\mathbb{Z}$.

Given two schemes $X$ and $Y$ over a third scheme $S$, we define the fiber product $X\times_{S}Y$ to be a scheme together with projection morphisms $\pi_{X}:X\times_{S}Y\rightarrow X$ and $\pi_{Y}:X\times_{S}Y\rightarrow Y$ such that $f\circ\pi_{X}=g\circ\pi_{Y}$, and such that for any other scheme $Z$ with morphisms $p:Z\rightarrow X$ and $q:Z\rightarrow Y$ satisfying $f\circ p=g\circ q$, there is a unique morphism $Z\rightarrow X\times_{S}Y$ compatible with the projections (this universal property determines the fiber product up to isomorphism; the concept of fiber product is part of category theory – see also More Category Theory: The Grothendieck Topos).
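As a concrete instance (a standard fact about affine schemes, stated here without proof), the fiber product of affine schemes corresponds to the tensor product of rings:

$\displaystyle \text{Spec}(A)\times_{\text{Spec}(R)}\text{Spec}(B)=\text{Spec}(A\otimes_{R}B)$

so that, for example, $\mathbb{A}_{k}^{1}\times_{\text{Spec}(k)}\mathbb{A}_{k}^{1}=\text{Spec}(k[x]\otimes_{k}k[y])=\text{Spec}(k[x,y])=\mathbb{A}_{k}^{2}$.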

We can use the fiber product to introduce the concept of base change. Given a scheme $X$ over a scheme $S$, and a morphism $S'\rightarrow S$, the fiber product $X\times_{S}S'$ is a scheme over $S'$. We may think of it as being “induced” by the morphism $S'\rightarrow S$. One of the things that can be done with this idea of base change is to look at the properties of $X\times_{S}S'$ and see if we can use these to learn about the properties of $X$, which may be useful if the properties of $X$ are difficult to determine directly compared to the properties of $X\times_{S}S'$ (in essence we want to be able to attack a difficult problem indirectly by first attacking an easier problem related to it, which is a common strategy in mathematics).

A special case of base change is when $S'$ is given by the spectrum of the residue field (see Localization) $k$ corresponding to a point $P$ of $S$. There is a morphism of schemes $\text{Spec}(k)\rightarrow S$ which we may think of as the inclusion of the point $P$ into the scheme $S$. Then the fiber product $X\times_{S}\text{Spec}(k)$ is called the fiber of $X$ at the point $P$. The terminology is perhaps reminiscent of fiber bundles (see Vector Fields, Vector Bundles, and Fiber Bundles), and is also rather similar to the concept of covering spaces (see Covering Spaces) in that we have some kind of space “over” every point of our “base” scheme. However, unlike those two earlier concepts, the spaces which make up our fibers may now vary as the points vary.
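To see fibers varying concretely, consider the standard example (not from the discussion above) of $X=\text{Spec}(\mathbb{Z}[x]/(x^{2}+1))$, the Gaussian integers, over $S=\text{Spec}(\mathbb{Z})$: the fiber over a prime $(p)$ is $\text{Spec}(\mathbb{F}_{p}[x]/(x^{2}+1))$, whose number of points depends on how $x^{2}+1$ factors modulo $p$. A small Python sketch counting these points:

```python
# Fiber of Spec Z[x]/(x^2+1) over the prime (p) of Spec Z: its points
# correspond to the irreducible factors of x^2 + 1 over F_p. For odd p,
# two roots mod p means x^2 + 1 splits into two linear factors (two points
# in the fiber); no roots means it stays irreducible (one point). For
# p = 2 it is (x+1)^2, again a single ("ramified") point.
def fiber_size(p):
    roots = [a for a in range(p) if (a * a + 1) % p == 0]
    return 2 if len(roots) == 2 else 1

assert fiber_size(5) == 2    # x^2+1 = (x-2)(x-3) mod 5: two points
assert fiber_size(7) == 1    # x^2+1 irreducible mod 7: one point
assert fiber_size(2) == 1    # (x+1)^2 mod 2: one point
```

The fiber genuinely changes shape as the point $(p)$ moves around the base: two points over $p\equiv 1\bmod 4$, one point otherwise.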

Actually, the concept that this special case of fiber product and base change should bring to mind is that of a moduli space (see The Moduli Space of Elliptic Curves), where every point represents a space, and the spaces vary as the points vary. Or, as we worded it in The Moduli Space of Elliptic Curves, every point of the moduli space (given by the base scheme) corresponds to a space (given by the fiber), and the moduli space tells us how these spaces vary, so that spaces which are similar to each other in some way correspond to points in the moduli space that are close together.

The lecture notes of Andreas Gathmann listed among the references below contain some nice diagrams to help visualize the idea of the fiber product and base change (these can be found in chapter 5 of the 2002 version). To see these ideas in action, one can look at the article Arithmetic on Curves by Barry Mazur (also among the references) which discusses, among other things, the approach taken by Gerd Faltings in proving the famous conjecture of Louis J. Mordell which says that there is a finite number of rational points on a curve of genus greater than $1$.

References:

Grothendieck’s Relative Point of View on Wikipedia

Arithmetic on Curves by Barry Mazur

Algebraic Geometry by Andreas Gathmann

The Rising Sea: Foundations of Algebraic Geometry by Ravi Vakil

Algebraic Geometry by Robin Hartshorne

# Varieties and Schemes Revisited

In Basics of Algebraic Geometry we introduced the idea of varieties and schemes as being kinds of “shapes” defined by polynomials (or rings, more generally) in some way. In this post we discuss the definitions of these concepts in more technical detail, and introduce other important concepts related to algebraic geometry as well.

##### I. Preliminaries: Affine Space, Algebraic Sets and Ringed Spaces

Affine $n$-space, written $\mathbb{A}^{n}$, is the set of all $n$-tuples of elements of a field $k$, i.e.

$\displaystyle \mathbb{A}^{n}=\{(a_{1},...,a_{n})|a_{i}\in k \text{ for }1\leq i\leq n\}$.

An algebraic set is a subset of $\mathbb{A}^{n}$ that is the zero set $Z(T)$ of some set $T$ of polynomials, i.e. a set $Y=Z(T)$, where

$\displaystyle Z(T)=\{P\in \mathbb{A}^{n}|f(P)=0 \text{ for all } f\in T\}$.
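Over a finite field the zero set $Z(T)$ can be enumerated directly. Here is a brute-force Python sketch (the field $\mathbb{F}_{5}$ and the polynomials are illustrative choices; the post works over a general field $k$, a finite field just makes the set enumerable):

```python
from itertools import product

# Brute-force computation of the zero set Z(T) in A^2 over F_p.
p = 5
def Z(polys):
    return {pt for pt in product(range(p), repeat=2)
            if all(f(*pt) % p == 0 for f in polys)}

# The "circle" x^2 + y^2 - 1 = 0 in A^2 over F_5:
circle = Z([lambda a, b: a*a + b*b - 1])
assert (1, 0) in circle and (0, 1) in circle

# Adding a second polynomial (the line x - y = 0) cuts the set down further:
both = Z([lambda a, b: a*a + b*b - 1, lambda a, b: a - b])
assert both <= circle
```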

Intuitively, we want to define a “variety” as some kind of space which “locally” looks like an irreducible algebraic set. “Irreducible” means it cannot be expressed as the union of other algebraic sets. However, we want to think of a variety as more than just a space; we want to think of it as a space with things (namely functions) “living on it”. This leads us to the notion of a ringed space.

A ringed space is simply a pair $(X,\mathcal{O}_{X})$, where $X$ is a topological space and $\mathcal{O}_{X}$ is a sheaf (see Sheaves) of rings on $X$. A morphism of ringed spaces from $(X,\mathcal{O}_{X})$ to $(Y,\mathcal{O}_{Y})$ is given by a continuous map $f: X\rightarrow Y$ and a morphism of sheaves of rings $f^{\#}: \mathcal{O}_{Y}\rightarrow f_{*}\mathcal{O}_{X}$.

Recall that a morphism of sheaves of rings $\varphi:\mathcal{F}\rightarrow \mathcal{G}$ for sheaves of rings $\mathcal{F}$ and $\mathcal{G}$ on $X$ is given by a morphism of rings $\varphi(U): \mathcal{F}(U)\rightarrow \mathcal{G}(U)$ for every open set $U$ of $X$ such that for $V\subseteq U$ we have $\varphi(V)\circ\rho_{U,V}=\rho'_{U,V}\circ\varphi(U)$, where $\rho_{U,V}$ and $\rho'_{U,V}$ are the restriction maps of $\mathcal{F}$ and $\mathcal{G}$ respectively.

We might as well mention locally ringed spaces here, since they will be used to define the concept of schemes later on:

A locally ringed space is a ringed space $(X,\mathcal{O}_{X})$ such that for each point $P$ of $X$, the stalk $\mathcal{O}_{X,P}$ is a local ring (see Localization). A morphism of locally ringed spaces from $(X,\mathcal{O}_{X})$ to $(Y,\mathcal{O}_{Y})$ is given by a continuous map $f: X\rightarrow Y$ and a morphism of sheaves of rings $f^{\#}: \mathcal{O}_{Y}\rightarrow f_{*}\mathcal{O}_{X}$ such that $(f_{P}^{\#})^{-1}(\mathfrak{m}_{X,P})=\mathfrak{m}_{Y,f(P)}$ for all $P$, where $f_{P}^{\#}: \mathcal{O}_{Y,f(P)}\rightarrow \mathcal{O}_{X,P}$ is the map induced on the stalk at $P$.

##### II. Varieties in Three Steps:  Affine Varieties, Prevarieties, and Varieties

We now set out to accomplish our goal of defining “varieties” as spaces that locally look like irreducible algebraic sets. We first start with a ringed space that just “looks like” an irreducible algebraic set:

An affine variety is a ringed space $(X,\mathcal{O}_{X})$ such that $X$ is irreducible, $\mathcal{O}_{X}$ is a sheaf of $k$-valued functions, and $(X,\mathcal{O}_{X})$ is isomorphic, as a ringed space, to an irreducible algebraic set in $\mathbb{A}^{n}$ (equipped with its sheaf of regular functions).

Next, we define a more general kind of ringed space, that is required to look like an irreducible algebraic set only “locally”:

A prevariety is a ringed space $(X,\mathcal{O}_{X})$ such that $X$ is irreducible, $\mathcal{O}_{X}$ is a sheaf of $k$-valued functions, and there is a finite open cover $\{U_{i}\}$ of $X$ such that $(U_{i},\mathcal{O}_{X}|_{U_{i}})$ is an affine variety for all $i$.

We are almost done. However, there is one more nice property that we would like our varieties to have. A topological space $X$ is said to have the Hausdorff property if any two distinct points have disjoint neighborhoods. With the Zariski topology this is almost always impossible; however, there is an analogous notion, which is satisfied if the image of the “diagonal morphism”, sending the point $P$ in $X$ to the point $(P,P)$ in $X\times X$, is closed in $X\times X$. Since there is an analogous notion of “product” in algebraic geometry, we can define the concept of variety as follows:

A variety is a prevariety $X$ such that the image of the diagonal morphism is closed in $X\times X$. In the rest of this post, we will refer to this property as the “algebro-geometric” analogue of the Hausdorff property.

##### III. Schemes

We now define the concept of schemes, which, as we shall show in the next section, generalize the concept of varieties, i.e. varieties are just a special case of schemes. Inspired by the correspondence between the maximal ideals of the “ring of polynomial functions” (with coefficients in an “algebraically closed field” like the complex numbers) of an algebraic set and the points of the algebraic set mentioned in Basics of Algebraic Geometry, we go further and consider a ringed space whose underlying topological space has points corresponding to the prime ideals of a ring (which is not necessarily a ring of polynomials – we might even consider, for example, the ring of ordinary integers $\mathbb{Z}$, or the ring of integers of an algebraic number field –  see Algebraic Numbers).

The spectrum (note that the word “spectrum” has many different meanings in mathematics, and this particular usage is different, say, from that in Eilenberg-MacLane Spaces, Spectra, and Generalized Cohomology Theories) of a ring is a locally ringed space $(\text{Spec}(A),\mathcal{O})$, where $\text{Spec}(A)$ is the set of prime ideals of $A$ equipped with the Zariski topology, and $\mathcal{O}$ is a sheaf on $\text{Spec}(A)$ given by defining $\mathcal{O}(U)$ to be the set of functions $s:U\rightarrow \coprod_{\mathfrak{p}\in U}A_{\mathfrak{p}}$, such that $s(\mathfrak{p})\in A_\mathfrak{p}$ for each $\mathfrak{p}\in U$, and such that for each $\mathfrak{p}\in U$, there is an open set $V\subseteq U$ containing $\mathfrak{p}$ and elements $a,f\in A$ such that for each $\mathfrak{q}\in V$, $f\notin \mathfrak{q}$, and $s(\mathfrak{q})=a/f$ in $A_{\mathfrak{q}}$.
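For rings with only finitely many primes, the underlying set of the spectrum is easy to picture. A tiny Python sketch (an illustrative example: we identify each prime ideal of $\mathbb{Z}/n\mathbb{Z}$ with the prime of $\mathbb{Z}$ that generates it):

```python
# The prime ideals of Z/nZ correspond to the prime divisors of n, so
# Spec(Z/nZ) is a finite set of points, represented here by those primes.
def spec_Zn(n):
    return {p for p in range(2, n + 1)
            if n % p == 0 and all(p % q != 0 for q in range(2, p))}

assert spec_Zn(12) == {2, 3}   # two points: the ideals (2) and (3)
assert spec_Zn(7) == {7}       # Z/7 is a field: a single point
```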

We now proceed to define schemes, closely mirroring how we defined varieties earlier:

An affine scheme is a locally ringed space $(X,\mathcal{O}_{X})$ that is isomorphic as a locally ringed space to the spectrum of some ring.

A scheme is a locally ringed space $(X,\mathcal{O}_{X})$ where every point is contained in some open set $U$ such that $U$ considered as a topological space, together with the restricted sheaf $\mathcal{O}_{X}|_{U}$, is an affine scheme. A morphism of schemes is a morphism as locally ringed spaces.

Finally, to complete the analogy with varieties, we refer to schemes which have the (analogue of the) Hausdorff property as separated schemes.

Note: In some of the (mostly older) literature, what we refer to as schemes in this post are instead referred to as preschemes, in analogy with prevarieties. What they call a scheme is what we refer to as a separated scheme, i.e. a scheme possessing the Hausdorff property. I have no idea at the moment as to why this rather nice terminology was changed, but in this post we stick with the modern convention.

##### IV. Prevarieties and Varieties as Special Kinds of Schemes

We now discuss varieties as special cases of schemes. First we need to define what properties we would like our schemes to have, in order to fit with how we described varieties earlier (as ringed spaces which locally look like irreducible spaces defined by polynomials). Therefore, we have to mimic certain properties of polynomial rings.

We first note that polynomial rings over a field $k$ are finitely generated algebras over $k$. A scheme is said to be of finite type over the field $k$ if it has a finite cover by affine open sets, each isomorphic to the spectrum of some ring which is a finitely generated algebra over $k$. More generally, given a morphism of schemes $X\rightarrow Y$, there is a concept of $X$ being a scheme of finite type over $Y$, but we will leave this to the references for now.

Next we note that polynomial rings over a field are integral domains. This means that whenever there are two polynomials $f$ and $g$ with the property that $fg=0$, then either $f=0$ or $g=0$. A scheme is integral if each of its affine open sets is isomorphic to the spectrum of some ring which is an integral domain. An equivalent condition is for the scheme to be irreducible and reduced (this means that the rings specified above have no nilpotent elements, i.e. elements some power of which is equal to zero).
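To make “reduced” concrete, here is a small Python sketch (an illustrative example, not from the post) finding the nilpotent elements of $\mathbb{Z}/n\mathbb{Z}$:

```python
# Nilpotent elements of Z/nZ: nonzero a with a^k = 0 mod n for some k.
# A scheme is reduced when its rings of functions have no such elements.
def nilpotents(n):
    return [a for a in range(1, n)
            if any(pow(a, k, n) == 0 for k in range(1, n + 1))]

assert nilpotents(12) == [6]   # 6^2 = 36 = 0 mod 12: Spec(Z/12) is not reduced
assert nilpotents(7) == []     # Z/7 is a field, hence reduced
```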

We therefore redefine a prevariety as an integral scheme of finite type over the field $k$. As with the earlier definition, a variety is a prevariety with the (analogue of the) Hausdorff property (i.e. an integral separated scheme of finite type over $k$).

##### V. Conclusion

In conclusion, we have started with essentially the same ideas as the “analytic geometry” of Pierre de Fermat and René Descartes, familiar to high school students everywhere, used to describe shapes such as lines and conics (parabolas, hyperbolas, circles, and ellipses), and so on. From there we generalized to get more shapes, which resemble only these old shapes “locally” (we may also think of these new shapes as being “glued” from the old ones). To maintain certain familiar properties expected of shapes, we impose the analogue of the Hausdorff property. We then obtain the concept of a variety.

But we can generalize much, much farther to more than just polynomial rings. We can define “spaces” which come from rings which need not be polynomial rings, such as the ring of ordinary integers $\mathbb{Z}$ (or more generally algebraic integers – we have actually hinted at these applications of algebraic geometry in Divisors and the Picard Group). We can then have a kind of “geometry” of these rings, which gives us methods analogous to the powerful methods of geometry, which can be applied to branches of mathematics we would not usually think of as being “geometric”, such as number theory, as we have mentioned above. We end this post with quotes from two of the pioneers of modern mathematics (these quotes are also found in the book Algebra by Michael Artin):

“To me algebraic geometry is algebra with a kick.”

-Solomon Lefschetz

“In helping geometry, modern algebra is helping itself above all.”

-Oscar Zariski

References:

Algebraic Variety on Wikipedia

Scheme on Wikipedia

Ringed Space on Wikipedia

Abstract Varieties on Rigorous Trivialities

Schemes on Rigorous Trivialities

Algebraic Geometry by Andreas Gathmann

The Rising Sea: Foundations of Algebraic Geometry by Ravi Vakil

Algebraic Geometry by Robin Hartshorne

Algebra by Michael Artin

# Covering Spaces

In Homotopy Theory we defined the fundamental group of a topological space as the group of equivalence classes of “loops” on the space. In this post, we discuss the fundamental group from another point of view, this time making use of the concept of covering spaces. In doing so, we will uncover some interesting analogies with the theory of Galois groups (see Galois Groups). Galois groups are usually associated with number theory and are not usually thought of as being related to algebraic topology, so one might find these analogies quite surprising and unexpected.

We will start with an example, which we are already somewhat familiar with, the circle. For simplicity, we set the circle to have a circumference equal to $1$. We also consider the real line, which we will think of as being “wrapped” over the circle, like a spring. We may think of this “spring” as casting a “shadow”, which is the circle. See also the following image by user Yonatan of Wikipedia:

Looking at the diagram, we see that we can map the line to the circle by a kind of “projection”. As we move around the line, we “project” to different points on the circle. However, if we move by any distance equal to an integer multiple of the circumference of the circle (which as we said above we have set equal to $1$), we come back to the same point if we project to the circle. At this point we recall that the fundamental group of the circle is the group of integers under addition. We can think of an element of this group (an ordinary integer) as giving the “winding number” of a loop on the circle.

In this example, we refer to the line as a covering space of the circle. Since the line is simply connected (see Homotopy Theory), it is also the universal covering space of the circle. The mapping of one point to another point on the line, such that they both “project” to the same point on the circle, is called a deck transformation. The deck transformations of a covering space form a group, and as hinted at in the discussion in the preceding paragraph, the group of deck transformations of the universal covering space of some topological space $X$ is exactly the fundamental group of $X$.

More generally, a covering space for a topological space $X$ is another topological space $\tilde{X}$ with a continuous surjective map $p: \tilde{X}\rightarrow X$ such that every point of $X$ has a small neighborhood $U$ whose inverse image under $p$ is a disjoint union of open sets in $\tilde{X}$, each mapped homeomorphically onto $U$ by $p$. In the diagram above, the inverse image of the small neighborhood $U$ of $X$ is the disjoint union of the small neighborhoods $S_{1}, S_{2}, S_{3},...$ of $\tilde{X}$.
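The circle example can be checked directly in Python (a small numerical sketch; the tolerance is an arbitrary choice): the covering map $p(t)=(\cos 2\pi t,\sin 2\pi t)$ sends $t$ and $t+n$ to the same point of the circle exactly when $n$ is an integer, and these integer shifts are the deck transformations:

```python
import math

# The covering map p: R -> S^1, p(t) = (cos 2*pi*t, sin 2*pi*t), for the
# circle of circumference 1 as in the text.
def p(t):
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

def close(a, b, eps=1e-12):
    return all(abs(x - y) < eps for x, y in zip(a, b))

# Deck transformations t -> t + n (n an integer) project to the same point:
t = 0.3
for n in (-2, 1, 5):
    assert close(p(t), p(t + n))
# ...while a non-integer shift moves the projected point:
assert not close(p(t), p(t + 0.5))
```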

There are many possible covering spaces for a topological space. Here is another example for the circle (courtesy of user Pappus of Wikipedia):

We can think of this as a circle “covering” another circle. However, the first example above, the line covering the circle, is special: it is a universal covering space, which means that it is a covering space which is simply connected. The word “universal” reflects the fact that this particular covering space also “covers” all the others.

Another example is the torus. Its universal covering space is the plane, and as we recall from The Moduli Space of Elliptic Curves, we can think of the torus as being obtained from the plane by dividing it into parallelograms using a lattice (which is also a group), and then identifying opposite edges of the parallelogram. Hence we can think of the torus as a quotient space (see Modular Arithmetic and Quotient Sets) obtained from the plane. The case of the circle and the line, which we have discussed earlier, is also very similar. Yet another example, which we have discussed in Rotations in Three Dimensions, is that of the $3$-dimensional real projective space $\mathbb{RP}^{3}$ (which is also known in the theory of Lie groups as $\text{SO}(3)$), whose universal covering space is the $3$-sphere $S^{3}$ (which is also known as $\text{SU}(2)$). Similar to the above examples, we can think of $\mathbb{RP}^{3}$ as a quotient space obtained from $S^{3}$ by identifying antipodal points (which are “opposite” points on the sphere which can be connected by a straight line passing through the center) on the sphere. From all these examples, we see that we can think of the universal covering space as being some sort of “unfolding” of the quotient space.
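The torus example can be sketched in a few lines of Python: a point of the torus $\mathbb{R}^{2}/\mathbb{Z}^{2}$ is represented by reducing a point of the plane modulo the lattice, using the fundamental domain $[0,1)\times[0,1)$ (an illustrative choice of representatives):

```python
# The torus as the quotient R^2 / Z^2: two points of the plane that differ
# by a lattice vector project to the same point of the torus.
def project(x, y):
    return (x % 1.0, y % 1.0)

assert project(0.25, 0.75) == project(1.25, -0.25)   # differ by (1, -1) in Z^2
assert project(0.25, 0.75) != project(0.5, 0.75)     # not a lattice shift
```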

A perhaps more abstract way to think of the universal covering space is as the space whose points correspond to homotopy classes (see Homotopy Theory) of paths which start at a certain fixed basepoint (but is free to end on some other point). The set of these endpoints themselves correspond to the points of the topological space which is to be covered. However, we can get to the same endpoint through different paths which are not homotopic, i.e. they cannot be deformed into each other. If we construct a topological space whose points correspond to the homotopy classes of these paths, we will obtain a simply connected space, which is the universal covering space of our topological space.

We now go back to the definition of the fundamental group as the group of deck transformations of the universal covering space. Any covering space (of the same topological space) has its own group of deck transformations. Just as covering spaces can themselves be covered by other covering spaces (and all of them are covered by the universal covering space), the covering spaces of a topological space correspond to subgroups of its fundamental group (since the fundamental group is the group of deck transformations of the universal covering space, which covers all the other covering spaces). In other words, the way that the covering spaces cover each other is reflected in the subgroup structure of the fundamental group. This is reminiscent of the theory of Galois groups, where the subgroup structure of the Galois group sheds light on the way certain fields are contained in other fields. This is the analogy mentioned earlier, and it has inspired many fruitful ideas in modern mathematics – for instance, it was one of the inspirations for the idea of the Grothendieck topos (see More Category Theory: The Grothendieck Topos).

References:

Fundamental group on Wikipedia

Covering Space on Wikipedia

Image by User Yonatan of Wikipedia

Image by User Pappus of Wikipedia

Coverings of the Circle on Youtube

Algebraic Topology by Allen Hatcher

A Concise Course in Algebraic Topology by J. P. May

Universal Covers on The Princeton Companion to Mathematics by Timothy Gowers, June Barrow-Green, and Imre Leader

Sheaves in Geometry and Logic: A First Introduction to Topos Theory by Saunders Mac Lane and Ieke Moerdijk

# Rotations in Three Dimensions

In Rotating and Reflecting Vectors Using Matrices we learned how to express rotations in $2$-dimensional space using certain special $2\times 2$ matrices which form a group (see Groups) we call the special orthogonal group in dimension $2$, or $\text{SO}(2)$ (together with other matrices which express reflections, they form a bigger group that we call the orthogonal group in $2$ dimensions, or $\text{O}(2)$).

In this post, we will discuss rotations in $3$-dimensional space. As we will soon see, rotations in $3$-dimensional space have certain interesting features not present in the $2$-dimensional case, and despite being seemingly simple and mundane, they play very important roles in some of the deepest aspects of fundamental physics.

We will first discuss rotations in $3$-dimensional space as represented by the special orthogonal group in dimension $3$, written as $\text{SO}(3)$.

We recall some relevant terminology from Rotating and Reflecting Vectors Using Matrices. A matrix $A$ is called orthogonal if it preserves the magnitude of (real) vectors: the magnitude of the vector $Av$ must be equal to the magnitude of the vector $v$, for every vector $v$. Equivalently, we may require, for the matrix $A$ to be orthogonal, that it satisfy the condition

$\displaystyle AA^{T}=A^{T}A=I$

where $A^{T}$ is the transpose of $A$ and $I$ is the identity matrix. The word “special” denotes that our matrices must have determinant equal to $1$. Therefore, the group $\text{SO}(3)$ consists of the $3\times3$ orthogonal matrices whose determinant is equal to $1$.

The idea of using the group $\text{SO}(3)$ to express rotations in $3$-dimensional space may be made more concrete using several different formalisms.

One popular formalism is given by the so-called Euler angles. In this formalism, we break down any arbitrary rotation in $3$-dimensional space into three successive rotations. The first, which we denote here by $\varphi$, is a counterclockwise rotation about the $z$-axis. The second, $\theta$, is a counterclockwise rotation about an $x$-axis that has rotated along with the object. Finally, the third, $\psi$, is a counterclockwise rotation about a $z$-axis that, once again, has rotated along with the object. For readers who may be confused, animations of these steps can be found among the references listed at the end of this post.

The matrix which expresses the rotation which is the product of these three rotations can then be written as

$\displaystyle g(\varphi,\theta,\psi) = \left(\begin{array}{ccc} \text{cos}(\varphi)\text{cos}(\psi)-\text{cos}(\theta)\text{sin}(\varphi)\text{sin}(\psi) & -\text{cos}(\varphi)\text{sin}(\psi)-\text{cos}(\theta)\text{sin}(\varphi)\text{cos}(\psi) & \text{sin}(\varphi)\text{sin}(\theta) \\ \text{sin}(\varphi)\text{cos}(\psi)+\text{cos}(\theta)\text{cos}(\varphi)\text{sin}(\psi) & -\text{sin}(\varphi)\text{sin}(\psi)+\text{cos}(\theta)\text{cos}(\varphi)\text{cos}(\psi) & -\text{cos}(\varphi)\text{sin}(\theta) \\ \text{sin}(\psi)\text{sin}(\theta) & \text{cos}(\psi)\text{sin}(\theta) & \text{cos}(\theta) \end{array}\right)$.

The reader may check that, in the case that the rotation is strictly in the $xy$-plane, i.e. when $\theta$ and $\psi$ are zero, we obtain

$\displaystyle g(\varphi,\theta,\psi) = \left(\begin{array}{ccc} \text{cos}(\varphi) & -\text{sin}(\varphi) & 0 \\ \text{sin}(\varphi) & \text{cos}(\varphi) & 0 \\ 0 & 0 & 1 \end{array}\right)$.

Note how the upper left part is an element of $\text{SO}(2)$, expressing a counterclockwise rotation by an angle $\varphi$, as we might expect.

Contrary to the case of $\text{SO}(2)$, which is an abelian group, the group $\text{SO}(3)$ is not an abelian group. This means that for two elements $a$ and $b$ of $\text{SO}(3)$, the product $ab$ may not always be equal to the product $ba$. One can check this explicitly, or simply consider rotating an object along different axes; for example, rotating an object first counterclockwise by 90 degrees along the $z$-axis, and then counterclockwise again by 90 degrees along the $x$-axis, will not end with the same result as performing the same operations in the opposite order.
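As a quick numerical sketch of both claims – that these matrices are orthogonal with determinant $1$, and that the order of rotations matters – here is a short check using numpy (the helper functions are our own, written just for this illustration):

```python
import numpy as np

def rot_z(angle):
    """Counterclockwise rotation about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

def rot_x(angle):
    """Counterclockwise rotation about the x-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

Rz = rot_z(np.pi / 2)   # 90 degrees about the z-axis
Rx = rot_x(np.pi / 2)   # 90 degrees about the x-axis

# Each matrix is in SO(3): A A^T = I and det A = 1.
assert np.allclose(Rz @ Rz.T, np.eye(3))
assert np.isclose(np.linalg.det(Rz), 1.0)

# The two orders of composition give different rotations: SO(3) is not abelian.
assert not np.allclose(Rz @ Rx, Rx @ Rz)
```

The same kind of check with two rotations about the same axis would find that they do commute, which is consistent with the abelian subgroup of rotations about a fixed axis being a copy of $\text{SO}(2)$.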

We now know how to express rotations in $3$-dimensional space using $3\times 3$ orthogonal matrices. Now we discuss another way of expressing the same concept, but using “unitary”, instead of orthogonal, matrices. However, first we must revisit rotations in $2$ dimensions.

The group $\text{SO}(2)$ is not the only way we have of expressing rotations in $2$-dimensions. For example, we can also make use of the unitary (we will explain the meaning of this word shortly) group in $1$-dimension, also written $\text{U}(1)$. It is the group formed by the complex numbers with magnitude equal to $1$. The elements of this group can always be written in the form $e^{i\theta}$, where $\theta$ is the angle of our rotation. As we have seen in Connection and Curvature in Riemannian Geometry, this group is related to quantum electrodynamics, as it expresses the gauge symmetry of the theory.

The groups $\text{SO}(2)$ and $\text{U}(1)$ are actually isomorphic: there is a one-to-one correspondence between the elements of $\text{SO}(2)$ and the elements of $\text{U}(1)$ which respects the group operation. In other words, there is a bijective function $f:\text{SO}(2)\rightarrow\text{U}(1)$ which satisfies $f(ab)=f(a)f(b)$ for all elements $a$, $b$ of $\text{SO}(2)$. When two groups are isomorphic, we may consider them as being essentially the same group. For this reason, both $\text{SO}(2)$ and $\text{U}(1)$ are often referred to as the circle group.
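A short numerical sketch of this isomorphism, assuming numpy is available: the function below reads off the complex number $e^{i\theta}$ from the first column of the rotation matrix, and composing rotations corresponds to multiplying the complex numbers.

```python
import numpy as np

def so2(t):
    """The rotation of the plane counterclockwise by angle t."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def f(A):
    """Map an SO(2) matrix to the corresponding element of U(1)."""
    return A[0, 0] + 1j * A[1, 0]   # cos(t) + i sin(t) = e^{it}

a, b = 0.7, 1.9
A, B = so2(a), so2(b)

# f is a homomorphism: f(AB) = f(A) f(B),
# and its values have magnitude 1, i.e. they lie in U(1).
assert np.isclose(f(A @ B), f(A) * f(B))
assert np.isclose(abs(f(A)), 1.0)
```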

We can now go back to rotations in $3$ dimensions and discuss the group $\text{SU}(2)$, the special unitary group in dimension $2$. The word “unitary” is in some way analogous to “orthogonal”, but applies to vectors with complex number entries.

Consider an arbitrary vector

$\displaystyle v=\left(\begin{array}{c}v_{1}\\v_{2}\\v_{3}\end{array}\right)$.

An orthogonal matrix, as we have discussed above, preserves the quantity (which is the square of what we have referred to earlier as the “magnitude” for vectors with real number entries)

$\displaystyle v_{1}^{2}+v_{2}^{2}+v_{3}^{2}$

while a unitary matrix preserves

$\displaystyle v_{1}^{*}v_{1}+v_{2}^{*}v_{2}+v_{3}^{*}v_{3}$

where $v_{i}^{*}$ denotes the complex conjugate of the complex number $v_{i}$. This is the square of the analogous notion of “magnitude” for vectors with complex number entries.

Just as orthogonal matrices must satisfy the condition

$\displaystyle AA^{T}=A^{T}A=I$,

unitary matrices are required to satisfy the condition

$\displaystyle AA^{\dagger}=A^{\dagger}A=I$

where $A^{\dagger}$ is the Hermitian conjugate of $A$, a matrix whose entries are the complex conjugates of the entries of the transpose $A^{T}$ of $A$.

An element of the group $\text{SU}(2)$ is therefore a $2\times 2$ unitary matrix whose determinant is equal to $1$. Like the group $\text{SO}(3)$, the group $\text{SU}(2)$ is also a group which is not abelian.

Unlike the analogous case in $2$ dimensions, the groups $\text{SO}(3)$ and $\text{SU}(2)$ are not isomorphic. There is no one-to-one correspondence between them. However, there is a homomorphism from $\text{SU}(2)$ to $\text{SO}(3)$ that is “two-to-one”, i.e. there are always two elements of $\text{SU}(2)$ that get mapped to the same element of $\text{SO}(3)$ under this homomorphism. Hence, $\text{SU}(2)$ is often referred to as a “double cover” of $\text{SO}(3)$.
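To make the double cover concrete, here is a numerical sketch using numpy and the Pauli matrices (the function names are our own, not from any library): the standard map sending $U$ to the matrix with entries $R_{ij}=\frac{1}{2}\text{tr}(\sigma_{i}U\sigma_{j}U^{\dagger})$ lands in $\text{SO}(3)$, and $U$ and $-U$ are sent to the same rotation.

```python
import numpy as np

# The Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def su2(axis, angle):
    """An element of SU(2): exp(-i (angle/2) axis.sigma)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    S = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * S

def to_so3(U):
    """The two-to-one homomorphism SU(2) -> SO(3)."""
    R = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = 0.5 * np.real(
                np.trace(paulis[i] @ U @ paulis[j] @ U.conj().T))
    return R

U = su2([0, 0, 1], 2.0)
R = to_so3(U)

# The image is an orthogonal matrix with determinant 1.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# U and -U map to the same rotation: the map is two-to-one.
assert np.allclose(to_so3(-U), R)
```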

In physics, this concept underlies the weird behavior of quantum-mechanical objects described by spinors (such as electrons), which require a rotation of 720, not 360, degrees to return to their original state!

The groups we have so far discussed are not “merely” groups. They also possess another kind of mathematical structure. They describe certain shapes which happen to have no sharp corners or edges. Technically, such a shape is called a manifold, and it is the object of study of the branch of mathematics called differential geometry, certain basic aspects of which we have discussed in Geometry on Curved Spaces and Connection and Curvature in Riemannian Geometry.

For the circle group, the manifold that it describes is itself a circle. The elements of the circle group correspond to the points of the circle. The group $\text{SU}(2)$ is the $3$-sphere, the surface of the ball in $4$-dimensional space (for those who might be confused by the terminology, recall that we are only considering the surface of the ball, not the entire volume, and this surface is a $3$-dimensional, not a $4$-dimensional, object). The group $\text{SO}(3)$ is $3$-dimensional real projective space, written $\mathbb{RP}^{3}$. It is a manifold which can be described using the concepts of projective geometry (see Projective Geometry).
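The identification of $\text{SU}(2)$ with the $3$-sphere can be sketched numerically (with numpy): every matrix of the form $\left(\begin{smallmatrix}a & -\bar{b}\\ b & \bar{a}\end{smallmatrix}\right)$ with $|a|^{2}+|b|^{2}=1$ lies in $\text{SU}(2)$, so elements of $\text{SU}(2)$ correspond to points $(a,b)$ on the unit sphere in $\mathbb{C}^{2}=\mathbb{R}^{4}$.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=4)
v = v / np.linalg.norm(v)              # a random point on the 3-sphere in R^4
a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]

U = np.array([[a, -np.conj(b)],
              [b,  np.conj(a)]])

assert np.isclose(abs(a)**2 + abs(b)**2, 1.0)   # (a, b) lies on the 3-sphere
assert np.allclose(U @ U.conj().T, np.eye(2))   # U is unitary
assert np.isclose(np.linalg.det(U), 1.0)        # U has determinant 1
```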

A group that is also a manifold is called a Lie group (pronounced like “lee”) in honor of the mathematician Marius Sophus Lie who pioneered much of their study. Lie groups are very interesting objects of study in mathematics because they bring together the techniques of group theory and differential geometry, which teaches us about Lie groups on one hand, and on the other hand also teaches us more about both group theory and differential geometry themselves.

References:

Orthogonal Group on Wikipedia

Rotation Group SO(3) on Wikipedia

Euler Angles on Wikipedia

Unitary Group on Wikipedia

Spinor on Wikipedia

Lie Group on Wikipedia

Real Projective Space on Wikipedia

Algebra by Michael Artin

# Some Useful Links on the Hodge Conjecture, Kahler Manifolds, and Complex Algebraic Geometry

I’m going to be fairly busy in the coming days, so instead of the usual long post, I’m going to post here some links to interesting stuff I’ve found online (related to the subjects stated on the title of this post).

In the previous post, An Intuitive Introduction to String Theory and (Homological) Mirror Symmetry, we discussed Calabi-Yau manifolds (which are special cases of Kahler manifolds) and how their interesting properties – namely their Riemannian, symplectic, and complex aspects – figure into the branch of mathematics called mirror symmetry, which is inspired by the famous, and sometimes controversial, proposal for a theory of quantum gravity (and more ambitiously a candidate for the so-called “Theory of Everything”), string theory.

We also mentioned briefly a famous open problem concerning Kahler manifolds called the Hodge conjecture (which was also mentioned in Algebraic Cycles and Intersection Theory). The links I’m going to provide in this post will be related to this conjecture.

As with the post An Intuitive Introduction to String Theory and (Homological) Mirror Symmetry, aside from introducing the subject itself, another of the primary intentions will be to motivate and explore aspects of algebraic geometry such as complex algebraic geometry, and their relation to other branches of mathematics.

Here is the page on the Hodge conjecture, found on the website of the Clay Mathematics Institute:

Hodge Conjecture on Clay Mathematics Institute

We have mentioned before that the Hodge conjecture is one of seven “Millennium Problems” for which the Clay Mathematics Institute is offering a million dollar prize. The page linked to above contains the official problem statement as stated by Pierre Deligne, and a link to a lecture by Dan Freed, which is intended for a general audience and quite understandable. The lecture by Freed is also available on Youtube:

Dan Freed on the Hodge Conjecture at the Clay Mathematics Institute on Youtube

Unfortunately the video of that lecture has messed up audio (although the lecture remains understandable – it’s just that the audio comes out of only one side of the speakers or headphones). Here is another set of videos by David Metzler on Youtube, which explain the Hodge conjecture (along with the other Millennium Problems) to a general audience:

The Hodge conjecture is also related to certain aspects of number theory. In particular, we have the Tate conjecture, which is another conjecture similar to the Hodge conjecture, but more related to Galois groups (see Galois Groups). Alex Youcis discusses it on the following post on his blog Hard Arithmetic:

The Tate Conjecture over Finite Fields on Hard Arithmetic

On the same blog there is also a discussion of a version of the Hodge conjecture called the $p$-adic Hodge conjecture on the following post:

An Invitation to p-adic Hodge Theory; or How I Learned to Stop Worrying and Love Fontaine on Hard Arithmetic

The first part of the post linked to above discusses the Hodge conjecture in its classical form, while the second part introduces $p$-adic numbers and related concepts, some aspects of which were discussed on this blog in Valuations and Completions.

A more technical discussion of the Hodge conjecture, Kahler manifolds, and complex algebraic geometry can be found in the following lecture of Claire Voisin, which is part of the Proceedings of the 2010 International Congress of Mathematicians in Hyderabad, India:

On the Cohomology of Algebraic Varieties by Claire Voisin

More about these subjects will hopefully be discussed on this blog at sometime in the future.

# An Intuitive Introduction to String Theory and (Homological) Mirror Symmetry

String theory is by far the most popular of the current proposals to unify the as of now still incompatible theories of quantum mechanics and general relativity. In this post we will give a short overview of the concepts involved in string theory, but not with the goal of discussing the theory itself in depth (hopefully there will be more posts in the future working towards this task). Instead, we will focus on introducing a very interesting and very beautiful branch of mathematics that arose out of string theory called mirror symmetry. In particular, we will focus on a version of it originally formulated by the mathematician Maxim Kontsevich in 1994 called homological mirror symmetry.

We will start with string theory. String theory started out as a theory of the nuclear forces that held together the protons and neutrons in the nucleus of an atom. It was abandoned later on, due to a more successful theory called quantum chromodynamics taking its place. However, it was soon found that string theory could model the elusive graviton, a particle “carrier” of gravity in the same way that a photon is a particle “carrier” of electromagnetism (the photon is more popularly referred to as a particle of light, but because light itself is an electromagnetic wave, it is also a manifestation of an electromagnetic field), and since then physicists have continued developing string theory, no longer in the sole context of nuclear forces, but as a possible candidate for a working theory of quantum gravity.

The incompatibility of quantum mechanics and general relativity (which is currently our accepted theory of gravity) arises from the nonrenormalizability of gravity. In calculations in quantum field theory (see Some Basics of Relativistic Quantum Field Theory and Some Basics of (Quantum) Electrodynamics), there appear certain “nonsensical” quantities which are made sense of via a “corrective” procedure called renormalization (not to be confused with some other procedures called “normalization”). While the way that renormalization works is not really completely understood at the moment, it is known that this procedure at least “works” – this means that it produces the correct values of quantities, as can be checked via experiment.

Renormalization, while it works for the other forces, fails for gravity. Roughly, this is sometimes described as gravity “wildly fluctuating” at the smallest scales. What we know is that this signals, for us, a lack of knowledge of what physics is like at these extremely small scales (much smaller than the current scale of quantum mechanics).

String theory attempts to solve this conundrum by proposing that particles, at the very smallest scales, are not “particles” at all, but “strings”. This takes care of the problem of fluctuations at the smallest scales, since there is a limit to how small the scale can be, set by the length of the strings. It is perhaps worth noting at this point that the next most popular contender to string theory, loop quantum gravity, tackles this problem by postulating that space itself is not continuous, but “discretized” into units of a certain length. For both theories, this length is predicted to be around $10^{-35}$ meters, a constant quantity which is known as the Planck length.

Over time, as string theory was developed, it became more ambitious, aiming to provide not only the unification of quantum mechanics and general relativity, but also the unification of the four fundamental forces – electromagnetism, the weak nuclear force, the strong nuclear force, and gravity – under one “theory of everything”. At the same time, it needed more ingredients: to account for both the bosons, the particles carrying “forces”, such as photons and gravitons, and the fermions, the particles that make up matter, such as electrons, protons, and neutrons, a new ingredient had to be added, called supersymmetry. In addition, it worked not in the four dimensions of spacetime that we are used to, but instead required ten dimensions (for the “bosonic” string theory, before supersymmetry, the number of dimensions required was a staggering twenty-six)!

How do we explain spacetime having ten dimensions, when we experience only four? It turns out, even before string theory, the idea of extra dimensions was already explored by the physicists Theodor Kaluza and Oskar Klein. They proposed a theory unifying electromagnetism and gravity by postulating an “extra” dimension which was “curled up” into a loop so small we could never notice it. The usual analogy is that of an ant crossing a wire – when the radius of the wire is big, the ant realizes that it can go sideways along the wire, but when the radius of the wire is small, it is as if there is only one dimension that the ant can move along.

So we now have this idea of six curled up dimensions of spacetime, in addition to the usual four. It turns out that there is an enormous number of ways in which these dimensions can be curled up. This phenomenon is called the string theory landscape, and it is one of the biggest problems facing string theory today. What could be the specific “shape” in which these dimensions are curled up, and why are they not curled up in some other way? Some string theorists answer this by resorting to the controversial idea of a multiverse, so that there are actually several existing universes, each with its own way of curling up the extra six dimensions, and we just happen to be in this one because, perhaps, this is the only one where the laws of physics (determined by the way the dimensions are curled up) are able to support life. This kind of reasoning is called the anthropic principle.

In addition to the string theory landscape, there was also the problem of having several different versions of string theory. These problems were perhaps alleviated by the discovery of mysterious dualities. For example, there is the so-called T-duality, where a compactification (a “curling up”) with a bigger radius gives the same laws of physics as a compactification with a smaller, “reciprocal” radius. Not only does the concept of duality connect the different ways in which the extra dimensions are curled up, it also connects the several different versions of string theory! In 1995, the physicist Edward Witten conjectured that this is perhaps because all these different versions of string theory come from a single “mother theory”, which he called “M-theory“.

In 1991, physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes used these dualities to solve a mathematical problem that had occupied mathematicians for decades, that of counting curves on a certain manifold (a manifold is a shape without sharp corners or edges) known as a Calabi-Yau manifold. In the context of Calabi-Yau manifolds, which are some of the shapes in which the extra dimensions of spacetime are postulated to be curled up, these dualities are known as mirror symmetry. With the success of Candelas, de la Ossa, Green, and Parkes, mathematicians would take notice of mirror symmetry and begin to study it as a subject of its own.

Calabi-Yau manifolds are but special cases of Kahler manifolds, which themselves are very interesting mathematical objects because they can be studied using three aspects of differential geometry – Riemannian geometry, symplectic geometry, and complex geometry.

We have already encountered examples of Kahler manifolds on this blog – they are the elliptic curves (see Elliptic Curves and The Moduli Space of Elliptic Curves). In fact elliptic curves are not only Kahler manifolds but also Calabi-Yau manifolds, and they are the only two-dimensional Calabi-Yau manifolds (we sometimes refer to them as “one-dimensional” when we are considering “complex dimensions”, as is common practice in algebraic geometry – this apparent “discrepancy” in counting dimensions arises because we need two real numbers to specify a complex number). In string theory of course we consider six-dimensional (three-dimensional when considering complex dimensions) Calabi-Yau manifolds, since there are six extra curled up dimensions of spacetime, but it is often fruitful to study the other cases as well, especially the simpler ones, since they can serve as our guide for the study of the more complicated cases.

Riemannian geometry studies Riemannian manifolds, which are manifolds equipped with a metric tensor, which intuitively corresponds to an “infinitesimal distance formula” dependent on where we are on the manifold. We have already encountered Riemannian geometry before in Geometry on Curved Spaces and Connection and Curvature in Riemannian Geometry. There we have seen that Riemannian geometry is very important in the mathematical formulation of general relativity, since in this theory gravity is just the curvature of spacetime, and the metric tensor expresses this curvature by showing how the formula for the infinitesimal distance between two points (actually the infinitesimal spacetime interval between two events) changes as we move around the manifold.

Symplectic geometry, meanwhile, studies symplectic manifolds. If Riemannian manifolds are equipped with a metric tensor that measures “distances”, symplectic manifolds are equipped with a symplectic form that measures “areas”. The origins of symplectic geometry are actually related to William Rowan Hamilton’s formulation of classical mechanics (see Lagrangians and Hamiltonians), as developed later on by Henri Poincare. There the object of study is phase space, which gives the state of a system based on the position and momentum of the objects that comprise it. It is this phase space that is expressed as a symplectic manifold.

Complex geometry, following our pattern, studies complex manifolds. These are manifolds which locally look like $\mathbb{C}^{n}$, in the same way that ordinary differentiable manifolds locally look like $\mathbb{R}^{n}$. Just as Riemannian geometry has metric tensors and symplectic geometry has symplectic forms, complex geometry has complex structures, mappings of tangent spaces with the property that applying them twice is the same as multiplication by $-1$, mimicking the usual multiplication by the imaginary unit $i$ on the complex plane.
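A minimal numerical sketch of a complex structure, assuming numpy is available: on $\mathbb{R}^{2}$ (thought of as $\mathbb{C}$), the matrix of counterclockwise rotation by $90$ degrees plays the role of multiplication by $i$, and applying it twice is multiplication by $-1$.

```python
import numpy as np

# J acts on R^2 the way multiplication by i acts on C:
# (x, y) ~ x + iy, and i(x + iy) = -y + ix ~ (-y, x).
J = np.array([[0, -1],
              [1,  0]])

# Applying J twice is multiplication by -1, mimicking i^2 = -1.
assert np.allclose(J @ J, -np.eye(2))
```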

Complex manifolds are not only part of differential geometry, they are also often studied using the methods of algebraic geometry! We recall (see Basics of Algebraic Geometry) that algebraic geometry studies varieties and schemes, which are shapes such as lines, conic sections (parabolas, hyperbolas, ellipses, and circles), and elliptic curves, that can be described by polynomials (their modern definitions are generalizations of this concept). In fact, many Calabi-Yau manifolds can be described by polynomials, such as the following example, due to user Andrew J. Hanson of Wikipedia:

This is a visualization (actually a sort of “cross section”, since we can only display two dimensions and this object is actually six-dimensional) of the Calabi-Yau manifold described by the following polynomial equation:

$\displaystyle V^{5}+W^{5}+X^{5}+Y^{5}+Z^{5}=0$

This polynomial equation (known as the Fermat quintic) actually describes the Calabi-Yau manifold in projective space using homogeneous coordinates. This means that we are using the concepts of projective geometry (see Projective Geometry) to include “points at infinity”.
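We can verify with a few lines of pure Python that, for instance, the point with homogeneous coordinates $[1:-1:0:0:0]$ lies on the Fermat quintic, and that the defining condition is unchanged by rescaling the coordinates, which is why the equation makes sense in projective space:

```python
def fermat_quintic(v, w, x, y, z):
    """The defining polynomial of the Fermat quintic."""
    return v**5 + w**5 + x**5 + y**5 + z**5

# The point [1 : -1 : 0 : 0 : 0] lies on the quintic: 1 + (-1) = 0.
point = (1, -1, 0, 0, 0)
assert fermat_quintic(*point) == 0

# Homogeneity: rescaling all coordinates by t rescales the value by t^5,
# so the zero set is well-defined on homogeneous coordinates.
t = 3
scaled = tuple(t * c for c in point)
assert fermat_quintic(*scaled) == 0
```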

We note at this point that Kahler manifolds and Calabi-Yau manifolds are interesting in their own right, even outside of the context of string theory. For instance, we have briefly mentioned in Algebraic Cycles and Intersection Theory the Hodge conjecture, one of seven “Millennium Problems” for which the Clay Mathematics Institute is currently offering a million-dollar prize, and it concerns Kahler manifolds. Perhaps most importantly, it “unifies” several different branches of mathematics; as we have already seen, the study of Kahler manifolds and Calabi-Yau manifolds involves Riemannian geometry, symplectic geometry, complex geometry, and algebraic geometry. The more recent version of mirror symmetry called homological mirror symmetry further adds category theory and homological algebra to the mix.

Now what mirror symmetry more specifically states is that a version of string theory called Type IIA string theory, on a spacetime with extra dimensions compactified onto a certain Calabi-Yau manifold $V$, is the same as another version of string theory, called Type IIB string theory, on a spacetime with extra dimensions compactified onto another Calabi-Yau manifold $W$, which is “mirror” to the Calabi-Yau manifold $V$.

The statement of homological mirror symmetry (which is still conjectural, but mathematically proven in certain special cases) expresses the idea of the previous paragraph as follows (quoted verbatim from the paper Homological Algebra of Mirror Symmetry by Maxim Kontsevich):

Let $(V,\omega)$ be a $2n$-dimensional symplectic manifold with $c_{1}(V)=0$ and $W$ be a dual $n$-dimensional complex algebraic manifold.

The derived category constructed from the Fukaya category $F(V)$ (or a suitably enlarged one) is equivalent to the derived category of coherent sheaves on a complex algebraic variety $W$.

The statement makes use of the language of category theory and homological algebra (see Category Theory, More Category Theory: The Grothendieck Topos, Even More Category Theory: The Elementary Topos, Exact Sequences, More on Chain Complexes, and The Hom and Tensor Functors), but the idea that it basically expresses is that there exists a relation between the symplectic aspects of the Calabi-Yau manifold $V$, as encoded in its Fukaya category, and the complex aspects of the Calabi-Yau manifold $W$, as encoded in its category of coherent sheaves (see Sheaves and More on Sheaves). As we have said earlier, the subjects of algebraic geometry and complex geometry are closely related, and hence the language of sheaves shows up in (and is an important part of) both subjects. The concept of derived categories, which generalize derived functors like the Ext and Tor functors, allows us to relate the two categories, which otherwise would be expressing different concepts. Inspired by string theory, therefore, we now have a deep and beautiful idea in geometry, relating its different aspects.

Is string theory the correct way towards a complete theory of quantum gravity, or the so-called “theory of everything”? As of the moment, we don’t know. Quantum gravity is a very difficult problem, and the scales involved are still far out of our reach – in order to probe smaller and smaller scales we need particle accelerators with higher and higher energies, and right now the technologies that we have are still very, very far from the scales which are relevant to quantum gravity. Still, it is hoped for that whatever we find in experiments in the near future, not only in the particle accelerators but also in the radio telescopes that look out into space, will at least guide us towards the correct path.

There are some who believe that, in the absence of definitive experimental evidence, mathematical beauty is our next best guide. And, without a doubt, string theory is related to, and has inspired, some very beautiful and very interesting mathematics, including that which we have discussed in this post. Still, physics, like all natural science, is empirical (based on evidence and observation), and hence it is ultimately physical evidence that will be the judge of correctness. It may yet turn out that string theory is wrong, and that it is a different theory which describes the fundamental physical laws of nature, or that it needs drastic modifications to its ideas. This will not invalidate the mathematics that we have described here, any more than the discoveries of Copernicus invalidated the mathematics behind the astronomical model of Ptolemy – in fact this mathematics not only outlived the astronomy of Ptolemy, but served the theories of Copernicus, and his successors, just as well. Hence we cannot really say that the efforts of Ptolemy were wasted, since even though his scientific ideas were shown to be wrong, still his mathematical methods were found very useful by those who succeeded him. Thus, while our current technological limitations prohibit us from confirming or ruling out proposals for a theory of quantum gravity such as string theory, there is still much to be gained from such continued efforts on the part of theory, while experiment is still in the process of catching up.

Our search for truth continues. Meanwhile, we have beauty to cultivate.

References:

String Theory on Wikipedia

Mirror Symmetry on Wikipedia

Homological Mirror Symmetry on Wikipedia

Calabi-Yau Manifold on Wikipedia

Kahler Manifold on Wikipedia

Riemannian Geometry on Wikipedia

Symplectic Geometry on Wikipedia

Complex Geometry on Wikipedia

Fukaya Category on Wikipedia

Coherent Sheaf on Wikipedia

Derived Category on Wikipedia

Image by User Andrew J. Hanson of Wikipedia

Homological Algebra of Mirror Symmetry by Maxim Kontsevich

The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory by Brian Greene

String Theory by Joseph Polchinski

String Theory and M-Theory: A Modern Introduction by Katrin Becker, Melanie Becker, and John Schwarz