Lagrangians and Hamiltonians

We discussed the Lagrangian and Hamiltonian formulations of physics in My Favorite Equation in Physics, as part of our discussion of the historical development of classical physics just before the dawn of the revolutionary ideas of relativity and quantum mechanics at the turn of the 20th century. In this post we discuss them further and, more importantly, provide some examples.

In order to discuss Lagrangians and Hamiltonians we first need to discuss the concept of energy. Energy is a rather abstract concept, but it can perhaps best be described as a certain conserved quantity – historically, this was how energy was thought of, and it was the motivation for its development by René Descartes and Gottfried Wilhelm Leibniz.

Consider for example, a stone at some height $h$ above the ground. From this we can compute a quantity called the potential energy (which we will symbolize by $V$), which is going to be, in our case, given by

$\displaystyle V=mgh$

where $m$ is the mass of the stone and $g$ is the acceleration due to gravity, which close to the surface of the earth can be considered a constant roughly equal to $9.81$ meters per second per second.

As the stone is dropped from that height, it starts to pick up speed. As its height decreases, its potential energy also decreases. However, it gains a certain quantity called the kinetic energy, which we will write as $T$ and define as

$\displaystyle T=\frac{1}{2}mv^{2}$

where $v$ is the magnitude of the velocity. In our case, since we are considering only motion in one dimension, this is simply the speed of the stone. At any point in the motion of the stone, however, the sum of the potential energy and the kinetic energy, called the total mechanical energy, stays at the same value. This is because the amount by which the potential energy decreases is exactly the amount by which the kinetic energy increases.
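This conservation of the total mechanical energy is easy to verify numerically. Below is a minimal sketch in Python; the mass, the initial height, and the sample times are illustrative values chosen for this example, not quantities from the discussion above.

```python
# Numerical check that V + T stays constant for a falling stone.
# m = 1 kg and h = 20 m are illustrative values.

g = 9.81   # acceleration due to gravity, m/s^2
m = 1.0    # mass of the stone, kg
h = 20.0   # initial height above the ground, m

def energies(t):
    """Potential and kinetic energy t seconds after release from rest."""
    x = h - 0.5 * g * t**2      # height above the ground
    v = g * t                   # speed (magnitude of velocity)
    V = m * g * x               # potential energy
    T = 0.5 * m * v**2          # kinetic energy
    return V, T

# Total mechanical energy at several instants of the fall:
for t in (0.0, 0.5, 1.0, 1.5):
    V, T = energies(t)
    print(f"t={t:.1f}s  V={V:7.2f}J  T={T:7.2f}J  V+T={V+T:.2f}J")
```

At every instant, the printed sum $V+T$ equals the initial potential energy $mgh$, even though $V$ and $T$ individually change.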

The expression for kinetic energy remains the same for any nonrelativistic system. The expression for the potential energy depends on the system, however, and is related to the force as follows:

$\displaystyle F=-\frac{dV}{dx}$.
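As a quick numerical check of this relation (a sketch only, with an illustrative mass), we can differentiate $V = mgx$ by a finite difference and confirm that the resulting force is the constant $-mg$:

```python
# Finite-difference check of F = -dV/dx for the potential V(x) = m*g*x.
# The mass m = 2 kg is an arbitrary illustrative value.

g = 9.81   # m/s^2
m = 2.0    # kg

def V(x):
    return m * g * x

def force(x, dx=1e-6):
    """F = -dV/dx, approximated by a central difference."""
    return -(V(x + dx) - V(x - dx)) / (2 * dx)

print(force(5.0))   # close to -m*g = -19.62 N, independent of x
```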

We now give the definition of the quantity called the Lagrangian (denoted by $L$). It is simply given by

$\displaystyle L=T-V$.

There is a related quantity to the Lagrangian, called the action (denoted by $S$). It is defined as

$\displaystyle S=\int_{t_{1}}^{t_{2}}L dt$.

For a single particle, the Lagrangian depends on the position and the velocity of the particle. More generally, it will depend on the so-called “configuration” of the system, as well as the “rate of change” of this configuration. We will represent these variables by $q$ and $\dot{q}$ respectively (the “dot” notation is the one developed by Isaac Newton to represent the derivative with respect to time; in the notation of Leibniz, which we have used up to now, this is also written as $\frac{dq}{dt}$).

To explicitly show this dependence, we write the Lagrangian as $L(q,\dot{q})$. Therefore we shall write the action as follows:

$\displaystyle S=\int_{t_{1}}^{t_{2}}L(q,\dot{q}) dt$.

The Lagrangian formulation is important because it allows us to make a connection with Fermat’s principle in optics, which is the following statement:

Light always moves in such a way that it minimizes its time of travel.

Essentially, the Lagrangian formulation allows us to restate the good old Newton’s second law of motion as follows:

An object always moves in such a way that it minimizes its action.

In order to make calculations out of this “principle” (strictly speaking, the action is only required to be stationary, not necessarily a minimum), we have to make use of the branch of mathematics called the calculus of variations, which was developed specifically to deal with problems such as these. The calculations are fairly involved, but we end up with the so-called Euler-Lagrange equations:

$\displaystyle \frac{\partial L}{\partial q}-\frac{d}{dt}\frac{\partial L}{\partial\dot{q}}=0$

We are using the notation $\frac{d}{dt}\frac{\partial L}{\partial\dot{q}}$ instead of the otherwise cumbersome notation $\frac{d\frac{\partial L}{\partial\dot{q}}}{dt}$. It is very common notation in physics to write $\frac{d}{dt}$ to refer to the derivative “operator” (see also More Quantum Mechanics: Wavefunctions and Operators).

For a nonrelativistic system, the Euler-Lagrange equations are merely a restatement of Newton’s second law; in fact, we can plug in the expressions for the Lagrangian, the kinetic energy, and the potential energy we wrote down earlier and end up exactly with $F=ma$.
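We can also test the principle of least action directly on this example: the following Python sketch approximates the action $S$ by a finite sum and compares the true free-fall path against perturbed paths with the same endpoints. The discretization and all parameter values are illustrative choices.

```python
# Compare the action S = integral of (T - V) dt of the true free-fall
# path with perturbed paths sharing its endpoints. The true path should
# give the smallest action. All parameters are illustrative.
import math

g, m = 9.81, 1.0
t1, t2, N = 0.0, 1.0, 2000
dt = (t2 - t1) / N

def action(path):
    """Riemann-sum approximation of S, with velocity by forward difference."""
    S = 0.0
    for k in range(N):
        t = t1 + k * dt
        v = (path(t + dt) - path(t)) / dt        # approximate velocity
        S += (0.5 * m * v**2 - m * g * path(t)) * dt
    return S

def true_path(t):
    return -0.5 * g * t**2                       # free fall from rest at x = 0

def perturbed(eps):
    # sin(pi*t) vanishes at t = 0 and t = 1, so the endpoints are unchanged
    return lambda t: true_path(t) + eps * math.sin(math.pi * t)

S_true = action(true_path)
print(S_true, action(perturbed(0.3)))            # the perturbed action is larger
```

However large or small we make the perturbation, the perturbed path always has a larger action than the true free-fall trajectory.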

Why then go to all the trouble of formulating this new language, just to express something we are already familiar with? Well, aside from the “aesthetically pleasing” connection with the very elegant Fermat’s principle, there are numerous advantages to using the Lagrangian formulation. For instance, it exposes the symmetries of the system, as well as its conserved quantities (both of which are very important in modern physics). Also, the configuration is not always simply the position, which means that the formulation can describe systems more complicated than a single particle. Using the concept of a Lagrangian density, it can also describe fields like the electromagnetic field.

We now briefly mention the role of the Lagrangian formulation in quantum mechanics. The probability that a system will be found in a certain state (which we write as $|\phi\rangle$) at time $t_{2}$, given that it was in a state $|\psi\rangle$ at time $t_{1}$, is given by (see More Quantum Mechanics: Wavefunctions and Operators)

$\displaystyle |\langle\phi|e^{-iH(t_{2}-t_{1})}|\psi\rangle|^{2}$

where $H$ is the Hamiltonian (more on this later), and we are working in units where $\hbar=1$. The quantity

$\displaystyle \langle\phi|e^{-iH(t_{2}-t_{1})}|\psi\rangle$

is called the transition amplitude and can be expressed in terms of the Feynman path integral

$\displaystyle \int e^{iS}Dq$.

This is not an ordinary integral, as may be inferred from the different notation using $Dq$ instead of $dq$. What it means is that we sum the quantity inside the integral, $e^{iS}$, over all “paths” taken by our system. This has the rather mind-blowing interpretation that in going from one point to another, a particle takes all paths. One of the best places to learn more about this concept is the book QED: The Strange Theory of Light and Matter by Richard Feynman. This book is adapted from Feynman’s lectures at the University of Auckland, videos of which are freely and legally available online (see the references below).

We now discuss the Hamiltonian. The Hamiltonian is defined in terms of the Lagrangian $L$ by first defining the conjugate momentum $p$:

$\displaystyle p=\frac{\partial L}{\partial\dot{q}}$.

Then the Hamiltonian $H$ is given by the formula

$\displaystyle H=p\dot{q}-L$.

In contrast to the Lagrangian, which is a function of $q$ and $\dot{q}$, the Hamiltonian is expressed as a function of $q$ and $p$. For many basic examples the Hamiltonian is simply the total mechanical energy, with the kinetic energy $T$ now written in terms of $p$ instead of $\dot{q}$ as follows:

$\displaystyle T=\frac{p^{2}}{2m}$.
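As a concrete check (with illustrative values of $q$ and $\dot{q}$), we can carry out this construction numerically for the falling-stone Lagrangian $L = \frac{1}{2}m\dot{q}^{2} - mgq$: the conjugate momentum comes out as $p = m\dot{q}$, and the Hamiltonian comes out as the total mechanical energy $T + V$.

```python
# Legendre transform for the falling stone: L = (1/2) m qdot^2 - m g q.
# The values of q and qdot below are arbitrary illustrative choices.

g, m = 9.81, 1.0

def V(q):
    return m * g * q                     # potential energy

def lagrangian(q, qdot):
    return 0.5 * m * qdot**2 - V(q)

def conjugate_momentum(q, qdot, eps=1e-6):
    # p = dL/d(qdot), approximated by a central difference
    return (lagrangian(q, qdot + eps) - lagrangian(q, qdot - eps)) / (2 * eps)

def hamiltonian(q, qdot):
    p = conjugate_momentum(q, qdot)
    return p * qdot - lagrangian(q, qdot)

q, qdot = 3.0, 2.0
p = conjugate_momentum(q, qdot)          # close to m * qdot
H = hamiltonian(q, qdot)                 # close to T + V = p^2/(2m) + m*g*q
print(p, H)
```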

The advantage of the Hamiltonian formulation is that it shows how the state of the system “evolves” over time. This is given by Hamilton’s equations:

$\displaystyle \dot{q}=\frac{\partial H}{\partial p}$

$\displaystyle \dot{p}=-\frac{\partial H}{\partial q}$

These are differential equations which can be solved to know the value of $q$ and $p$ at any instant of time $t$. One can visualize this better by imagining a “phase space” whose coordinates are $q$ and $p$. The state of the system is then given by a point in this phase space, and this point “moves” across the phase space according to Hamilton’s equations.
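To make this picture concrete, the sketch below integrates Hamilton’s equations for a simple harmonic oscillator, with $H = \frac{p^{2}}{2m} + \frac{1}{2}kq^{2}$. The oscillator, the symplectic Euler integrator, and all parameter values are illustrative choices, not anything from the text above.

```python
# Integrate Hamilton's equations for a harmonic oscillator,
# H = p^2/(2m) + (1/2) k q^2, with the symplectic Euler method.
# m, k, dt, and the initial state are illustrative choices.

m, k = 1.0, 4.0
dt, steps = 0.001, 10000        # integrate from t = 0 to t = 10

q, p = 1.0, 0.0                 # initial state: released from rest at q = 1
for _ in range(steps):
    # Hamilton's equations: qdot = dH/dp = p/m, pdot = -dH/dq = -k*q
    p += -k * q * dt            # update the momentum first...
    q += (p / m) * dt           # ...then the position, using the new momentum

H = p**2 / (2 * m) + 0.5 * k * q**2
print(q, p, H)                  # H stays near its initial value 0.5*k = 2.0
```

The point $(q, p)$ traces out a closed curve (an ellipse) in phase space, and the energy $H$ stays very close to its initial value throughout the motion.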

The Lagrangian and Hamiltonian formulations of classical mechanics may be easily generalized to more than one dimension. We will therefore have several different coordinates $q_{i}$ for the configuration; for the simplest examples, these may refer to the Cartesian coordinates of 3-dimensional space, i.e. $q_{1}=x$, $q_{2}=y$, $q_{3}=z$. We summarize the important formulas here:

$\displaystyle \frac{\partial L}{\partial q_{i}}-\frac{d}{dt}\frac{\partial L}{\partial\dot{q_{i}}}=0$

$\displaystyle H=\sum_{i}p_{i}\dot{q_{i}}-L$

$\displaystyle \dot{q_{i}}=\frac{\partial H}{\partial p_{i}}$

$\displaystyle \dot{p_{i}}=-\frac{\partial H}{\partial q_{i}}$

In quantum mechanics, the Hamiltonian formulation still plays an important role. As described in More Quantum Mechanics: Wavefunctions and Operators, the Schrödinger equation describes the time evolution of the state of a quantum system in terms of the Hamiltonian. However, in quantum mechanics the Hamiltonian is not just a quantity but an operator, whose eigenvalues usually correspond to the observable values of the energy of the system.

In most publications discussing modern physics, the Lagrangian and Hamiltonian formulations are used, in particular for their various advantages. Although we have limited this discussion to nonrelativistic mechanics, both formulations remain very important in relativity. The equations of general relativity, also known as Einstein’s equations, may be obtained by extremizing the Einstein-Hilbert action. Meanwhile, there also exists a Hamiltonian formulation of general relativity called the Arnowitt-Deser-Misner formalism. Even the proposed candidates for a theory of quantum gravity, string theory and loop quantum gravity, make use of these formulations (the Lagrangian formulation seems to be more dominant in string theory, while the Hamiltonian formulation is more dominant in loop quantum gravity). It is therefore vital that anyone interested in learning about modern physics be at least comfortable in the use of this language.

References:

Lagrangian Mechanics on Wikipedia

Hamiltonian Mechanics on Wikipedia

Path Integral Formulation on Wikipedia

The Douglas Robb Memorial Lectures by Richard Feynman

QED: The Strange Theory of Light and Matter by Richard Feynman

Mechanics by Lev Landau and Evgeny Lifshitz

Classical Mechanics by Herbert Goldstein

My Favorite Equation in Physics

My favorite equation in physics is none other than Newton’s second law of motion, often written as

$\displaystyle F=ma$.

I like to call it the “Nokia 3310 of Physics” – the Nokia 3310 was a popular cellular phone model, back in the older days before smartphones became the norm, and which to this day is still well-known for its reliability and its durability. In the same way, Newton’s second law of motion, although superseded in modern physics by relativity and quantum mechanics, is still quite reliable for its purposes, was historical and groundbreaking for its time, and remains the “gold standard” of physical theories for its simplicity and elegance.

In fact, much of modern physics might be said to be just one long quest to “replace” Newton’s second law of motion when it was found out that it didn’t always hold, for example when things were extremely small, extremely fast, or extremely massive, or any combination of the above. Therefore quantum mechanics was developed to describe the physics of the extremely small, special relativity was developed to describe the physics of the extremely fast, and general relativity was developed to describe the physics of the extremely massive. However, a physical theory that could deal with all of the above – a so-called “theory of everything” – has not been developed yet, although a great deal of research is dedicated to this goal.

This so-called “theory of everything” is of course not literally a theory of “everything”. One can think of it instead as just a really high-powered, upgraded version of Newton’s second law of motion that holds even when things are extremely small, extremely fast, and extremely massive.

(Side note: There are usually other things we might ask for in a “theory of everything” too. For instance, we usually want the theory to “unify” the four fundamental forces of electromagnetism, the weak nuclear force, the strong nuclear force, and gravity. As far as we currently understand, all the ordinary forces we encounter in everyday life, in fact all the forces we know of in the universe, are just manifestations of these four fundamental forces. It’s a pretty elegant scientific fact, and we want our theory to be even more elegant by unifying all these forces under one concept.)

All this being said, we look at a few aspects of Newton’s second law of motion. Even those who are more interested in the more modern theories of physics, or who want to pursue the quest for the “theory of everything”, might be expected to have a reasonably solid understanding of, and more importantly an appreciation for, Newton’s second law.

The meaning of the equation is familiar from high school physics: The acceleration (the change in the velocity with respect to time) of an object is directly proportional to the force applied, in the same direction, and is inversely proportional to the mass of the object. Let’s simplify things for a moment and focus only on one dimension of motion, so we don’t have to worry too much about the direction (except forward/backward or upward/downward, and so on). We also assume that the mass of the object is constant.

First of all, we note that, given the definition of acceleration, Newton’s second law of motion is really a differential equation, expressible in the following form:

$\displaystyle F=m\frac{dv}{dt}$

or, since the velocity $v$ is the derivative $\frac{dx}{dt}$ of the position $x$ with respect to the time $t$, we can also express it as

$\displaystyle F=m\frac{d(\frac{dx}{dt})}{dt}$

or, in more compact notation,

$\displaystyle F=m\frac{d^{2}x}{dt^2}$.

We first discuss this form, which is a differential equation for the position. We will go back to the first form later.

The force $F$ itself may have different forms. One particularly simple form is for the force of gravity exerted by the Earth on objects near its surface. In this case we can use the approximate form $F=-mg$, where $g$ is a constant with a value of around $9.81 \text{m}/\text{s}^{2}$. The minus sign is there for purposes of convention, since this force is always in a downward direction, and we take “up” to be the positive direction. We can then take $x$ to be the height of the object above the ground.

We have in this specific case (called “free fall”) the following expression of Newton’s second law:

$\displaystyle -mg=m\frac{d^{2}x}{dt^2}$

$\displaystyle -g=\frac{d^{2}x}{dt^2}$

We can then apply our knowledge of calculus to obtain an expression telling us how the height of the object changes over time. We skip the steps and just give the answer here:

$x(t)=-\frac{1}{2}gt^{2}+v_{0}t+x_{0}$

where $x_{0}$ and $v_{0}$ are constants, respectively called the initial position and the initial velocity, which need to be specified before we can give the height of the object above the ground at any time $t$.
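The steps we skipped can also be sidestepped numerically: the sketch below time-steps $-g = \frac{d^{2}x}{dt^2}$ with the simple Euler method and compares the result against the closed-form solution. The initial values and the step size are illustrative choices.

```python
# Euler time-stepping of -g = d^2x/dt^2, compared with the
# closed-form solution x(t) = -(1/2) g t^2 + v0*t + x0.
# Initial conditions and step size are illustrative choices.

g = 9.81
x0, v0 = 100.0, 5.0             # initial height (m) and upward velocity (m/s)
T, steps = 2.0, 20000           # total time and number of steps
dt = T / steps

x, v = x0, v0
for _ in range(steps):
    x += v * dt                 # dx/dt = v
    v += -g * dt                # dv/dt = -g

exact = -0.5 * g * T**2 + v0 * T + x0
print(x, exact)                 # the two agree to roughly O(dt)
```

Shrinking the step size `dt` brings the numerical answer arbitrarily close to the closed-form one, which is one way to convince oneself that the formula above really does solve the differential equation.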

We go back to the first form we wrote down above to express Newton’s second law of motion as the following differential equation for the velocity:

$\displaystyle F=m\frac{dv}{dt}$

In the case of free fall, this is

$\displaystyle -mg=m\frac{dv}{dt}$

$\displaystyle -g=\frac{dv}{dt}$

This can be solved to obtain the following expression for the velocity at any time $t$:

$v(t)=-gt+v_{0}$

We collect our results here and summarize. By solving Newton’s second law of motion for this particular system, using the methods of calculus, we have obtained the two equations for the position and velocity at any time $t$.

$x(t)=-\frac{1}{2}gt^{2}+v_{0}t+x_{0}$

$v(t)=-gt+v_{0}$

But to obtain the position and velocity at any time $t$, we also need two constants $x_{0}$ and $v_{0}$, which we respectively call the initial position and the initial velocity.

In other words, when we know the “specifications” of a system, such as the law of physics that governs it, and the form of the force in our case, and we know the initial position and the initial velocity, then it is possible to know the position and the velocity at any other point in time.

This is a special case of the following question that always appears in physics, whether it is classical mechanics, quantum mechanics, or most other branches of modern physics:

“Given the state of a system at a particular time, in what state will it be at some other time?”

In classical mechanics, the “state” of a system is given by its position and velocity. Equivalently, it may also be given by its position and momentum. In quantum mechanics it is a little different, and there is this concept often referred to as the “wavefunction” which gives the “state” of a system. In elementary contexts the wavefunction may be thought of as giving the probability of a particle to be found at a specific position (more precisely, this probability is the squared magnitude of the wavefunction). Since quantum mechanics involves probabilities, a variant of this question is the following:

“Given that a system is in a particular state at some particular time, what is the probability that it will be in some other specified state at some other specified time?”

We now go back to classical mechanics and Newton’s second law, and focus on some historical developments. It is perhaps worth mentioning that before Isaac Newton, Galileo Galilei already had ideas on force and acceleration, evident in his book Two New Sciences. Anyway, Newton’s masterpiece Mathematical Principles of Natural Philosophy was where it was first stated in its most familiar form, and where it was used as one of the ingredients needed to put together Newton’s theory of universal gravitation. It was around this time that the study of mechanics became popular among that era’s greatest thinkers.

Meanwhile, also around this time, another branch of physics was gaining ground in popularity. This was the field of optics, which studied the motion of light just as mechanics studied the motion of more ordinary material objects. Just as Newton’s second law of motion, along with the first and third laws, made up the basics of mechanics, the basic law in optics was given by Fermat’s principle, which is given by the following statement:

Light always moves in such a way that it minimizes its time of travel.

It was the goal of the scientists and mathematicians of that time to somehow “unify” these two seemingly separate branches of physics; this was especially inviting since Fermat’s principle seemed even more elegant than Newton’s second law of motion.

While the physical relationship between light and matter would only be revealed with the advent of quantum mechanics, the scientists and mathematicians of the time were at least able to come up with a language for mechanics analogous to the very elegant statement of Fermat’s principle. This was developed over a long period of time by many historical figures such as Pierre de Maupertuis, Leonhard Euler, and Joseph Louis Lagrange. This was fully accomplished in the 19th century by the mathematician William Rowan Hamilton.

This quest gave us many alternative formulations of Newton’s second law; the physics they express is exactly the same, just written in a more elegant language. Although at the time people had no idea about quantum mechanics or relativity, these formulations would become very useful for expressing the newly discovered laws of physics later on. The first of these is called the Lagrangian formulation, and its statement is the following:

An object always moves in such a way that it minimizes its action.

This “action” is a quantity defined as the integral over time of another quantity called the “Lagrangian”, which is usually defined as the difference of the expressions for the kinetic energy and the potential energy – both concepts related to the more familiar formulation of Newton (although developed by his frequent rival Gottfried Wilhelm Leibniz).

Another formulation is called the Hamiltonian formulation, and what it does is give us a way to imagine the time evolution of the “state” of the object (given by its position and momentum) in a “space of states” called the “phase space”. This time evolution is given by a quantity called the Hamiltonian, which is usually defined in terms of the Lagrangian.

The Lagrangian and Hamiltonian formulations of classical mechanics, as we stated earlier, contain no new physics. It is still Newton’s second law of motion. It is still $F=ma$. It is just stated in a new language. However, relativity and quantum mechanics, which do contain new physics, can also be stated in this language. In quantum mechanics, for example, the state at a later time is given by applying a “time evolution operator” defined using the Hamiltonian to a “state vector” representing the current state. Meanwhile, the probability that a certain state will be found in some other specified state can be found using the Feynman path integral, which is defined using the action, or in other words, the Lagrangian.

We have thus reviewed Newton’s second law of motion, one of the oldest laws of physics humankind has discovered since the classical age, and looked at it in the light of newer theories. There will always be new theories, as such is the nature of physics and science as a whole, to evolve, to improve. But there are some ideas in our history that have stood the test of time, and in the cases where they had to be replaced, they have paved the way for their own successors. Such ideas, in my opinion, will always be worth studying no matter how old they become.

References:

Classical Mechanics on Wikipedia

Lagrangian Mechanics on Wikipedia

Hamiltonian Mechanics on Wikipedia

Mechanics by Lev Landau and Evgeny Lifshitz

Classical Mechanics by Herbert Goldstein