The calculus of variations is concerned with finding functions that extremise (maximise or minimise) a particular quantity. A classic example is the *catenary problem*: what shape does a chain take when hung between two points? It is the unique shape that minimises the potential energy of the chain; such a shape is called a *catenary*, and is described by the *cosh* function. The idea that the potential energy of a hanging chain should be minimised is a *variational principle*. Another example of a variational principle is the notion that a soap bubble or water balloon should take the shape of minimal surface area for the volume it encloses, namely a sphere. The variational principles in both examples predict the same shapes as those one would find by constructing force-balance arguments on line or surface elements, but the variational formulations are far simpler to describe and implement.
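For concreteness, the catenary problem can be stated as a variational problem. The following is the standard formulation (the symbols ρ for the chain's linear density, g for gravitational acceleration, and ℓ for the chain's length are conventional, not from the text above):

```latex
% Minimise the potential energy of a chain of linear density \rho
% hanging in gravity g, subject to a fixed total length \ell:
E[y] = \rho g \int_{x_1}^{x_2} y \sqrt{1 + y'^2}\,\mathrm{d}x,
\qquad \int_{x_1}^{x_2} \sqrt{1 + y'^2}\,\mathrm{d}x = \ell.
% The minimiser is the catenary
y(x) = c \cosh\!\left(\frac{x - x_0}{c}\right) + y_0,
% with the constants fixed by the endpoints and the length constraint.
```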

The idea that theories might be summarised by neat variational principles has been around since antiquity. Such theories are aesthetically pleasing in their simplicity, and in line with the principle of parsimony (or Occam’s razor).

However, there is a major difference between the above examples and the principle of least action. In the above problems, the independent variables are *spatial*, and the problems concern a steady state. The principle of least action, which concerns the evolution of particle motions with respect to *time*, appears to require knowledge about the future. This is metaphysically troubling even today.

## Optics and Fermat’s principle

In the early 1600s, a number of scientists, including Willebrord Snellius in 1621, independently discovered an empirical relationship between the angles of incidence and refraction when a beam of light passes through a boundary between two different materials, which we now know as Snell’s law. In a 1662 letter, Pierre de Fermat showed that, under certain assumptions about the speed of light in different media, Snell’s law implies that the path taken by a ray between two given points is that of minimal travel time, and conversely, a ray that takes a path of minimal travel time obeys Snell’s law at the interface. Fermat’s argument, however, assumes that light travels more slowly in denser media. We now know this to be true, but actual experimental evidence that light *in vacuo* travels at a finite speed was not available until 1676.
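The argument can be sketched in the standard textbook form (the geometry and symbols here are illustrative, not Fermat's own notation):

```latex
% A ray travels from A = (0, a) in medium 1 (speed v_1) to B = (d, -b)
% in medium 2 (speed v_2), crossing the interface y = 0 at (x, 0).
% The travel time is
T(x) = \frac{\sqrt{a^2 + x^2}}{v_1} + \frac{\sqrt{b^2 + (d - x)^2}}{v_2}.
% Setting T'(x) = 0 gives
\frac{x}{v_1\sqrt{a^2 + x^2}} = \frac{d - x}{v_2\sqrt{b^2 + (d - x)^2}},
% i.e. Snell's law in the form
\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}.
```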

Fermat’s principle of minimal time was criticised by the prevalent Cartesian school on two grounds. Firstly, the above assumption about the speed of light was unjustified, and not compatible with René Descartes’ notion that the speed of light *in vacuo* is infinite, and higher in denser media. (These are not necessarily contradictory statements: the mathematical machinery for comparing infinite or infinitesimal quantities was concurrently being developed, although Newton’s *Principia* was not yet published and the calculus would not be formalised for another century or two.) A more fundamental criticism of Fermat’s principle was that it is *teleological*: why does light ‘choose’ to take a time-minimising path, and ‘know’ how to find such a path in advance? Why should it ‘choose’ to minimise travel time and not some other quantity, such as distance (which would give a straight line)? Claude Clerselier, a Cartesian critic of Fermat, wrote in reply:

> … The principle which you take as the basis for your proof, namely that Nature always acts by using the simplest and shortest paths, is merely a moral, and not a physical one. It is not, and cannot be, the cause of any effect in Nature.

In other words, although Fermat’s principle was mathematically equivalent to Snell’s law, and supported by experiment, it was not considered a satisfactory description of a physical basis behind Snell’s law, as no physical mechanism had been offered.

## Particle mechanics and the principle of least action

Newton’s *Principia* was published in 1687. After some initial controversy of their own, Newton’s ideas had become accepted by the time of Maupertuis and Euler. Newton’s formulation of particle mechanics, including the law of motion *F = ma* and the inverse square law for gravitation, gives a mathematical foundation for Kepler’s (empirical) laws of planetary motion.

An important development came in the 1740s with the formulation of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler. Maupertuis defined action *S* as an ‘amount of motion’: for a single particle, action is momentum *mv* multiplied by the distance *s* travelled; for constant speed, *s = vt*, so the action is *S = mv*^{2}*t*. In the absence of a potential, this matches our modern definition of action, up to a factor of 2. (Maupertuis referred to the quantity *mv*^{2} as the *vis viva*, or ‘living force’, of the particle.) Studying the velocities of two colliding bodies before and after collision, Maupertuis showed that the law of conservation of momentum (by now well-established) is equivalent to the statement that the final velocities are such that the action of this process is minimised.
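The ‘factor of 2’ can be checked directly for free uniform motion:

```latex
% Maupertuis' action for a particle moving at constant speed v for time t:
S_{\text{Maupertuis}} = mv \cdot s = mv \cdot vt = mv^2 t.
% The modern action with no potential (V = 0, T = \tfrac{1}{2}mv^2):
S = \int_0^t (T - V)\,\mathrm{d}t' = \tfrac{1}{2} m v^2 t,
% so the two definitions agree up to a factor of 2.
```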

Euler is generally credited with inventing the calculus of variations in an early form, applying it to studying particle trajectories. (The modern form was later developed by Lagrange in 1755.) Euler generalised Maupertuis’ definition of action into the modern action integral, and included a new term for potential energy. He showed in 1744 that a particle subject to a central force (such as planetary motion) takes a path (calculated by Newton) that extremises this action, and *vice-versa*. Lagrange later showed more generally that the principle of least action is mathematically equivalent to Newton’s laws.
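In modern notation, the action integral and its extremising condition (the Euler–Lagrange equation) read:

```latex
% The action of a path q(t), with Lagrangian L = T - V:
S[q] = \int_{t_0}^{t_1} L\bigl(q(t), \dot q(t), t\bigr)\,\mathrm{d}t.
% Paths that extremise S satisfy the Euler--Lagrange equation
\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot q}\right)
  - \frac{\partial L}{\partial q} = 0.
```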

But why is this a sensible definition of action? In fact, *what is action*?

Maupertuis’ reasoning was that ‘Nature is thrifty in all its actions’, positing that action is a sort of ‘effort’. He was happy to attribute the principle of least action to a God seeking to minimise the effort of motions in the universe. But how does one know to choose this definition of action and not some other? As with refraction, why minimise travel time and not distance? Maupertuis argues that one *cannot* know to begin with, but that the correct functional needs to be identified.

Fermat and Euler took a rather weaker view, and refused to make any metaphysical interpretation of their variational principles. Fermat stated that his principle is ‘a mathematical regularity from which the empirically correct law can be derived’ (Sklar 2012): this is an *aesthetic* statement about the theory, but says nothing about its origins.

## Why do we find the principle of least action problematic?

Everyone agrees that the principle of least action is mathematically equivalent to Newton’s laws of motion, and both have equivalent status when compared against experiments. However, Newton’s laws are specified as differential equations with initial values (‘start in this state, and forward-march in time, with no memory about your past and no information about your future’). In contrast, the principle of least action is formulated as a boundary value problem (‘get from A to B in time *T*, accumulating as little action as possible’), governed by the Euler–Lagrange equations. Why are we less comfortable with the latter?
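The mathematical equivalence is quick to verify for a single particle in a potential *V*:

```latex
% With L = \tfrac{1}{2} m \dot{x}^2 - V(x), the Euler--Lagrange equation gives
\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot x}\right)
  - \frac{\partial L}{\partial x}
  = m\ddot{x} + V'(x) = 0,
% i.e. Newton's second law m\ddot{x} = -V'(x) = F.
```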

One reason is the question: given that we are at the initial position A, how can we know that we will be at B after time *T*? This can be resolved by realising that when we solve the Euler–Lagrange equations, we have not been told what the initial velocity is, and have the freedom to choose it such that the final position will be B. Thus, one can convert between an initial value problem (IVP) and a boundary value problem (BVP): this is the approach taken by the shooting method for solving BVPs numerically.
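The idea can be sketched numerically. This toy example (all names and parameter values are illustrative, not from the essay) solves the BVP *x″ = −g*, *x*(0) = 0, *x*(*T*) = 0 by shooting: guess an initial velocity, integrate the IVP forward, and bisect on the guess until the endpoint lands on the target.

```python
def integrate(v0, g=9.81, T=2.0, n=1000):
    """Integrate x'' = -g forward from x(0) = 0 with x'(0) = v0,
    using the exact constant-acceleration update per step; return x(T)."""
    dt = T / n
    x, v = 0.0, v0
    for _ in range(n):
        x += v * dt - 0.5 * g * dt * dt  # exact for constant acceleration
        v -= g * dt
    return x

def shoot(target=0.0, v_lo=-50.0, v_hi=50.0, tol=1e-9):
    """Bisect on the unknown initial velocity until x(T) hits the target.
    (x(T) increases monotonically with v0, so bisection converges.)"""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if integrate(v_mid) < target:
            v_lo = v_mid
        else:
            v_hi = v_mid
    return 0.5 * (v_lo + v_hi)

v0 = shoot()
print(v0)  # the exact answer is g*T/2 = 9.81
```

The IVP integrator knows nothing about the endpoint; the outer loop supplies the ‘missing’ initial velocity, which is exactly the freedom described above.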

Another reason is perhaps cultural: most of us are taught Newtonian physics before Lagrangian physics. This is pedagogically reasonable: the Newtonian formulation requires far less mathematical machinery. There is also a technical reason for feeling more comfortable with describing physics through an IVP than a BVP: by the Picard–Lindelöf theorem, an IVP is guaranteed to have a unique solution, at least locally in time; no such guarantee exists for a BVP, which may have no solution or many.

Framed in deistic language, under the Newtonian worldview, ‘God sets up the universe in a particular state and lets it evolve under a set of rules’. Under the Lagrangian worldview, ‘God desires a final state for the universe, and lets the universe take the most efficient route there’. Since the two formulations of classical mechanics are mathematically equivalent, both make the same physical predictions, and experiment cannot distinguish between the two metaphysics.

It is also important to remember that both of these descriptions are *models* of the universe, used to describe or explain observations and to predict future ones. No assertion is being made about the ‘actual’ mechanics of the universe, only that observations are consistent with the predictions of the model – indeed, these models cannot adequately describe observations at very small or very fast scales (governed by quantum mechanics and relativity respectively).

When two models give the same results, we tend to prefer the ‘simpler’ model (per Occam’s razor), out of aesthetics and practical utility. This is not a universal criterion: the Newtonian and Lagrangian descriptions each have applications in which one is ‘simpler’ to use than the other.

## Acknowledgements

The above essay has been guided by Lawrence Sklar’s book, *Philosophy and the Foundations of Dynamics*.