The wake of a duck

We all know that when a duck swims steadily on a lake, it leaves a wedge-shaped wake behind it. What’s less obvious is that the wake always has the same angle, about 39°, regardless of the size or speed of the duck, provided that the water is deep enough. The same is true of any other swimmer or ship. This phenomenon was studied by Lord Kelvin over a hundred years ago, and related problems have been of interest to shipbuilders ever since.

The full explanation of where the 39° comes from needs quite a lot of physics, and is in fact given in the third year of the maths course at Cambridge! However, the basic idea comes down to a balance between how fast the duck is travelling and how fast surface waves travel. The duck disturbs the water around it, and these disturbances travel outwards as waves. If the duck were sitting still, the disturbances would simply spread out as concentric circles. But because the duck is moving, each new circle is centred on a different point along the duck’s path:

Figure 1: The wake behind a duck is created by all of the waves generated by the duck at its previous positions interacting with each other.

The big black dot on the right is the current position of the duck. The centres of the circles are previous positions of the duck. For example, the red circle shows the position of waves that were generated when the duck was at the red point. The radius of each circle is equal to the wave speed multiplied by the time since the duck was at that position.

However, water waves do not have a fixed wave speed, and in this way they are unlike electromagnetic waves in vacuo. Light travelling through a vacuum has a constant wave speed c, related to the frequency f and wavelength λ through the equation c = fλ. It is more conventional to work with the angular wavenumber k = 2π/λ and the angular frequency ω = 2πf, and to rewrite this formula as ω = ck. (We will now drop the adjective ‘angular’; when we say ‘frequency’ we shall mean ω, not f.) The speed of light is a constant, and the frequency is proportional to the wavenumber.

This is not the case for water waves: for deep water, it can be shown that the frequency depends on the wavenumber according to the formula ω = √(gk), where g is the gravitational acceleration (approximately 10 m s⁻²). We then define two different wave speeds. The phase velocity is defined as cp = ω/k = √(g/k), and is the speed at which wave crests travel. On the other hand, the group velocity is defined as the derivative cg = dω/dk = ½√(g/k), and is the speed at which wave packets travel. Note that for EM waves the phase and group velocities are the same, but for water waves the group velocity is equal to half the phase velocity.
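
As a quick numerical check of these formulae, here is a minimal sketch in Python (the 1 m wavelength is just an arbitrary example):

    import math

    g = 9.81                          # gravitational acceleration, m/s^2
    wavelength = 1.0                  # an arbitrary example wavelength, m

    k = 2 * math.pi / wavelength      # wavenumber, rad/m
    omega = math.sqrt(g * k)          # deep-water dispersion relation, omega = sqrt(gk)
    c_phase = omega / k               # phase velocity, sqrt(g/k)
    c_group = 0.5 * math.sqrt(g / k)  # group velocity, d(omega)/dk

    print(c_phase, c_group, c_group / c_phase)   # the last ratio is 0.5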

Figure 2: Phase and group velocity.

This dependence of the wave speed on the wavenumber is called dispersion, and we say that water waves are dispersive, while EM waves in vacuo are non-dispersive. Famously, however, EM waves are dispersive in certain media such as glass, which is why a glass prism can split a beam of white light up into its components of different frequencies and wavenumbers (figure 3).

Figure 3: Light dispersion of a mercury-vapor lamp with a prism made of flint glass. (Credit: Wikipedia user D-Kuru)

I have been a little sloppy with language: the phase and group velocities are velocities, and hence vectors, not scalars. The direction of each vector is simply the direction of propagation of the wave: radially outwards from the point of generation.

Going back to the duck problem, in order to calculate the radii of the circles in figure 1 we need to identify the wave speed, and therefore the wavenumber, that we care about. The key idea is that the waves at the edge of the wake should move with the same phase velocity as the duck; otherwise, they would not follow the duck and the wake would not be steady. The radius of each circle, on the other hand, is equal to the group velocity multiplied by the elapsed time.
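
Here is a minimal numerical sketch of how these two ingredients produce the 39° wedge. It refines the picture above slightly by letting waves propagate at an angle θ to the track of the duck, which moves at speed V: for such a wave to keep station with the duck, its phase speed must equal V cos θ, the component of the duck’s speed along the wave direction, and the corresponding wave energy then travels at half that speed. Maximising over θ the angle that such a wave packet subtends at the duck recovers a half-angle of about 19.5°, i.e. a full wedge of about 39°:

    import math

    # A wave generated a time t ago, when the duck (speed V) was a distance V*t
    # behind its current position, propagates at an angle theta to the track.
    # Phase-matching gives it a phase speed V*cos(theta), so its energy has
    # travelled a distance (V*cos(theta)/2)*t.  V and t cancel, so set both to 1.

    def subtended_angle(theta):
        behind = 1.0 - 0.5 * math.cos(theta) ** 2            # distance behind the duck
        sideways = 0.5 * math.cos(theta) * math.sin(theta)   # distance to the side
        return math.atan2(sideways, behind)

    thetas = [i * (math.pi / 2) / 10000 for i in range(10001)]
    half_angle = max(subtended_angle(theta) for theta in thetas)

    print(math.degrees(half_angle))       # about 19.47 degrees
    print(2 * math.degrees(half_angle))   # full wedge angle, about 39 degrees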

Translating some technical terms into Chinese

I’ve sometimes needed to describe my work to people in Chinese, and have found that there simply isn’t the vocabulary to do so. Hence, I’ve used the following translations. Not all of them are my own invention, and some of them are borrowed from Japanese.

  • granular mechanics: 粒體力學
  • granular rheology: 粒體流變
  • (granular) fingering instability: (粒體)指化不穩定性
  • granular collapse: 粒體崩塌
  • boundary layer: 邊界層
  • boundary layer separation: 邊界層分離
  • discrete element method (or discrete particle method, discrete particle model): 個別要素法 or 離散要素法 (from Japanese)
  • (non-speech) sound recognition: 非語音辨識
  • data engineering: 數據工程

Permutation-based career

My PhD thesis ended up being largely about discrete particle modelling.

I’ve recently joined a team of developers working on platforms and data management.

I’m looking forward to my future jobs working on disease management programmes, manic depressive psychosis, the Pakistan Meteorological Department, and the Mouse Phenome Database (which is so cool).

Shakespeare in Anglish

Shall I withsame thee to a summer’s day?
Thou art more lovely and more timeworthly.
Rough winds do shake the Bloommonth buds affray,
And summer’s lease hath all too short a day.
Sometime too hot the eye of heaven shines,
And often is his gold withwoven dimmed.
And every fair from fair sometime bends down,
By luck, or by kind’s ne’er-still rill, untrimmed;
But thy undying summer shall not fade,
Nor lose besitting of that fair thou ow’st,
Nor shall death brag thou wand’rest in his shade,
When in undying lines to Time thou grow’st.
So long as men can breathe, or eyes can see,
So long lives this, and this gives life to thee.

Reflections on the LGBT+ Mathmos community (LGBT+ History Month)

In June 2018, a couple of friends and I started the LGBT+ Mathmos mailing list. Since Michaelmas, we have been running fortnightly social gatherings at the CMS, with tea, coffee and cake very generously provided by the Faculty. I’m very pleased to see that our events have been attended by people at all levels, from undergraduates to postdocs and lecturers, and both by LGBT+ people and by allies.

We founded the group partly in response to DAMTP’s appointment of Professor Aron Wall, but it had been something that I’d wanted to do for a long time. I certainly remember the feeling of isolation, so it’s excellent that this network now exists for people to get together informally and know that they aren’t alone.

Hopefully, we can grow from being a group of friends meeting fortnightly for coffee into a fully-fledged student society with speakers, mentoring schemes and outreach activities. (As far as I can tell, the University does not yet have a society for LGBT+ issues in maths or science.) Several undergraduates have already put their names forward to help run things, so I look forward to seeing what they can accomplish.

University of Cambridge, Faculty of Mathematics: Celebrating LGBT History Month, February 2019

Me with an LGBT+ flag at the CMS

Historical and philosophical contexts of the calculus of variations

The calculus of variations is concerned with finding functions that extremise (maximise or minimise) a particular quantity. A classic example is the catenary problem: what shape does a chain take when hung between two points? It is the unique shape that minimises the potential energy of the chain; this shape is called a catenary, and is described by the cosh function. The idea that the potential energy of a hanging chain should be minimised is a variational principle. Another example of a variational principle is the notion that a soap bubble or water balloon should take the shape that minimises its surface area for the volume it encloses, namely a sphere. The variational principles in both examples predict the same shapes as those one would find by constructing force-balance arguments on line or surface elements, but the variational formulations are far simpler to describe and implement.
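
For concreteness, the catenary problem can be written out as follows (a sketch, taking a chain of uniform line density ρ and fixed length ℓ hung between two given points): among shapes y(x) passing through the two endpoints and satisfying the length constraint ∫ √(1 + y′²) dx = ℓ, minimise the potential energy

    E[y] = ρg ∫ y √(1 + y′²) dx.

The minimising shape turns out to be y(x) = c cosh((x − x₀)/c) + y₁, with the constants c, x₀ and y₁ fixed by the endpoints and the length.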

The idea that physical theories might be summarised by neat variational principles has been around since antiquity. Such theories are aesthetically pleasing in their simplicity, and in line with the principle of parsimony (or Occam’s razor).

However, there is a major difference between the above examples and the principle of least action. In the above problems, the independent variables are spatial, and the problems describe a steady state. The principle of least action, which concerns the evolution of particle motions in time, appears to require knowledge about the future. This is metaphysically troubling even today.

Optics and Fermat’s principle

In the early 1600s, a number of scientists, including Willebrord Snellius in 1621, independently discovered an empirical relationship between the angles of incidence and refraction when a beam of light passes through a boundary between two different materials, which we now know as Snell’s law. In a 1662 letter, Pierre de Fermat showed that, under certain assumptions about the speed of light in different media, Snell’s law implies that the path taken by a ray between two given points is the one of minimal travel time, and conversely, that a ray taking a path of minimal travel time obeys Snell’s law at the interface. Fermat’s argument, however, assumes that light travels more slowly in denser media. We now know this to be true, but actual experimental evidence that light in vacuo travels at a finite speed was not available until 1676.
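
In modern notation, a sketch of the argument runs as follows. Suppose light travels from a point A in medium 1, where its speed is v₁, to a point B in medium 2, where its speed is v₂, crossing the flat interface at a point a distance x along it. If a and b are the perpendicular distances of A and B from the interface, and d is their separation measured along it, the travel time is

    T(x) = √(a² + x²)/v₁ + √(b² + (d − x)²)/v₂.

Setting dT/dx = 0 gives sin θ₁/v₁ = sin θ₂/v₂, where θ₁ and θ₂ are the angles of incidence and refraction: this is Snell’s law, with each medium’s refractive index given by n = c/v.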

Fermat’s principle of minimal time was criticised by the then-prevalent Cartesian school on two grounds. Firstly, the above assumption about the speed of light was unjustified, and not compatible with René Descartes’ notions that the speed of light in vacuo is infinite and that it is higher in denser media. (These are not necessarily contradictory statements: the mathematical machinery for comparing infinite or infinitesimal quantities was concurrently being developed, although Newton’s Principia was not yet published and the calculus would not be formalised for another century or two.) A more fundamental criticism of Fermat’s principle was that it is teleological: why does light ‘choose’ to take a time-minimising path, and how does it ‘know’ how to find such a path in advance? Why should it ‘choose’ to minimise travel time and not some other quantity, such as distance (which would give a straight line)? Claude Clerselier, a Cartesian critic of Fermat, wrote in reply:

… The principle which you take as the basis for your proof, namely that Nature always acts by using the simplest and shortest paths, is merely a moral, and not a physical one. It is not, and cannot be, the cause of any effect in Nature.

In other words, although Fermat’s principle was mathematically equivalent to Snell’s law, and supported by experiment, it was not considered a satisfactory description of a physical basis behind Snell’s law, as no physical mechanism had been offered.

Particle mechanics and the principle of least action

Newton’s Principia was published in 1687. After some initial controversy of their own, Newton’s ideas had become accepted by the time of Maupertuis and Euler. Newton’s formulation of particle mechanics, including the law of motion F = ma and the inverse-square law of gravitation, gives a mathematical foundation for Kepler’s (empirical) laws of planetary motion.

An important development came in the 1740s with the principle of least action, introduced by Pierre Louis Maupertuis and Leonhard Euler. Maupertuis defined action S as an ‘amount of motion’: for a single particle, the action is the momentum mv multiplied by the distance s travelled; for constant speed, s = vt, so the action is S = mv²t. In the absence of a potential, this matches our modern definition of action, up to a factor of 2. (Maupertuis referred to the quantity mv² as the vis viva, or ‘living force’, of the particle.) Studying the velocities of two colliding bodies before and after a collision, Maupertuis showed that the law of conservation of momentum (by then well established) is equivalent to the statement that the final velocities are such that the action of this process is minimised.
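
To see where the factor of 2 comes from: in modern terms, the action of a free particle moving at constant speed v for a time t is S = ∫ ½mv² dt = ½mv²t, which is half of Maupertuis’ mv²t.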

Euler is generally credited with inventing the calculus of variations in an early form, applying it to the study of particle trajectories. (The modern form was later developed by Lagrange, his student, in 1755.) Euler generalised Maupertuis’ definition of action into the modern action integral, and included a new term for potential energy. He showed in 1744 that a particle subject to a central force (such as planetary motion) takes a path (calculated by Newton) that extremises this action, and vice versa. Lagrange later showed more generally that the principle of least action is mathematically equivalent to Newton’s laws.
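
In modern notation (a sketch for a single particle moving in one dimension in a potential V(x)): the action of a trajectory x(t) is S = ∫ L dt, where the Lagrangian is L = ½m(dx/dt)² − V(x). Demanding that S be stationary under small variations of the path yields the Euler–Lagrange equation d/dt(∂L/∂v) = ∂L/∂x, where v = dx/dt; for this Lagrangian, that equation reads m d²x/dt² = −dV/dx, which is exactly Newton’s second law.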

But why is this a sensible definition of action? In fact, what is action?

Maupertuis’ reasoning was that ‘Nature is thrifty in all its actions’, positing that action is a sort of ‘effort’. He was happy to attribute the principle of least action to a God seeking to minimise the effort of motions in the universe. But how does one know to choose this definition of action and not some other? As with refraction, why minimise travel time and not distance? Maupertuis argued that one cannot know to begin with, but that the correct functional needs to be identified.

Fermat and Euler took a rather weaker view, refusing to make any metaphysical interpretation of their variational principles. Fermat stated that his principle is ‘a mathematical regularity from which the empirically correct law can be derived’ (Sklar 2012): this is an aesthetic statement about the theory, but says nothing about its origins.

Why do we find the principle of least action problematic?

Everyone agrees that the principle of least action is mathematically equivalent to Newton’s laws of motion, and both have equivalent status when compared against experiments. However, Newton’s laws are specified as differential equations with initial values (‘start in this state, and forward-march in time, with no memory about your past and no information about your future’). In contrast, the principle of least action is formulated as a boundary value problem (‘get from A to B in time T, accumulating as little action as possible’), governed by the Euler–Lagrange equations. Why are we less comfortable with the latter?

One reason is the question: given that we are at the initial position A, how can we know that we will be at B after time T? This can be resolved by realising that, when we solve the Euler–Lagrange equations, we have not been told what the initial velocity is, and we have the freedom to choose it such that the final position will be B. Thus one can convert between an IVP and a BVP: this is the approach taken by the shooting method for solving BVPs numerically.
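
As an illustration, here is a minimal sketch of the shooting method for a toy problem: a particle falling under gravity, so that d²x/dt² = −g, with the endpoint values chosen arbitrarily. We guess the initial velocity, integrate the IVP forward, and adjust the guess (here by bisection) until the trajectory lands on B at time T.

    g, A, B, T = 9.81, 0.0, 5.0, 2.0   # arbitrary example values

    def x_at_T(v0, n_steps=10000):
        """Integrate the IVP x'' = -g, x(0) = A, x'(0) = v0 up to time T."""
        dt = T / n_steps
        x, v = A, v0
        for _ in range(n_steps):
            v -= g * dt     # simple (symplectic Euler) time-stepping
            x += v * dt
        return x

    # Bisect on the initial velocity: x_at_T(v0) increases with v0.
    lo, hi = -100.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if x_at_T(mid) < B:
            lo = mid
        else:
            hi = mid

    v0 = 0.5 * (lo + hi)
    print(v0)                          # numerically close to the exact answer below
    print((B - A) / T + g * T / 2.0)   # exact initial velocity for this toy problem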

Another reason is perhaps cultural: most of us are taught Newtonian physics before Lagrangian physics. This is pedagogically reasonable, as the Newtonian formulation requires far less mathematical machinery. There is also a technical reason for feeling more comfortable with describing physics through an IVP than a BVP: according to the Picard–Lindelöf theorem, an IVP is guaranteed to have a unique solution, at least on some finite time interval; no such general guarantee can be made for a BVP.

Acknowledgements

The above essay has been guided by Lawrence Sklar’s book, Philosophy and the Foundations of Dynamics.

Type inference for lazy LaTeXing

I am doing some work with asymptotic expansions of the form

 h = h^{(0)} + \epsilon h^{(1)} + O(\epsilon^2)

and I don’t care about second-order terms. The parentheses are there to indicate that these are term labels, not powers. But actually, there’s no need for them: if I ever need to raise something to the zeroth power, I can just write 1, and if I need to raise something to the first power, I don’t need to write the power at all. So there’s no confusion at all in writing h^0 instead of h^{(0)}! If I need to square it, I can write h^{02}. If I need to square h^{(1)}, I can write h^{12}; it’s unlikely I’ll need to take anything to the 12th power.

It’s an awful idea and a sane reviewer would reject it, but it does save time when LaTeXing…
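
A possible compromise, if a sane reviewer does object, is a macro that is almost as quick to type but keeps the parentheses in the output. A hypothetical sketch (the macro name \pert is arbitrary):

    \newcommand{\pert}[2]{#1^{(#2)}}   % so \pert{h}{0} prints as h^{(0)}
    h = \pert{h}{0} + \epsilon \pert{h}{1} + O(\epsilon^2)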

Colourblindness and probability

A female acquaintance of mine was recently surprised to find that both of her sons were colourblind, despite neither parent being colourblind. A natural question to ask is ‘What are the odds?’ This question turns out to be open to interpretation, depending on what we mean by probability and odds.


Primary, secondary and tertiary sources

I am a bit annoyed that scientists don’t always seem to get the difference between primary, secondary and tertiary sources. Consider this situation:

  • Prince (2008) reports that pigs are approximately blue.
  • Quail (2006), Quaffer (2008) and Qi (2009) use the approximation that pigs are blue.
  • Rout (2012) is a review article discussing the aforementioned works.

Which of the following are valid?

  1. ‘Pigs are approximately blue (Prince 2008).’
  2. ‘Pigs are approximately blue (Quail 2006, Quaffer 2008, Qi 2009).’
  3. ‘We use the approximation that pigs are blue (Prince 2008).’
  4. ‘We use the approximation that pigs are blue (Quail 2006, Quaffer 2008, Qi 2009).’
  5. ‘We use the widely-used approximation that pigs are blue (Quail 2006, Quaffer 2008, Qi 2009).’
  6. ‘We use the widely-used approximation that pigs are blue (Rout 2012).’
  7. ‘The approximation that pigs are blue is widely used (Quail 2006, Quaffer 2008, Qi 2009).’
  8. ‘The approximation that pigs are blue is widely used (Rout 2012).’
  9. ‘Many authors, including Quail (2006), Quaffer (2008) and Qi (2009), use the approximation that pigs are blue.’
  10. ‘Many authors, including Quail (2006), Quaffer (2008) and Qi (2009), use the approximation that pigs are blue (Rout 2012).’