If you accept the Axiom of Choice, then it is possible to show the existence of a solution. Finding such a solution may be left as a trivial exercise.

# Category: Maths

## Calculus Beyond College: Analysis and modelling

This post is motivated by a number of discussions that I had at Cambridge open days last week, when I talked to school students who were interested in doing maths at university, and who may have come across the unfamiliar term ‘analysis’ when looking at our course syllabus.

Mathematical analysis is a very large area, but, broadly speaking, it is the study of *limiting processes* and of *approximations*. The basic concept of a limiting process is that, as you make a certain parameter of a problem smaller and smaller (or larger and larger), then an answer that you get, which depends on the parameter, will tend towards a certain value. This idea underlies a lot of the assumptions that we make when we model the real world:

- All materials are deformable to some extent, but we may assume that a rod or surface is *perfectly rigid* if the stiffness of its material is sufficiently high, so that any deformations are negligible compared to the lengthscales of interest in the problem. When considering a block sliding down a slope, we do not care about deformations on a nanometre scale.
- We might ignore air resistance when studying the motion of a projectile. This approximation works provided that the projectile’s inertia and its weight (due to gravity) *dominate* the effects of air resistance. Air resistance is proportional to surface area, so the dominance occurs in the limit of the projectile’s surface area being very small.
- While the wave-like properties of light are more fundamental (they are directly governed by Maxwell’s equations), its particle-like properties come from the limit of the wavelength (about 700 nm for red light) being smaller than other lengthscales of interest. This is why a laser beam acts much like a particle when shone across a room (it is localised, and can reflect cleanly off surfaces), while its wave-like properties may be seen in a double-slit experiment involving narrow slits.

These approximations are quite simple to understand and apply, and they give good agreement with empirical results. However, things are not always so straightforward, especially when there are two limiting processes which have opposite effects.

Analysis gives us the tools to study how these competing limiting processes interact with each other. I won’t discuss their resolution in detail, but I will give a few examples below.

## Division by zero

Consider the function $f(x) = a/x$, where $a$ is some fixed positive real number. This function is defined for $x \neq 0$, and it is positive whenever $x$ is positive. When $x$ is positive and very small, $f(x)$ is very large, since $x$ appears in the denominator. In fact, as $x$ gets closer and closer to 0, $f(x)$ will become unbounded. We therefore say that the *limit* of $f(x)$ as $x$ approaches 0 is infinite: we can write this as

$$\lim_{x \to 0^+} \frac{a}{x} = \infty.$$

Note that we can talk about this limit even though $f(0)$ itself is not actually defined. We talk about the limit of $x$ *going to* 0, rather than *actually setting* $x = 0$, by using the arrow symbol.

But now consider the function $g(x) = x/x$, again defined for $x \neq 0$, since when $x = 0$ the denominator is zero and division by zero is undefined. This function is also positive whenever $x$ is positive, but it behaves very differently under the limit $x \to 0$. In this limit, the numerator also goes to zero. Now $0/0$ is undefined, but note that $g(x) = x/x = 1$ for all $x \neq 0$.

Therefore,

$$\lim_{x \to 0} \frac{x}{x} = 1.$$

We can also say that $x/x$ *converges* to 1 as $x$ tends to 0.

So why not simply define $0/0$ as 1? This might seem sensible given that $x/x = 1$ for all nonzero values of $x$, but have a think about the similar function $h(x) = 3x/x$. Again, both numerator and denominator go to zero as $x \to 0$, but the limit of the fraction is 3, not 1.
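
These limits can be probed numerically. Here is a minimal Python sketch (my own, not part of the original discussion) that evaluates the three functions at values of $x$ approaching 0:

```python
# Evaluate a/x, x/x and 3x/x at values of x approaching 0.
# (Illustrative sketch; 'a' is taken to be 1 here.)
def f(x, a=1.0):
    return a / x          # grows without bound as x -> 0+

def g(x):
    return x / x          # equals 1 for every nonzero x

def h(x):
    return (3 * x) / x    # equals 3 for every nonzero x

for x in [0.1, 0.01, 0.001]:
    print(f"x = {x}: a/x = {f(x):g}, x/x = {g(x)}, 3x/x = {h(x)}")
```

No limit is ever ‘reached’ by such a table, of course; it only suggests the value that analysis then pins down rigorously.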

## Infinite sums

You may be familiar with Zeno’s paradoxes. In order to run 100 metres, one must first run 50 metres, then 25 metres, then 12.5 metres, and so on. That is, one must complete infinitely many tasks, each of which requires a non-zero amount of time. How can this be possible?

The metaphysical implications of this and related paradoxes are still being debated some 2,400 years after Zeno. Mathematically, one has to argue that

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.$$

While the argument given above, which was known to the ancients, is a good illustration of why the *geometric series* above should equal 1, it doesn’t help us understand other sums, which can behave in rather different ways. For example, the ancients also knew that the *harmonic series*

$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$$

is *divergent*: that is, it cannot be said to take on any particular value. This is because the answer that we get after adding $n$ terms together keeps growing as we increase $n$. However, the *Basel series*

$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$

turns out to be *convergent*: the series takes the value $\pi^2/6$.

In each of these series, as we add on more and more terms, there are two limiting processes at work. The number of terms is tending towards infinity, but the size of each term is tending towards zero. Whether or not a series converges depends on the rate at which the terms converge to zero. The terms in the Basel series drop off faster than the ones in the harmonic series (since the denominators have squares), allowing the Basel series to converge.
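
The competition between the two processes can be watched directly by computing partial sums. A short Python sketch (mine, for illustration):

```python
import math

# Partial sums of the geometric, harmonic and Basel series after N terms.
def partial_sums(N):
    geometric = sum(1 / 2**k for k in range(1, N + 1))   # tends to 1
    harmonic  = sum(1 / k    for k in range(1, N + 1))   # keeps growing
    basel     = sum(1 / k**2 for k in range(1, N + 1))   # tends to pi^2/6
    return geometric, harmonic, basel

for N in [10, 100, 10000]:
    g, h, b = partial_sums(N)
    print(f"N = {N:>5}: geometric = {g:.6f}, harmonic = {h:.3f}, Basel = {b:.6f}")
print("pi^2/6 =", math.pi**2 / 6)
```

The harmonic sum creeps upwards (roughly like $\ln N$, in fact), while the other two settle down.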

The precise conditions needed for a series to converge, as well as methods for calculating or estimating the values of convergent series, are studied in analysis. An understanding of series is useful not just for pure mathematics, but also in many fields of theoretical physics, including quantum mechanics.

## ‘Ghosts of departed quantities’

Newton discovered calculus in the late 1600s, giving us the notion of an *integral* as the accumulated change to a quantity, given the small changes made over time. For example, a particle may move continuously with a certain time-dependent velocity. At each instant, the current velocity causes some displacement; the net displacement of the particle is the accumulation of these little displacements.

The traditional definition of the integral is as follows (although Newton himself would not have used the following language or notation). The area under the curve $y = f(x)$ between $x = a$ and $x = b$ is to be denoted as $\int_a^b f(x)\,\mathrm{d}x$. To evaluate this area, one divides it into a number of rectangles of width $\delta x$, plus small ‘errors’ for the bits of the area that do not fit into the rectangles:

$$\int_a^b f(x)\,\mathrm{d}x \approx \sum_i f(x_i)\,\delta x + \text{errors}.$$

One then makes the rectangles narrower and narrower, taking more and more of them. It is then argued that, as $\delta x$ gets smaller, the errors will vanish, and the sum approaches the value of the area. The symbol $\mathrm{d}x$ represents an ‘infinitesimal narrowness’; the integral sign $\int$ is an elongated ‘S’, showing the link between integration and summation.

Despite giving correct mathematical answers, Newton’s theories were attacked, both on their metaphysical foundations (as with Zeno’s paradoxes) and on the idea of the errors becoming small. For any nonzero width $\delta x$, these errors are present. Surely, then, when the width is taken to be infinitesimal but nonzero, the errors would also be nonzero and infinitesimal?

It turns out that terms such as ‘infinitesimal’ are difficult to use: in the system of real numbers, there is no such thing as an ‘infinitesimal’ number. A more rigorous definition of the integral was given by Riemann almost 200 years after Newton’s calculus. This definition will be studied in a first course in analysis.
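
The shrinking-rectangles argument is easy to try out numerically. The following Python sketch (my own; Riemann’s actual definition uses suprema and infima rather than this particular left-endpoint rule) approximates $\int_0^1 x^2\,\mathrm{d}x = 1/3$:

```python
# Approximate the area under f between a and b using n left-endpoint
# rectangles of width dx; the 'errors' shrink as dx does.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

for n in [10, 100, 1000]:
    approx = riemann_sum(lambda x: x**2, 0.0, 1.0, n)
    print(f"{n:>4} rectangles: {approx:.6f} (error {abs(approx - 1/3):.6f})")
```

The error here shrinks in proportion to the width $\delta x = 1/n$, which is the kind of behaviour a rigorous definition must justify in general.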

## Stability theory

Often it is not possible to solve a problem exactly, and it is necessary to make approximations that hold in certain limits. As the mechanical examples above showed, such approximations can be very useful in simplifying a problem, stripping away unnecessary details. However, it is sometimes important to consider what the effect of those details may be – we should be sure that any such effects are negligible compared to the effect that we care about.

*Stability theory* is the study of how the answer to a problem changes when a small change, or *perturbation*, is made to the problem. A ball on a sloped surface tends to roll downwards. If the ball is sitting at the bottom of a valley, then a displacement to the ball may cause it to move slightly uphill, but then gravity will act to restore the ball to its original place. This system is said to be *stable*. On the other hand, a ball at the top of a hill will, if knocked, roll away from that hill and not return; this is an example of an *instability*.
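
The two situations can be caricatured by linearising about the resting point: in the valley the restoring force gives $\ddot{x} = -x$, while on the hilltop $\ddot{x} = +x$. A minimal Python sketch of my own (the friction coefficient 0.5 and the time step are arbitrary choices):

```python
# Evolve x'' = sign * x - 0.5 * x' from a small displacement, using a
# simple semi-implicit Euler scheme. sign = -1: valley; sign = +1: hill.
def simulate(sign, steps=2000, dt=0.01):
    x, v = 0.1, 0.0                  # small initial displacement
    for _ in range(steps):
        a = sign * x - 0.5 * v       # force per unit mass, with friction
        v += a * dt
        x += v * dt
    return x

print("valley:", simulate(-1))   # perturbation decays towards 0
print("hill:  ", simulate(+1))   # perturbation grows
```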

While this is a trivial example, more complicated instabilities are responsible for many types of pattern formation, such as billows in the clouds, the formation of sand dunes from a seemingly flat desert, and the formation of spots or stripes on leopards and tigers. In biological systems, departures from homeostasis or from a population equilibrium may be mathematically modelled as instabilities. It is important to understand these instabilities, as they can lead respectively to disease or extinction.

Analysis provides the language needed to make these above statements precise, as well as methods for determining whether a system of differential equations (for example governing a mechanical or biological system) has stable or unstable behaviour. A particularly important subfield is chaos theory, which considers equations that are highly sensitive to changes to their initial conditions, such as weather systems.

## Summary

Infinite or limiting processes, such as series and integrals, can have behaviours that seem mysterious. A first course in analysis will define concepts such as convergence, differentiation and (Riemann) integration in a rigorous way. However, before that is possible, one must look at more basic facts about the system of real numbers, and indeed give a proper definition of this system: it is not enough to think of real numbers simply as extended strings of decimals.

Having placed all this on a firm footing, it is then possible to answer more fundamental questions about calculus, such as ‘Why do differential equations have solutions?’ or ‘Why does the Newton–Raphson method work?’. It also allows us to use approximations more carefully, and stability theory helps us to decide whether the error introduced by an approximation will dramatically change the results of calculations.

## Mathematical hairstyling: Braid groups

At a recent morning coffee meeting, I was idly playing with my hair when this was noticed by a couple of other people. This led to a discussion of different braiding styles and, because we were mathematicians, a discussion of braid theory. I have since spent a lot of time reading about it. (Nerd-sniped.)

I didn’t know much about braid theory (or indeed group theory) before, but it turned out to be a very rich subject. I remember being introduced to group theory for the first time and finding it very hard to visualise abstract objects like generators, commutators, conjugates or normal subgroups. Braid groups may be a very useful way of introducing these: they can be demonstrated very visually and hands-on.
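
As a taste of how hands-on this can be: if one forgets which strand crosses over which, each braid generator $\sigma_i$ becomes the permutation swapping strands $i$ and $i+1$, and the braid relation $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$ can be checked directly. A small Python sketch of my own:

```python
# Map braid generators on 3 strands to permutations (tuples) and verify
# the braid relation s1 s2 s1 = s2 s1 s2.
def sigma(i, n=3):
    """Permutation swapping strands i and i+1 (0-indexed) of n strands."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

s1, s2 = sigma(0), sigma(1)
lhs = compose(s1, compose(s2, s1))
rhs = compose(s2, compose(s1, s2))
print(lhs == rhs)   # True: both sides reverse the three strands
```

The permutation picture loses information (it cannot tell $\sigma_1$ from $\sigma_1^{-1}$), which is exactly why braid groups are richer than symmetric groups.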

## A Valentine’s Day message from the shallow water equations

This is the phase plane of a system of equations that describes the evolution of the depth and speed of a shallow current of a granular material when it flows over an obstacle.

## Non-commuting operations

A little tipple *before* marking Quantum Mechanics sheet 3 can be excused as a demonstration to my students about why operators don’t necessarily commute with each other. It remains to be seen what [drink,mark] turns out to be.
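
For anyone who wants the sober version of the joke: two operations commute when the *commutator* $[A, B] = AB - BA$ vanishes, and for matrices it usually does not. A tiny Python sketch (mine, with two arbitrarily chosen 2×2 matrices):

```python
# Compute the commutator [A, B] = AB - BA of two 2x2 matrices.
A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

print(commutator(A, B))   # [[1, 0], [0, -1]]: nonzero, so A and B don't commute
```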

## Fluid dynamical chant

I have been invited to give a talk on fluid dynamics at Queens’ Mathematical Society this term. Details are yet to be decided, but the talk will be an introduction to fluids, with a focus on boundary layers, including the mathematical notion of a singular limit.

While writing the talk, I thought it might be nice to have a little *fun* with it.

The flow of a | fluid is | governèd

By the interact-si | -on of | four main | forces:

Viscosity * gravity * inert-si- | -a and | pressure.

Depending on the context | one may | dominate an- | -other.

In fast flows | or large | lengthscales

Inert-si- | -a is | domin- | -ant

These include many im- | -portant | contexts

Such as household plumbing * oil pipelines * submar- | -ines : and the | upper | atmosphere.

Unfortunately * the | limit of | Reynolds number

Going to infinity | is a | singular | limit.

The behaviour of an in- | -viscid | fluid

Is quite different from | that of | one with low vis- | -cosity.

## A problem for the Sorting Hat

My secondary school, CRGS, admits (or used to admit; I’m not sure now) 100 people each year (technically, 96+4). They are to be split into *h* = 4 houses, such that the houses have equal numbers and siblings are in the same house as each other. What happens if they have a year of fifty pairs of twins, or twenty-five sets of quadruplets?

(I believe there’s also a condition on how the four houses should be distributed evenly across the forms, but for simplicity let us ignore it.)

**More serious question:** Let us call an intake *unresolvable* if the two conditions cannot be satisfied. For a given probability distribution of twins, triplets, *etc.*, consider the probability P(*n*) that an intake of *hn* people will be unresolvable. What values of *n* are local minima of P(*n*)?
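
The feasibility half of the question can at least be brute-forced. Here is a Python sketch of my own (assuming the intake is described simply by its list of sibling-group sizes): it asks whether the groups can be packed into *h* houses of exactly *n* pupils each.

```python
def resolvable(group_sizes, h, n):
    """Can sibling groups of these sizes fill h houses of n pupils each?"""
    if sum(group_sizes) != h * n:
        return False
    groups = sorted(group_sizes, reverse=True)   # place big groups first
    dead_ends = set()                            # memoise failed states

    def place(i, loads):
        if i == len(groups):
            return True
        if (i, loads) in dead_ends:
            return False
        tried = set()
        for j in range(h):
            if loads[j] + groups[i] <= n and loads[j] not in tried:
                tried.add(loads[j])              # equal loads are symmetric
                new = tuple(sorted(loads[:j] + (loads[j] + groups[i],) + loads[j + 1:]))
                if place(i + 1, new):
                    return True
        dead_ends.add((i, loads))
        return False

    return place(0, (0,) * h)

print(resolvable([2] * 50, 4, 25))            # fifty pairs of twins: False
print(resolvable([4] * 25, 4, 25))            # quadruplets: False
print(resolvable([2] * 48 + [1] * 4, 4, 25))  # True: 12 pairs + 1 singleton per house
```

So both of the intakes in the question are unresolvable: a house of 25 pupils cannot be assembled from groups of even size.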

## Fractal awfulness

The mathematical justification for the adage ‘never read the comments’ is the notion of ‘fractal awfulness’. This says that there exist arbitrarily petty people who hold misguided or bigoted views about ever-smaller communities, and that their inflammatory language and style are all self-similar to each other.

It would be interesting to look for a scaling law between ‘number of people who have/identify as X’ and ‘number of people who have a negative view of people who have/identify as X’.

## A spectral solver for Schrödinger’s equation

Here is a MATLAB program for solving the 1D time-dependent Schrödinger equation:

http://jftsang.com/code/schrodinger_spectral.m

You can specify a potential function and an initial condition, and the solver calculates the wavefunction at future times. Hopefully this will be useful for getting some intuition as to how solutions to Schrödinger’s equation behave.

## Details

The solver uses a spectral method which performs the time-integration exactly. Errors come from the spatial discretisation. The calculation is done in a periodic domain, so edge effects may affect your solution, especially in scattering problems.

## Postprocessing

Fourier-transforming the solution in space tells you about the relative strengths of different wavenumber components of the solution, and therefore about the momentum distribution at each time.

Fourier-transforming in time tells you about the different frequency components in the solution, which you can use to identify energy eigenstates. Note that you might have to calculate the solution for a long time before you have enough periods for the Fourier transform to be precise enough.
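
The idea can be seen in miniature with a plain discrete Fourier transform. A Python toy of my own (not the solver’s code): a sampled cosine of frequency 5 shows up as a spike at index 5 of its spectrum, just as an energy eigenstate shows up as a spike at its eigenfrequency.

```python
import cmath, math

# Naive discrete Fourier transform (O(N^2); fine for a toy example).
def dft(samples):
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

N = 64
signal = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]
spectrum = [abs(c) for c in dft(signal)]
print("strength at k = 5:", round(spectrum[5], 3))   # N/2 = 32.0
print("strength at k = 2:", round(spectrum[2], 3))   # ~0
```

(The same spike appears at $k = N - 5$, the aliased negative frequency; for a complex wavefunction the two need not match.)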

## Not-so-continuum mechanics: A talk for the LMS

I will be speaking at the London Mathematical Society’s Graduate Students’ Meeting tomorrow morning, on *Not-so-continuum mechanics: Mathematical modelling of granular flows*. It’s meant to be a gentle introduction to granular phenomena, and I will introduce a basic version of the *μ*(*I*) rheology, a fairly (but not universally!) successful description of granular flow. Here’s a practice version of the talk.

The talks are meant to be aimed at ‘a general mathematical audience’. My talk assumes no mathematical knowledge beyond A-level Mechanics. I’m having some trouble understanding some of the abstracts for the other talks, and I’m not sure whether my talk is simply better aimed at a general audience, or whether there are just huge gaps in my mathematical training (which there are: I’ve never done any algebra beyond basic group theory, or any number theory or combinatorics).