Calculus Beyond College: Analysis and modelling

This post is motivated by a number of discussions that I had at Cambridge open days last week, when I talked to school students who were interested in doing maths at university, and who may have come across the unfamiliar term ‘analysis’ when looking at our course syllabus.

Mathematical analysis is a very large area, but, broadly speaking, it is the study of limiting processes and of approximations. The basic concept of a limiting process is that, as you make a certain parameter of a problem smaller and smaller (or larger and larger), an answer that you get, which depends on that parameter, tends towards a certain value. This idea underlies a lot of the assumptions that we make when we model the real world:

  • All materials are deformable to some extent, but we may assume that a rod or surface is perfectly rigid if the stiffness of its material is sufficiently high, so that any deformations are negligible, compared to the lengthscales of interest in the problem. When considering a block sliding down a slope, we do not care about deformations on a nanometre scale.
  • We might ignore air resistance when studying the motion of a projectile. This approximation works provided that the projectile’s inertia and its weight (due to gravity) dominate the effects of air resistance. Air resistance is roughly proportional to the projectile’s surface area, whereas its weight grows with its volume, so the dominance occurs in the limit where the ratio of the drag force to the weight is very small (as for a dense, compact projectile).
  • While the wave-like properties of light are more fundamental (they are directly governed by Maxwell’s equations), its particle-like properties come from the limit of the wavelength (about 700 nm for red light) being much smaller than other lengthscales of interest. This is why a laser beam acts much like a particle when shone across a room (it is localised, and can reflect cleanly off surfaces), while its wave-like properties may be seen in a double-slit experiment involving narrow slits.

These approximations are quite simple to understand and apply, and they give good agreement with empirical results. However, things are not always so straightforward, especially when there are two limiting processes which have opposite effects.

Analysis gives us the tools to study how these competing limiting processes interact with each other. I won’t discuss their resolution in detail, but I will give a few examples below.

Division by zero

Consider the function f(x) = a/x where a is some fixed positive real number. This function is defined for x \neq 0, and it is positive whenever x is positive. When x is positive and very small, f(x) is very large, since x appears in the denominator. In fact, as x gets closer and closer to 0 from above (that is, through positive values), f(x) becomes unbounded. We therefore say that the limit of f(x) as x approaches 0 from above is infinite: we can write this as

 \displaystyle\lim_{x\rightarrow 0^+} f(x) = \infty.

Note that we can talk about this limit even though f(0) itself is not actually defined. We talk about the limit as x goes to 0, rather than actually setting x = 0, by using the arrow symbol; the small ‘+’ indicates that x approaches 0 from above.

But now consider the function g(x) = (x^2 + x) / x , again defined for x \neq 0, since when x = 0 the denominator is zero and division by zero is undefined. This function is also positive whenever x is positive, but it behaves very differently under the limit x \rightarrow 0. In this limit, the numerator also goes to zero. Now 0/0 is undefined, but note that

 g(x) = \displaystyle\frac{x (x + 1)}{x} = x + 1.

Therefore,

 \displaystyle\lim_{x\rightarrow 0} g(x) = 1.

We can also say that g(x) converges to 1 as x tends to 0.

So why not simply define 0/0 as 1? This might seem sensible given that x/x = 1 for all nonzero values of x, but have a think about the similar function h(x) = (x^2 + 3x)/ x. Again, both numerator and denominator go to zero as x\rightarrow 0, but the limit of the fraction is 3, not 1.
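Indeed, the same cancellation as before gives

 \displaystyle h(x) = \frac{x(x+3)}{x} = x + 3, \qquad \text{and so} \qquad \lim_{x\rightarrow 0} h(x) = 3.

No single value assigned to 0/0 would be consistent with every such example, which is why it is left undefined.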

Infinite sums

You may be familiar with Zeno’s paradoxes. In order to run 100 metres, one must first run 50 metres, then 25 metres, then 12.5 metres, and so on. That is, one must complete infinitely many tasks, each of which requires a non-zero amount of time. How can this be possible?

The metaphysical implications of this and related paradoxes are still being debated some 2,400 years after Zeno. Mathematically, one has to argue that

 \displaystyle \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.
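One way to make this precise is to look at the partial sums: after n of Zeno’s stages, the fraction of the distance covered is

 \displaystyle \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n} = 1 - \frac{1}{2^n},

which can be made as close to 1 as we like by taking n large enough. This is exactly what it means for the infinite sum to converge to 1.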

While Zeno’s argument, which was known to the ancients, is a good illustration of why the geometric series above should equal 1, it doesn’t help us understand other sums, which can behave in rather different ways. For example, the ancients also knew that the harmonic series

 \displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots

is divergent: that is, it cannot be said to take on any particular value. This is because the answer that we get after adding n terms together grows without bound as we increase n, eventually exceeding any number you care to name. However, the Basel series

 \displaystyle 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots

turns out to be convergent; the series takes the value \pi^2 / 6.
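The difference can be seen by grouping the terms of the harmonic series:

 \displaystyle 1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\right) + \cdots \geq 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots,

since each bracket contains terms adding up to at least 1/2. By taking enough terms, the running total therefore exceeds any number you care to name.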

In each of these series, as we add on more and more terms, there are two limiting processes at work. The number of terms is tending towards infinity, but the size of each term is tending towards zero. Whether or not a series converges depends on the rate at which the terms converge to zero. The terms in the Basel series drop off faster than the ones in the harmonic series (since the denominators have squares), allowing the Basel series to converge.

The precise conditions needed for a series to converge, as well as methods for calculating or estimating the values of convergent series, are studied in analysis. An understanding of series is useful not just for pure mathematics, but also in many fields of theoretical physics, including quantum mechanics.

‘Ghosts of departed quantities’

Newton discovered calculus in the late 1600s, giving us the notion of an integral as the accumulated change to a quantity, given the small changes made over time. For example, a particle may move continuously with a certain time-dependent velocity. At each instant, the current velocity causes some displacement; the net displacement of the particle is the accumulation of these little displacements.

The traditional definition of the integral is as follows (although Newton himself would not have used the following language or notation). The area under the curve y = f(x) between x = a and x = b is to be denoted as I = \int_a^b f(x) \mathrm{d} x . To evaluate this area, one divides it into a number of rectangles of width \Delta x, plus small ‘errors’ for the bits of the area that do not fit into the rectangles:

 I = f(a) \Delta x + f(a+\Delta x) \Delta x + \cdots + f(b-\Delta x) \Delta x + \text{errors}
 I = \displaystyle \sum f(x_i) \Delta x + \text{errors}

One then makes the rectangles narrower and narrower, taking more and more of them. It is then argued that, as \Delta x gets smaller, the errors will vanish, and the sum approaches the value of the integral. The symbol \mathrm{d}x represents an ‘infinitesimal narrowness’; the integral symbol \int is an elongated ‘S’, showing the link between integration and summation.

Despite giving correct mathematical answers, Newton’s theories were attacked, both on their metaphysical foundations (as with Zeno’s paradox), and on the idea of the errors becoming small. For any nonzero width \Delta x, these errors are present. Surely, then, if the width is taken to be infinitesimal but nonzero, the errors would also be infinitesimal but nonzero?

It turns out that terms such as ‘infinitesimal’ are difficult to use: in the system of real numbers, there is no such thing as an ‘infinitesimal’ number. A more rigorous definition of the integral was given by Riemann almost 200 years after Newton’s calculus. This definition will be studied in a first course in analysis.
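In slightly simplified modern language, one takes rectangles of equal width \Delta x = (b-a)/n and defines

 \displaystyle \int_a^b f(x) \, \mathrm{d}x = \lim_{n \rightarrow \infty} \sum_{i=0}^{n-1} f(a + i \Delta x) \, \Delta x,

provided that this limit exists. (Riemann’s actual definition allows more general rectangles, but this is the idea.) No infinitesimals are needed: only limits, which can themselves be defined precisely.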

Stability theory

Often it is not possible to solve a problem exactly, and it is necessary to make approximations that hold in certain limits. As the mechanical examples above showed, such approximations can be very useful in simplifying a problem, stripping away unnecessary details. However, it is sometimes important to consider what the effect of those details may be – we should be sure that any such effects are negligible compared to the effect that we care about.

Stability theory is the study of how the answer to a problem changes when a small change, or perturbation, is made to the problem. A ball on a sloped surface tends to roll downwards. If the ball is sitting at the bottom of a valley, then a displacement to the ball may cause it to move slightly uphill, but then gravity will act to restore the ball to its original place. This system is said to be stable. On the other hand, a ball at the top of a hill will, if knocked, roll away from that hill and not return; this is an example of an instability.
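This can be made quantitative: for a system governed by \ddot{x} = F(x) with an equilibrium at x = x_0 (so that F(x_0) = 0), a small displacement \epsilon obeys, to leading order,

 \displaystyle \ddot{\epsilon} \approx F'(x_0) \, \epsilon,

so the perturbation oscillates about the equilibrium when F'(x_0) < 0 (stable), and grows exponentially when F'(x_0) > 0 (unstable).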

While this is a trivial example, more complicated instabilities are responsible for many types of pattern formation, such as billows in the clouds, the formation of sand dunes from a seemingly flat desert, and the formation of spots or stripes on leopards and tigers. In biological systems, departures from homeostasis or from a population equilibrium may be mathematically modelled as instabilities. It is important to understand these instabilities, as they can lead respectively to disease or extinction.

Analysis provides the language needed to make these above statements precise, as well as methods for determining whether a system of differential equations (for example governing a mechanical or biological system) has stable or unstable behaviour. A particularly important subfield is chaos theory, which considers equations that are highly sensitive to changes to their initial conditions, such as weather systems.

Summary

Infinite or limiting processes, such as series and integrals, can have behaviours that seem mysterious. A first course in analysis will define concepts such as convergence, differentiation and (Riemann) integration in a rigorous way. However, before that is possible, one must look at more basic facts about the system of real numbers, and indeed give a proper definition of this system: it is not enough to think of real numbers simply as extended strings of decimals.

Having placed all this on a firm footing, it is then possible to answer more fundamental questions about calculus, such as ‘Why do differential equations have solutions?’ or ‘Why does the Newton–Raphson method work?’. It also allows us to use approximations more carefully, and stability theory helps us to decide whether the error introduced by an approximation will dramatically change the results of calculations.

Mathematical hairstyling: Braid groups

At a recent morning coffee meeting, I was idly playing with my hair when this was noticed by a couple of other people. This led to a discussion of different braiding styles and, because we were mathematicians, a discussion of braid theory. I have since spent a lot of time reading about it. (Nerd-sniped.)

I didn’t know much about braid theory (or indeed group theory) before, but it turned out to be a very rich subject. I remember being introduced to group theory for the first time and finding it very hard to visualise abstract objects like generators, commutators, conjugates or normal subgroups. Braid groups may be a very useful way of introducing these concepts: they can be demonstrated in a very visual, hands-on way.


Not-so-continuum mechanics: A talk for the LMS

I will be speaking at the London Mathematical Society’s Graduate Students’ Meeting tomorrow morning, on Not-so-continuum mechanics: Mathematical modelling of granular flows. It’s meant to be a gentle introduction to granular phenomena, and I will introduce a basic version of the μ(I) rheology, a fairly (but not universally!) successful description of granular flow. Here’s a practice version of the talk.

The talks are meant to be aimed at ‘a general mathematical audience’. My talk assumes no mathematical knowledge beyond A-level Mechanics. I’m having some trouble understanding some of the abstracts for the other talks, and I’m not sure whether my talk is just better aimed at a general audience, or whether theirs are fine and there are simply huge gaps in my mathematical training (which there are: I’ve never done any algebra beyond basic group theory, or any number theory or combinatorics).

Solving ODEs in C

I set my students an exercise: to solve the nonlinear differential equation

\ddot{x} + \sin(x) = A\sin(\omega t).

What follows is an outline of a model solution (but by no means complete, or optimal). I take this opportunity to introduce several other features of C that I haven’t had the chance to talk about in the classes.

This equation is, of course, the (nondimensionalised) equation of motion of a forced pendulum, where x represents the angle that the pendulum makes with the downward vertical. If A = 0 or \omega = 0, then the pendulum is unforced, and executes regular oscillations. The period of these oscillations is approximately equal to 2\pi for small-amplitude oscillations (where the motion is approximately simple harmonic motion), but increases as the amplitude increases. If forcing is turned on, then a second period—that of the forcing—is introduced. The pendulum may still oscillate with its ‘intrinsic’ period, or if the forcing is strong enough then the pendulum may start to oscillate with the period of the forcing. If the period of the forcing is close to the intrinsic period of the pendulum, then resonance may occur.
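The claim about the small-amplitude period comes from linearising: for small angles, \sin(x) \approx x, so with no forcing the equation reduces to

 \ddot{x} + x = 0,

which is simple harmonic motion with (nondimensional) period 2\pi.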

We first write the equation as a system of two equations,

\dot{x} = y \text{ and } \dot{y} = - \sin(x) + A\sin(\omega t)

which is first-order. We then solve this system numerically using the Runge-Kutta method; the program diffsolve does exactly this. Its full source code can be downloaded here. The code, as it is written, is quite general; it can solve any system of two ODEs of the form

\dot{x}=f(t,x,y) \text{ and } \dot{y}=g(t,x,y).

Annotation of the code

After the initial comments and the #includes, we first come across a typedef struct:

typedef struct {
    double amp;
    double omega;
} problem_t;

I declare a type, problem_t, which will store the parameters of the problem, amp and omega. I usually wrap all parameters into a struct like this, because it lets me pass a single variable, of type problem_t, to a function, instead of passing each of the individual parameters. It also means that, if I want to change my program to allow more parameters, I can simply change the declaration of problem_t, rather than changing the definition of every function that needs these parameters. This pays off when we work with problems with ten or twenty parameters.

The first function defined makes use of this:

double forcing(double t, problem_t params) 
{
    return params.amp * sin( params.omega * t );
}

This is used by the function rhs_y, which calculates \dot{y} . (You could do away with the function forcing and have the above inside rhs_y, if you prefer.)
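For reference, based on the equation for \dot{y}, the definition of rhs_y presumably looks something like the sketch below (the downloadable source is authoritative):

double rhs_y(double t, double x, double y, problem_t params)
{
    /* dy/dt = -sin(x) + A sin(omega t) */
    return -sin(x) + forcing(t, params);
}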

The function rhs_x, which gives the derivative \dot{x} , appears to require arguments that it does not use:

double rhs_x(double t, double x, double y, problem_t params)
{
    return y;
}

Although the arguments t, x and params are not actually used, I have chosen to include them anyway, because in the future I might want to move from solving \dot{x} = y to something more general, of the form \dot{x} = f(t,x,y) .

Now we come to the main function. The first thing we notice is that the function prototype says

int main(int argc, char* argv[]) 

instead of simply

int main()

We want to be able to run our program without having to recompile it each time we want to change parameters such as the initial conditions. We will type something like

$ ./diffsolve 1 2 100 0.005 1 0

into the command line, in order to specify the different parameters to diffsolve. These are command line arguments, and they are treated by C as arguments to main. The argument argc is the argument count, the number of arguments; and argv is the argument vector, an array of strings. Note that the program’s name, ./diffsolve, is treated as the 0th argument, and so argc is always at least equal to 1.

The arguments are actually read in the following block:

if (argc > 6) 
{
    params.amp   = atof(argv[1]);
    params.omega = atof(argv[2]);
    tmax  = atof(argv[3]);
    dt    = atof(argv[4]);
    x     = atof(argv[5]);
    y     = atof(argv[6]);
}
else
{
    fprintf(stderr, "Usage: %s amp omega tmax dt x0 y0\n", argv[0]);
    exit(-1);
}

If the user has specified at least seven arguments (where the 0th argument is the name of the program), then these arguments are taken. The atof function converts a string into a floating-point number; despite its name, it returns a double, matching our declarations.

If the user has not specified enough arguments, then the program gives an error which tells the user how to use it, and exits with a return code of -1, which (by convention) indicates an error. (The exit function is provided in stdlib.h.) The error message is printed using fprintf, which stands for ‘printf to a file’, or ‘print formatted text to a file’. Here the ‘file’ is stderr, the standard error stream. In contrast, the function printf writes to the standard output stream, represented by stdout. By default, both stdout and stderr write to the screen, but the two streams may be separated from each other. It is a good idea to use stderr for errors, warnings or diagnostic messages, and stdout for actual output that you want to process.
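For example, on a Unix-like system the two streams can be sent to different places (the file names here are only illustrative):

$ ./diffsolve 1 2 100 0.005 1 0 > out.txt 2> log.txt

Here the computed solution goes into out.txt, while the usage message or any other diagnostics would end up in log.txt.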

The rest of the program is fairly straightforward: the values of t, x and y are updated in a while loop, and at each time step their values are printed (to standard output).
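The stepping itself is in the downloadable source; purely as an illustration, a classical fourth-order Runge-Kutta update built from the functions above might look like the following sketch (not necessarily identical to diffsolve):

while (t < tmax)
{
    /* classical RK4 step for dx/dt = rhs_x, dy/dt = rhs_y */
    double k1x = rhs_x(t, x, y, params);
    double k1y = rhs_y(t, x, y, params);
    double k2x = rhs_x(t + 0.5*dt, x + 0.5*dt*k1x, y + 0.5*dt*k1y, params);
    double k2y = rhs_y(t + 0.5*dt, x + 0.5*dt*k1x, y + 0.5*dt*k1y, params);
    double k3x = rhs_x(t + 0.5*dt, x + 0.5*dt*k2x, y + 0.5*dt*k2y, params);
    double k3y = rhs_y(t + 0.5*dt, x + 0.5*dt*k2x, y + 0.5*dt*k2y, params);
    double k4x = rhs_x(t + dt, x + dt*k3x, y + dt*k3y, params);
    double k4y = rhs_y(t + dt, x + dt*k3x, y + dt*k3y, params);

    x += dt * (k1x + 2.0*k2x + 2.0*k3x + k4x) / 6.0;
    y += dt * (k1y + 2.0*k2y + 2.0*k3y + k4y) / 6.0;
    t += dt;

    printf("%g %g %g\n", t, x, y);
}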

Usage (on Unix-like systems)

After compiling, the command

$ ./diffsolve 1 2 100 0.005 1 0

will print plenty of lines, giving the solution of the system at each timestep.

You probably want to save this information to another file, which you can do by writing

$ ./diffsolve 1 2 100 0.005 1 0 > out.txt

This will write the results to the file out.txt, which you can then process somewhere else. For example, you can use gnuplot to plot the results:

$ gnuplot
gnuplot> set terminal png
gnuplot> set output "out.png"
gnuplot> plot "out.txt" using 1:2 with lines

This produces the image out.png, a plot of the computed solution.

Mathematica modules

In Mathematica, a module behaves like an extended function that may perform several steps. An advantage of a module is that it may declare and define local variables which can be used inside the module but not in the global scope. It is generally good programming practice to keep variables in as local a scope as possible, because variables in global scope can conflict with each other if you are not careful to keep track of their names.

A module for solving the KdV equation

Below is an example usage of a module. The module solat computes the solution to the KdV equation

u_t + 6uu_x + u_{xxx} = 0

at a desired time tmax, given initial data data. It uses a split-step pseudospectral integrator, whose details, as well as an equivalent code in Matlab, can be found at Wikiwaves. (Note that their Matlab code makes reference to k, the angular wavenumber, whereas this code uses \nu, which is equal to k / (2\pi).)
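In outline, each step of length \Delta t (called dt in the code) first advances the linear part u_t + u_{xxx} = 0 exactly in Fourier space, and then applies a simple explicit step to the nonlinear part u_t + 6uu_x = 0. Writing \hat{u}(\nu) for the discrete Fourier transform of u and k = 2\pi\nu,

 \hat{u}_1 = \hat{u} \, e^{\mathrm{i} k^3 \Delta t}, \qquad \hat{u}_{\mathrm{new}} = \hat{u}_1 - 3 \mathrm{i} k \, \Delta t \, \widehat{(u_1^2)},

where u_1 denotes the inverse transform of \hat{u}_1. These two updates correspond directly to the two lines inside the While loop below.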

nx = 128;
xmax = 1;
dx = 1/nx;
xs = Range[0, xmax - dx, dx];
numax = 1/(2*dx);
dnu = 1/xmax;
nus = Join[Range[0, numax - dnu, dnu], Range[-numax, -dnu, dnu]];

u0[x_] := Sin[6 Pi x];
data = u0 /@ xs;

solat[tmax_, data_] := 
 Module[{t = 0, dft = Fourier[data], dt = 0.4/nx^2, dft1 = 0, sol = 0},
  While[t < tmax,
   dt = Min[dt, tmax - t];
   dft1 = dft*Exp[I (2 Pi nus)^3 dt];
   dft = dft1 - 3 I 2 Pi nus dt Fourier[InverseFourier[dft1]^2];
   t += dt;
   ];
  sol = Re[InverseFourier[dft]];
  sol
  ]

We declare the variables t, dft, dt, dft1 and sol as local to the module, and initialise them to sensible values. (Note that dft1 and sol have been initialised to 0; this is completely arbitrary, because these values will be overwritten a few lines later.)

The last line inside the module, sol, simply causes sol to be returned from the module.

A module for solving the sine-Gordon equation

We now consider the sine-Gordon equation

\phi_{tt} - \phi_{xx} + \sin\phi = 0.

Like KdV, the sine-Gordon equation is quasilinear, in the sense that no term contains any nonlinear function of a derivative. Moreover, in both equations there is only one nonlinear term and no inhomogeneities; the rest are linear and may be treated easily. From a computational perspective, the most important difference between the two is that the sine-Gordon equation is second-order in time. We first write it as a pair of coupled first-order equations:

\phi_t = \psi \text{ and } \psi_t = \phi_{xx} - \sin\phi.
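If the \sin\phi term is dropped for a moment, each Fourier mode of the remaining linear system behaves like a simple harmonic oscillator with angular frequency \omega = 2\pi|\nu|, and so can be advanced exactly over a time step \Delta t:

 \hat{\phi}(t + \Delta t) = \hat{\phi} \cos(\omega \Delta t) + \hat{\psi} \, \frac{\sin(\omega \Delta t)}{\omega}, \qquad \hat{\psi}(t + \Delta t) = \hat{\psi} \cos(\omega \Delta t) - \omega \hat{\phi} \sin(\omega \Delta t),

with the \nu = 0 mode obtained as the limit \omega \rightarrow 0. The \sin\phi term is then applied as a separate explicit correction to \psi.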

This system can now be solved using a split-step pseudospectral method, much like the above, except that we must keep track of both dynamical variables \phi and \psi. The derivation of this is a bit messy, but follows the same principle of evolving the linear parts of the equations separately from the nonlinear part. Here is the Mathematica code:

nx = 128;
xmax = 1;
dx = 1/nx;
xs = Range[0, xmax - dx, dx];
numax = 1/(2*dx);
dnu = 1/xmax;
nus = Join[Range[0, numax - dnu, dnu], Range[-numax, -dnu, dnu]];

u0[x_] := Sin[6 Pi x];
data = u0 /@ xs;
sinegordonsol[tmax_, phi0_, psi0_] := Module[
  {t = 0, phift = Fourier[phi0], psift = Fourier[psi0],
   dt = 0.4/nx^2,
   phift1 = 0, psift1 = 0, sol = 0},
  While[t < tmax,
   dt = Min[dt, tmax - t];
   (* exact solution of the linear part (phi_t = psi, psi_t = phi_xx) for each Fourier mode *)
   phift1 = phift Cos[2 Pi nus dt] + psift dt Sinc[2 Pi nus dt];
   psift1 = psift Cos[2 Pi nus dt] - 2 Pi nus phift Sin[2 Pi nus dt];
   phift = phift1;
   psift = psift1 - Fourier[Sin[InverseFourier[phift]]] dt;
   t += dt;
   ];
  sol = Re[InverseFourier[phift]];
  sol
  ]

To use it, do something like

sgs = sinegordonsol[1, data, 0*data];
ListLinePlot[Transpose[{xs, sgs}]]

The discrete Fourier transform in Mathematica

The Fourier transform is a very useful and widely used tool in many branches of maths. The continuous Fourier transform, available in Mathematica as FourierTransform, tries to perform the Fourier transform of a symbolic expression. This is useful for solving many problems if the Fourier transform can also be expressed in a nice symbolic form, but such a form is not necessarily available (and indeed the Fourier transform might not exist at all).

In practice, the discrete Fourier transform, which works with a finite set of data points and approximates the continuous Fourier transform, is more important. The DFT is available in Mathematica as Fourier, and the inverse DFT as InverseFourier. Note that Mathematica uses the 1/\sqrt{n} normalisation by default for both the DFT and the IDFT.
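Concretely, for a list a_1, \dots, a_n, the default convention used by Fourier is

 \displaystyle b_s = \frac{1}{\sqrt{n}} \sum_{r=1}^{n} a_r \, e^{2 \pi \mathrm{i} (r-1)(s-1)/n},

and InverseFourier is the same sum with the opposite sign in the exponent; since both carry the factor 1/\sqrt{n}, applying Fourier and then InverseFourier returns the original list.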

Example

Here is an example usage of the DFT in Mathematica:

xmax = 1;
nx = 128;
dx = xmax / nx;
xs = Range[0, xmax - dx, dx];
f[x_] := Exp[Sin[6 Pi x]] + Sin[40 Pi x];
data = Map[f, xs];
ListLinePlot[Transpose[{xs, data}]]

dft = Fourier[data];
numax = 1/(2*dx);
dnu = 1/xmax;
nus = Join[Range[0, numax - dnu, dnu], Range[-numax, -dnu, dnu]];
ListPlot[
Transpose[{nus, Abs[dft]}],
PlotRange -> All]

The first block generates a list data which consists of sampled values of the function f at the values xs, which are 128 values between 0 and 1 (not including the endpoint at 1). These sampled points are plotted.

The second block calculates the discrete Fourier transform of data (according to the default normalisation). Then the list nus is constructed: these are the wavenumbers that correspond to each element of dft. (With the default normalisation, this is not the angular wavenumber: the wavenumber of \sin(40\pi x) is said to be 20, while the angular wavenumber is 40\pi. The symbol k is often used for the angular wavenumber; I am using \nu for the wavenumber.) We then plot the absolute values of dft against the wavenumbers nus.

Aliasing

Note how the values of \nu_{\max} and \Delta\nu are related to the values of \Delta x and x_{\max}:

\nu_{\max} = \frac{1}{2\Delta x} \text{ and } \Delta\nu = \frac{1}{x_{\max}}.

What this means is the following:

  • If you sample more densely (so \Delta x is smaller), then you can pick up higher frequencies.
  • If you sample for a longer domain (so x_{\max} is larger), then you can pick up lower frequencies.

The highest frequency that you can pick up, 1/(2\Delta x), is called the Nyquist frequency and is equal to half the sampling rate of the data. If your function in fact oscillates at a higher frequency than this, then your sampled data will not pick up that component properly, and many of your results will be inaccurate. This effect is known as aliasing.
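As a concrete illustration, when sampling at the points x_j = j \Delta x, a wave of frequency \nu and a wave of frequency 1/\Delta x - \nu take exactly the same values (up to sign) at every sample point:

 \sin\left(2\pi\left(\tfrac{1}{\Delta x} - \nu\right) x_j\right) = \sin(2\pi j - 2\pi\nu x_j) = -\sin(2\pi\nu x_j),

so the two cannot be distinguished from the samples alone.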

Applications

The Fourier transform can be used to solve many different problems, and is particularly useful for solving linear partial differential equations. Numerical PDE solvers that make use of the DFT are part of a class of numerical schemes known as spectral methods. For linear problems where the solution is oscillatory or where a ‘conservation law’ (usually analogous to conservation of mass or momentum) applies, spectral methods tend to be better than finite differences. The DFT can also be useful for quasilinear PDEs, using a family of methods known as pseudospectral methods.

Resources for the C course

For the benefit of my students, here are the codes that I have used in the lessons. The contents of that directory will be updated periodically.

A useful set of tutorials can be found at Programiz or at W3schools.

HackerRank provides a number of challenges or exercises for your programming skills, but you may need to get a bit more familiarity with the language before being able to complete them.

Why write in C?

Having been asked the question ‘Why would you write in C?’, I write here a few words on that. A Google search will turn up articles by people with much more expertise than me, who can go into much greater detail. The book 21st Century C may be useful as well.

Here are some of the reasons that I like C:

  • C is free, in the sense of free speech as well as free beer. The C standard is open, free implementations of C compilers (such as GCC) exist widely, and many of the libraries available are free and open source. You can distribute C code to other people, and know that they will be able to compile and run it, provided that they have a suitable platform. (In contrast, your colleague will only be able to run your Matlab programs if they too have a license for Matlab.)
  • C is a small language. The basic grammar of C consists of only a few keywords, and the standard library provides only a handful of mostly self-explanatory functions.
  • C is portable. A corollary of C’s smallness is that a C compiler can be implemented relatively simply on many different systems. This is why C is often used for firmware, especially on small or specialised devices such as games consoles.
  • C is well-designed. For example, C has straightforward scoping rules, whereas Matlab (and other scripting languages such as PHP) tend to put almost all variables into a global scope, which increases the likelihood of ‘cross-contamination’ between scripts or different parts of a script. Matlab also has rules on naming conventions or on syntax which are unnecessarily strict; C is not so strict with these petty rules, and allows you to use the right style for each job (although it is up to you as the programmer to decide what that style is).
  • C evolves slowly. Since 1989, changes to standard C have occurred approximately once every ten years; these changes have been very incremental and almost always backwards-compatible. (That said, new libraries are written all the time, and programming styles have changed.) Contrast this with Matlab, where new versions are released twice a year and code written in older versions often fails with newer versions.
  • C is the parent of many other languages. The syntax of C has been inherited by languages such as Java, Python and PHP, and in turn by their descendants. For example, although functions for printing formatted text existed in languages before C, it is C’s printf function whose name and syntax are used by most languages.
  • C is widely used. Because of its age and its portability, C remains a standard language amongst systems programmers. Because C is relatively low-level, allowing (and often mandating) direct control over memory usage, it is used for firmware as well as things such as the Linux kernel. That said, C’s usefulness is not restricted to systems programming; the wide range of libraries available means that it is also used for applications or scientific computing.

On the other hand, C has some downsides. C is hard to learn compared to Matlab, and its low-level nature means that you have to be careful about things like memory management (which is done automatically by languages such as Java).

Creating user interfaces, graphics or plots with C is not immediately easy. All the libraries are there, but you will still have to do quite a lot of work. I usually prefer to have my C applications output data into text files, which I then give to gnuplot or Matlab if I want a plot out of them.

I wish I could give a comparison against Fortran or Python as well as Matlab, as those are other languages that are often used for scientific computing. But I don’t know either of them.