Calculus Beyond College: Analysis and Modelling

This post is motivated by a number of discussions that I had at Cambridge open days last week, when I talked to school students who were interested in doing maths at university, and who may have come across the unfamiliar term ‘analysis’ when looking at our course syllabus.

Mathematical analysis is a very large area, but, broadly speaking, it is the study of limiting processes and of approximations. The basic idea of a limiting process is that, as a certain parameter of a problem is made smaller and smaller (or larger and larger), an answer that depends on that parameter tends towards a certain value. This idea underlies many of the assumptions that we make when we model the real world:

  • All materials are deformable to some extent, but we may assume that a rod or surface is perfectly rigid if the stiffness of its material is sufficiently high, so that any deformations are negligible, compared to the lengthscales of interest in the problem. When considering a block sliding down a slope, we do not care about deformations on a nanometre scale.
  • We might ignore air resistance when studying the motion of a projectile. This approximation works provided that the projectile’s inertia and its weight (due to gravity) dominate the effects of air resistance. Air resistance is roughly proportional to surface area, while weight is proportional to mass, so the dominance occurs in the limit of the ratio of surface area to mass being very small, as it is for a dense, compact projectile.
  • While the wave-like properties of light are more fundamental (they are directly governed by Maxwell’s equations), its particle-like properties come from the limit of the wavelength (about 700nm for red light) being much smaller than the other lengthscales of interest. This is why a laser beam acts much like a particle when shone across a room (it is localised, and can reflect cleanly off surfaces), while its wave-like properties may be seen in a double-slit experiment involving narrow slits.

These approximations are quite simple to understand and apply, and they give good agreement with empirical results. However, things are not always so straightforward, especially when there are two limiting processes which have opposite effects.

Analysis gives us the tools to study how these competing limiting processes interact with each other. I won’t discuss their resolution in detail, but I will give a few examples below.

Division by zero

Consider the function f(x) = a/x where a is some fixed positive real number. This function is defined for x \neq 0, and it is positive whenever x is positive. When x is positive and very small, f(x) is very large, since x appears in the denominator. In fact, as x gets closer and closer to 0 from above, f(x) becomes unbounded. We therefore say that the limit of f(x) as x approaches 0 from above is infinite: we can write this as

 \displaystyle\lim_{x\rightarrow 0^+} f(x) = \infty.

Note that we can talk about this limit even though f(0) itself is not actually defined. The arrow symbol indicates that x is tending to 0, rather than actually being set equal to 0.

But now consider the function g(x) = (x^2 + x) / x , again defined for x \neq 0, since when x = 0 the denominator is zero and division by zero is undefined. This function is also positive whenever x is positive, but it behaves very differently under the limit x \rightarrow 0. In this limit, the numerator also goes to zero. Now 0/0 is undefined, but note that

 g(x) = \displaystyle\frac{x (x + 1)}{x} = x + 1.

Therefore,

 \displaystyle\lim_{x\rightarrow 0} g(x) = 1.

We can also say that g(x) converges to 1 as x tends to 0.
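
As a quick sanity check, we can evaluate g numerically at points closer and closer to 0. This little Python sketch is my own illustration, not part of the mathematics itself:

```python
# Evaluate g(x) = (x^2 + x) / x at points approaching 0 from above.
def g(x):
    return (x**2 + x) / x

for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"g({x}) = {g(x)}")
# The printed values (1.1, 1.01, 1.001, ...) tend to 1, up to
# floating-point rounding, even though g(0) itself is undefined.
```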

So why not simply define 0/0 as 1? This might seem sensible given that x/x = 1 for all nonzero values of x, but have a think about the similar function h(x) = (x^2 + 3x)/ x. Again, both numerator and denominator go to zero as x\rightarrow 0, but the limit of the fraction is 3, not 1.
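
The same cancellation as before shows why:

 \displaystyle h(x) = \frac{x(x + 3)}{x} = x + 3, \qquad \lim_{x\rightarrow 0} h(x) = 3.

Since different expressions of the form 0/0 can tend to different values, there is no single sensible value to give to 0/0; limits like these are known as indeterminate forms.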

Infinite sums

You may be familiar with Zeno’s paradoxes. In order to run 100 metres, one must first run 50 metres, then 25 metres, then 12.5 metres, and so on. That is, one must complete infinitely many tasks, each of which requires a non-zero amount of time. How can this be possible?

The metaphysical implications of this and related paradoxes are still being debated, some 2,400 years after Zeno. Mathematically, one has to argue that

 \displaystyle \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.
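
In modern language, this is made precise by looking at partial sums. Adding the first n terms gives

 \displaystyle S_n = \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n} = 1 - \frac{1}{2^n},

and 1 - 1/2^n can be made as close to 1 as we like by taking n large enough. This is exactly what it means to say that the infinite series converges to 1.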

While Zeno’s halving argument, which was known to the ancients, is a good illustration of why the geometric series above should equal 1, it doesn’t help us understand other sums, which can behave in rather different ways. For example, the ancients also knew that the harmonic series

 \displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots

is divergent: that is, it cannot be said to take on any particular value. This is because the answer that we get after adding n terms together grows without bound as we increase n. However, the Basel series

 \displaystyle 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots

turns out to be convergent; the series takes the value \pi^2 / 6.
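
The divergence of the harmonic series can be proved with a classical grouping argument, due to the medieval scholar Nicole Oresme:

 \displaystyle 1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\right) + \cdots \geq 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots

Each bracketed group adds up to at least 1/2, and there are infinitely many groups, so the partial sums grow past any bound.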

In each of these series, as we add on more and more terms, there are two limiting processes at work. The number of terms is tending towards infinity, but the size of each term is tending towards zero. Whether or not a series converges depends on the rate at which the terms converge to zero. The terms in the Basel series drop off faster than the ones in the harmonic series (since the denominators have squares), allowing the Basel series to converge.
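
A quick numerical experiment makes this difference visible. The following Python sketch (my own illustration; the cut-offs n are chosen arbitrarily) prints partial sums of both series:

```python
import math

# Partial sums of the harmonic series (1/k) and the Basel series (1/k^2).
for n in [10, 1000, 100000]:
    harmonic = sum(1 / k for k in range(1, n + 1))
    basel = sum(1 / k**2 for k in range(1, n + 1))
    print(f"n = {n:>6}: harmonic = {harmonic:.4f}, basel = {basel:.6f}")

# The harmonic partial sums keep creeping upwards (roughly like log n),
# while the Basel partial sums settle towards pi^2 / 6:
print(f"pi^2 / 6 = {math.pi ** 2 / 6:.6f}")
```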

The precise conditions needed for a series to converge, as well as methods for calculating or estimating the values of convergent series, are studied in analysis. An understanding of series is useful not just for pure mathematics, but also in many fields of theoretical physics, including quantum mechanics.

‘Ghosts of departed quantities’

Newton discovered calculus in the late 1600s, giving us the notion of an integral as the accumulated change to a quantity, given the small changes made over time. For example, a particle may move continuously with a certain time-dependent velocity. At each instant, the current velocity causes some displacement; the net displacement of the particle is the accumulation of these little displacements.

The traditional definition of the integral is as follows (although Newton himself would not have used this language or notation). The area under the curve y = f(x) between x = a and x = b is denoted by I = \int_a^b f(x)\, \mathrm{d}x. To evaluate this area, one divides it into a number of rectangles of width \Delta x, plus small ‘errors’ for the bits of the area that do not fit into the rectangles:

 I = f(a) \Delta x + f(a+\Delta x) \Delta x + \cdots + f(b-\Delta x) \Delta x + \text{errors}
 I = \displaystyle \sum_i f(x_i) \Delta x + \text{errors}, \qquad x_i = a + i\,\Delta x.

One then makes the rectangles narrower and narrower, taking more and more of them. It is then argued that, as \Delta x gets smaller, the errors will vanish, and the sum approaches the true value of the integral. The symbol \mathrm{d}x represents an ‘infinitesimal narrowness’; the integral symbol \int is an elongated ‘S’, showing the link between integration and summation.
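
To make this concrete, here is a small Python sketch (my own illustration, with f, a and b chosen arbitrarily) that computes the rectangle sum for ever narrower rectangles; the answers approach the exact area:

```python
# Left-endpoint rectangle (Riemann) sum for f(x) = x^2 on [0, 1].
# The exact area under this curve is 1/3.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n                  # width of each rectangle
    return sum(f(a + i * dx) * dx for i in range(n))

def f(x):
    return x**2

for n in [10, 100, 1000, 10000]:
    print(f"n = {n:>5}: sum = {riemann_sum(f, 0.0, 1.0, n):.6f}")
# As n grows (so dx shrinks), the printed sums tend to 1/3 = 0.333333...
```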

Despite giving correct mathematical answers, Newton’s calculus was attacked, both on its metaphysical foundations (as with Zeno’s paradox) and on the idea of the errors becoming small: Bishop Berkeley famously dismissed vanishing increments as the ‘ghosts of departed quantities’ of this section’s title. For any nonzero width \Delta x, these errors are present. Surely, then, when the width is taken to be infinitesimal but nonzero, the errors would also be infinitesimal and nonzero?

It turns out that terms such as ‘infinitesimal’ are difficult to use: in the system of real numbers, there is no such thing as an ‘infinitesimal’ number. A more rigorous definition of the integral was given by Riemann almost 200 years after Newton’s calculus. This definition will be studied in a first course in analysis.

Stability theory

Often it is not possible to solve a problem exactly, and it is necessary to make approximations that hold in certain limits. As the mechanical examples above showed, such approximations can be very useful in simplifying a problem, stripping away unnecessary details. However, it is sometimes important to consider what the effect of those details may be – we should be sure that any such effects are negligible compared to the effect that we care about.

Stability theory is the study of how the answer to a problem changes when a small change, or perturbation, is made to the problem. A ball on a sloped surface tends to roll downwards. If the ball is sitting at the bottom of a valley, then a small displacement may move it slightly uphill, but gravity will then act to restore the ball to its original position. This system is said to be stable. On the other hand, a ball balanced at the top of a hill will, if knocked, roll away from the hill and not return; this is an example of an instability.
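
The two situations can be caricatured by the differential equation \dot{x} = \lambda x for a small perturbation x, where \lambda < 0 corresponds to the valley (perturbations decay) and \lambda > 0 to the hilltop (perturbations grow). Here is a minimal Python sketch of this linearised picture; the equation and step size are my own illustrative choices:

```python
# Linearised perturbation dynamics: dx/dt = lam * x.
# lam < 0: stable (valley); lam > 0: unstable (hilltop).
def evolve(lam, x0=0.01, dt=0.01, t_end=5.0):
    x, t = x0, 0.0
    while t < t_end:
        x += lam * x * dt        # simple forward-Euler step
        t += dt
    return x

print("valley  (lam = -1):", evolve(-1.0))  # decays towards 0 (stable)
print("hilltop (lam = +1):", evolve(+1.0))  # grows roughly like x0 * e^5 (unstable)
```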

While this is a trivial example, more complicated instabilities are responsible for many types of pattern formation, such as billows in the clouds, the formation of sand dunes from a seemingly flat desert, and the formation of spots or stripes on leopards and tigers. In biological systems, departures from homeostasis or from a population equilibrium may be mathematically modelled as instabilities. It is important to understand these instabilities, as they can lead respectively to disease or extinction.

Analysis provides the language needed to make the statements above precise, as well as methods for determining whether a system of differential equations (for example, one governing a mechanical or biological system) has stable or unstable behaviour. A particularly important subfield is chaos theory, which considers equations that are highly sensitive to changes in their initial conditions, such as those governing weather systems.
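
As a taste of that sensitivity, consider the logistic map x_{n+1} = r x_n (1 - x_n) with r = 4, a standard chaotic example (my choice of illustration, not from the discussion above). Two nearby starting points separate rapidly:

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x * (1 - x), with r = 4 (a standard chaotic regime).
def iterate(x, n, r=4.0):
    for _ in range(n):
        x = r * x * (1 - x)
    return x

a, b = 0.2, 0.2 + 1e-9            # two almost identical starting points
for n in [10, 20, 30, 40]:
    print(f"n = {n}: |difference| = {abs(iterate(a, n) - iterate(b, n)):.3e}")
# The tiny initial difference of 1e-9 roughly doubles at each step, until
# the two trajectories are completely uncorrelated.
```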

Summary

Infinite or limiting processes, such as series and integrals, can have behaviours that seem mysterious. A first course in analysis will define concepts such as convergence, differentiation and (Riemann) integration in a rigorous way. However, before that is possible, one must look at more basic facts about the system of real numbers, and indeed give a proper definition of this system: it is not enough to think of real numbers simply as infinitely long strings of decimal digits.

Having placed all this on a firm footing, it is then possible to answer more fundamental questions about calculus, such as ‘Why do differential equations have solutions?’ or ‘Why does the Newton–Raphson method work?’. It also allows us to use approximations more carefully, and stability theory helps us to decide whether the error introduced by an approximation will dramatically change the results of calculations.
