Setting up a simple — and cheap — home backup system

TL;DR: Hardware: USB hard drive attached to a Raspberry Pi. Software: Unison (with some caveats). Internet connectivity: No-IP (for now).

I decided to set up a home backup system. The system should:

  • be accessible across the Internet, not just from home;
  • be suitable for multiple users (so that, for example, my housemate can use it as well);
  • be suitable for syncing between multiple computers, flagging up conflicts;
  • be cross-platform, especially on Linux machines but also Windows and OS X;
  • be ‘self-sufficient’, relying as little as possible on third parties, and being as open and standards-compliant as possible.

There are a lot of hard drives marketed as ‘home cloud’ solutions, which offer the first three points but not the last two. All the ones that I could find make you use their proprietary software, and given my tribulations with Time Machine, my distrust in black-box systems is once again where it should be. This software usually does not support Linux, and sometimes does not even support OS X.

At the other extreme, plenty of people on Linux forums and sites such as Stack Overflow suggested simply using rsync, which is a bit too bare: in particular, it doesn’t detect or resolve conflicts, it just overwrites stuff.

I eventually decided to construct my own system, using a 4 TB USB hard drive and a Raspberry Pi. The total cost, including peripherals for the Pi, was around £160, although I plan to install further drives, including an SSD. The power consumption is under 10 W, which works out as about £1/month, depending on electricity prices.

Here’s how I set it up.

Preparing the system

The Raspberry Pi mostly worked out of the box (although I had some difficulty following the instructions, and couldn’t work out how to turn it on!). I attached the peripherals and plugged it into a TV using an HDMI cable. I changed the default account’s password, changed the hostname (calling it resilience), set up an account for myself and one for my housemate, and turned on SSH. The Wi-Fi adaptor that came with it doesn’t seem to work (and I can’t figure out why), so I connected the Pi to the router via Ethernet. This required moving the Pi away from the TV, which is why I set up SSH first.

I prepared (a partition on) the drive by formatting it as ext4, and connected it to the Pi. I edited /etc/fstab so that the drive would be automatically mounted on startup. This particular drive is called asclepius and is mounted on /media/asclepius. I created a subdirectory for each user, making sure to set ownership and group-ownership as necessary.
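For reference, the line that I added to /etc/fstab looks something like this (the UUID here is a placeholder; the real one can be found by running blkid):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/asclepius ext4 defaults,nofail 0 2

The nofail option means that the Pi will still boot even if the drive happens to be unplugged.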

The drive needs to have its own power supply, either built-in or by using a USB hub. The Pi by itself is unable to supply the required power through a USB connection.

Synchronisation software

Unison seems to provide many of the features that I need, a reasonably friendly interface (although I haven’t tried the GUI, or used it on Windows), and good documentation.
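As an illustration, syncing a documents directory against resilience comes down to a command along these lines (the paths are just examples):

$ unison ~/documents ssh://resilience//media/asclepius/yourname/documents

Unison compares the two replicas, propagates changes in both directions, and asks what to do when the same file has changed on both sides.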

When installing Unison, one has to note that different versions are not cross-compatible. The Pi’s Raspbian repositories, as well as the machines at DAMTP, currently offer version 2.40.102. For my laptop, the Ubuntu repositories offer a later one, so I had to build version 2.40.102 myself. This wasn’t too difficult.
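From memory, the build amounted to something like the following, assuming OCaml is already installed (check the Unison documentation for the exact steps and download location):

$ tar xzf unison-2.40.102.tar.gz
$ cd unison-2.40.102
$ make UISTYLE=text
$ cp unison ~/bin/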

Addendum (7 November 2017): It turns out that the same version of Unison may be incompatible with itself if the two copies were compiled with different versions of OCaml, owing to a change in OCaml’s serialisation format between versions 4.01 and 4.02 (and there’s no guarantee that further changes won’t happen). I therefore decided to also build ocamlopt from source instead of relying on the repositories, going with version 4.02.

I’m not sure what got updated in the Ubuntu repositories to break my setup, but Homebrew users seem to have run into similar problems in the past. Annoyingly, because the serialisation formats differ, my reinstall of Unison has wiped the synchroniser state, so the lengthy process of initial state detection must be repeated.

Therefore, let this be a lesson: when creating a new protocol, define your own file format properly, rather than relying on a third party’s serialisation format.

Connecting to the outside world

In order to make resilience accessible from the outside world, I had to configure our router to forward incoming SSH connections on port 22 to resilience.

Like most residential connections, ours has a dynamic IP address that changes from time to time. To get around this, I am temporarily using the free No-IP service, although we should contact our ISP to request a static IP address.
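To avoid typing the full No-IP hostname every time, each client can be given an entry in ~/.ssh/config along these lines (the hostname below is a made-up example):

Host resilience-remote
    HostName example-home.ddns.net
    User yourname

With this in place, ssh resilience-remote (and hence a Unison root of the form ssh://resilience-remote/...) works from outside the home network too.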

The connection to the outside world is quite slow, since it is going through a residential connection. The connection is sometimes unreliable and might break once every few hours, but Unison apparently handles interruptions well.

Further notes

Unison has a number of features for customisation, which I haven’t fully explored yet.

It might be useful, especially if I have more than two users on the system, to set up quotas on the disc. Alternatively, each person could supply their own disc.

Calculus Beyond College: Analysis and modelling

This post is motivated by a number of discussions that I had at Cambridge open days last week, when I talked to school students who were interested in doing maths at university, and who may have come across the unfamiliar term ‘analysis’ when looking at our course syllabus.

Mathematical analysis is a very large area, but, broadly speaking, it is the study of limiting processes and of approximations. The basic concept of a limiting process is that, as you make a certain parameter of a problem smaller and smaller (or larger and larger), an answer that depends on that parameter will tend towards a certain value. This idea underlies a lot of the assumptions that we make when we model the real world:

  • All materials are deformable to some extent, but we may assume that a rod or surface is perfectly rigid if the stiffness of its material is sufficiently high, so that any deformations are negligible, compared to the lengthscales of interest in the problem. When considering a block sliding down a slope, we do not care about deformations on a nanometre scale.
  • We might ignore air resistance when studying the motion of a projectile. This approximation works provided that the projectile’s inertia and its weight (due to gravity) dominate the effects of air resistance. Air resistance scales with surface area while weight scales with volume, so the approximation holds when the ratio of air resistance to weight is small: for example, for a dense, compact projectile moving at moderate speed.
  • While the wave-like properties of light are more fundamental (they are directly governed by Maxwell’s equations), its particle-like properties come from the limit of the wavelength (about 700 nm for red light) being much smaller than the other lengthscales of interest. This is why a laser beam acts much like a particle when shone across a room (it is localised, and can reflect cleanly off surfaces), while its wave-like properties may be seen in a double-slit experiment involving narrow slits.

These approximations are quite simple to understand and apply, and they give good agreement with empirical results. However, things are not always so straightforward, especially when there are two limiting processes which have opposite effects.

Analysis gives us the tools to study how these competing limiting processes interact with each other. I won’t discuss their resolution in detail, but I will give a few examples below.

Division by zero

Consider the function f(x) = a/x where a is some fixed positive real number. This function is defined for x \neq 0, and it is positive whenever x is positive. When x is positive and very small, f(x) is very large, since x appears in the denominator. In fact, as x gets closer and closer to 0 through positive values, f(x) becomes unbounded. We therefore say that the limit of f(x) as x approaches 0 is infinite: we can write this as

 \displaystyle\lim_{x\rightarrow 0} f(x) = \infty.

Note that we can talk about this limit even though f(0) itself is not actually defined. We talk about the limit of x going to 0, rather than actually setting x = 0, by using the arrow symbol.

But now consider the function g(x) = (x^2 + x) / x , again defined for x \neq 0, since when x = 0 the denominator is zero and division by zero is undefined. This function is also positive whenever x is positive, but it behaves very differently under the limit x \rightarrow 0. In this limit, the numerator also goes to zero. Now 0/0 is undefined, but note that

 g(x) = \displaystyle\frac{x (x + 1)}{x} = x + 1.

Therefore,

 \displaystyle\lim_{x\rightarrow 0} g(x) = 1.

We can also say that g(x) converges to 1 as x tends to 0.

So why not simply define 0/0 as 1? This might seem sensible given that x/x = 1 for all nonzero values of x, but have a think about the similar function h(x) = (x^2 + 3x)/x. Again, both numerator and denominator go to zero as x\rightarrow 0, but simplifying in the same way gives h(x) = x + 3, so the limit of the fraction is 3, not 1.

Infinite sums

You may be familiar with Zeno’s paradoxes. In order to run 100 metres, one must first run 50 metres, then 25 metres, then 12.5 metres, and so on. That is, one must complete infinitely many tasks, each one of which requires a non-zero amount of time. How can this be possible?

The metaphysical implications of this and related paradoxes are still being debated, some 2,400 years after Zeno. Mathematically, one has to argue that

 \displaystyle \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.
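
One way to make this statement precise is through partial sums: after adding n terms, we have

 \displaystyle \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n} = 1 - \frac{1}{2^n},

and the right-hand side can be brought as close to 1 as we like by taking n large enough. This is what it means to say that the infinite sum equals 1.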

While the argument given above, which was known to the ancients, is a good illustration of why the geometric series should equal 1, it doesn’t help us understand other sums, which can behave in rather different ways. For example, the ancients also knew that the harmonic series

 \displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots

is divergent: that is, it cannot be said to take on any particular value. This is because the answer that we get after adding n terms together keeps growing as we increase n. However, the Basel series

 \displaystyle 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots

turns out to be convergent; the series takes the value \pi^2 / 6.

In each of these series, as we add on more and more terms, there are two limiting processes at work. The number of terms is tending towards infinity, but the size of each term is tending towards zero. Whether or not a series converges depends on the rate at which the terms converge to zero. The terms in the Basel series drop off faster than the ones in the harmonic series (since the denominators have squares), allowing the Basel series to converge.
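Indeed, the divergence of the harmonic series can be seen by grouping its terms:

 \displaystyle 1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \cdots + \frac{1}{8}\right) + \cdots \geq 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots,

since each bracket contains terms adding up to at least 1/2, so the partial sums grow without bound.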

The precise conditions needed for a series to converge, as well as methods for calculating or estimating the values of convergent series, are studied in analysis. An understanding of series is useful not just for pure mathematics, but also in many fields of theoretical physics, including quantum mechanics.

‘Ghosts of departed quantities’

Newton discovered calculus in the late 1600s, giving us the notion of an integral as the accumulated change to a quantity, given the small changes made over time. For example, a particle may move continuously with a certain time-dependent velocity. At each instant, the current velocity causes some displacement; the net displacement of the particle is the accumulation of these little displacements.

The traditional definition of the integral is as follows (although Newton himself would not have used the following language or notation). The area under the curve y = f(x) between x = a and x = b is to be denoted as I = \int_a^b f(x) \mathrm{d} x . To evaluate this area, one divides it into a number of rectangles of width \Delta x, plus small ‘errors’ for the bits of the area that do not fit into the rectangles:

 I = f(a) \Delta x + f(a+\Delta x) \Delta x + \cdots + f(b-\Delta x) \Delta x + \text{errors}
 I = \displaystyle \sum f(x_i) \Delta x + \text{errors}

One then makes the rectangles narrower and narrower, taking more and more of them. It is then argued that, as \Delta x gets smaller, the errors will vanish, and the sum approaches the value of the integral. The symbol \mathrm{d}x represents an ‘infinitesimal narrowness’; the integral symbol \int is an elongated ‘S’, showing the link between integration and summation.

Despite giving correct mathematical answers, Newton’s theories were attacked, both on their metaphysical foundations (as with Zeno’s paradox), and on the idea of the errors becoming small. For any nonzero width \Delta x, these errors are present. Surely, then, when the width is taken to be infinitesimal but nonzero, the errors would also be nonzero and infinitesimal?

It turns out that terms such as ‘infinitesimal’ are difficult to use: in the system of real numbers, there is no such thing as an ‘infinitesimal’ number. A more rigorous definition of the integral was given by Riemann almost 200 years after Newton’s calculus. This definition will be studied in a first course in analysis.

Stability theory

Often it is not possible to solve a problem exactly, and it is necessary to make approximations that hold in certain limits. As the mechanical examples above showed, such approximations can be very useful in simplifying a problem, stripping away unnecessary details. However, it is sometimes important to consider what the effect of those details may be: we should be sure that any such effects are negligible compared to the effect that we care about.

Stability theory is the study of how the answer to a problem changes when a small change, or perturbation, is made to the problem. A ball on a sloped surface tends to roll downwards. If the ball is sitting at the bottom of a valley, then a displacement to the ball may cause it to move slightly uphill, but then gravity will act to restore the ball to its original place. This system is said to be stable. On the other hand, a ball at the top of a hill will, if knocked, roll away from that hill and not return; this is an example of an instability.

While this is a trivial example, more complicated instabilities are responsible for many types of pattern formation, such as billows in the clouds, the formation of sand dunes from a seemingly flat desert, and the formation of spots or stripes on leopards and tigers. In biological systems, departures from homeostasis or from a population equilibrium may be mathematically modelled as instabilities. It is important to understand these instabilities, as they can lead respectively to disease or extinction.

Analysis provides the language needed to make these above statements precise, as well as methods for determining whether a system of differential equations (for example governing a mechanical or biological system) has stable or unstable behaviour. A particularly important subfield is chaos theory, which considers equations that are highly sensitive to changes to their initial conditions, such as weather systems.

Summary

Infinite or limiting processes, such as series and integrals, can have behaviours that seem mysterious. A first course in analysis will define concepts such as convergence, differentiation and (Riemann) integration in a rigorous way. However, before that is possible, one must look at more basic facts about the system of real numbers, and indeed give a proper definition of this system: it is not enough to think of real numbers simply as extended strings of decimals.

Having placed all this on a firm footing, it is then possible to answer more fundamental questions about calculus, such as ‘Why do differential equations have solutions?’ or ‘Why does the Newton–Raphson method work?’. It also allows us to use approximations more carefully, and stability theory helps us to decide whether the error introduced by an approximation will dramatically change the results of calculations.

Mathematical hairstyling: Braid groups

At a recent morning coffee meeting, I was idly playing with my hair when this was noticed by a couple of other people. This led to a discussion of different braiding styles and, because we were mathematicians, a discussion of braid theory. I have since spent a lot of time reading about it. (Nerd-sniped.)

I didn’t know much about braid theory (or indeed group theory) before, but it turned out to be a very rich subject. I remember being introduced to group theory for the first time and finding it very hard to visualise abstract objects like generators, commutators, conjugates or normal subgroups. Braid groups may be a very useful way of introducing these: they can be demonstrated very visually and hands-on.


Not-so-continuum mechanics: A talk for the LMS

I will be speaking at the London Mathematical Society’s Graduate Students’ Meeting tomorrow morning, on Not-so-continuum mechanics: Mathematical modelling of granular flows. It’s meant to be a gentle introduction to granular phenomena, and I will introduce a basic version of the μ(I) rheology, a fairly (but not universally!) successful description of granular flow. Here’s a practice version of the talk.

The talks are meant to be aimed at ‘a general mathematical audience’. My talk assumes no mathematical knowledge beyond A-level Mechanics. I’m having some trouble understanding some of the abstracts for the other talks, and I’m not sure whether my talk is just better aimed at a general audience, or whether theirs are fine and there are simply huge gaps in my mathematical training (which there are: I’ve never done any algebra beyond basic group theory, or any number theory or combinatorics).

Solving ODEs in C

I set my students an exercise: to solve the nonlinear differential equation

\ddot{x} + \sin(x) = A\sin(\omega t).

What follows is an outline of a model solution (but by no means complete, or optimal). I take this opportunity to introduce several other features of C that I haven’t had the chance to talk about in the classes.

This equation is, of course, the (nondimensionalised) equation of motion of a forced pendulum, where x represents the angle that the pendulum makes with the downward vertical. If A = 0 or \omega = 0, then the pendulum is unforced, and executes regular oscillations. The period of these oscillations is approximately equal to 2\pi for small-amplitude oscillations (where the motion is approximately simple harmonic), but increases as the amplitude increases. If forcing is turned on, then a second period—that of the forcing—is introduced. The pendulum may still oscillate with its ‘intrinsic’ period, or, if the forcing is strong enough, the pendulum may start to oscillate with the period of the forcing. If the period of the forcing is close to the intrinsic period of the pendulum, then resonance may occur.

We first write the equation as a system of two equations,

\dot{x} = y \text{ and } \dot{y} = - \sin(x) + A\sin(\omega t)

which is first-order. We then solve this system numerically using a Runge-Kutta method; this is what the program diffsolve does. Its full source code can be downloaded here. The code, as it is written, is quite general; it can solve any system of two ODEs of the form

\dot{x}=f(t,x,y) \text{ and } \dot{y}=g(t,x,y).

Annotation of the code

After the initial comments and the #includes, we first come across a typedef struct:

typedef struct {
    double amp;
    double omega;
} problem_t;

I declare a type, problem_t, which will store the parameters of the problem, amp and omega. I usually wrap all parameters into a struct like this, because it lets me pass a single variable, of type problem_t, to a function, instead of passing each of the individual parameters. It also means that, if I want to change my program to allow more parameters, I can simply change the declaration of problem_t, rather than changing the definition of every function that needs these parameters. This pays off when we work with problems with ten or twenty parameters.

The first function defined makes use of this:

double forcing(double t, problem_t params) 
{
    return params.amp * sin( params.omega * t );
}

This is used by the function rhs_y, which calculates \dot{y} . (You could do away with the function forcing and have the above inside rhs_y, if you prefer.)
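The definition of rhs_y is in the downloadable source; reconstructed from the description above, it will look something like this:

double rhs_y(double t, double x, double y, problem_t params)
{
    /* dy/dt = -sin(x) + A*sin(omega*t); the forcing term is
       computed by the helper function defined earlier */
    return -sin(x) + forcing(t, params);
}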

The function rhs_x, which gives the derivative \dot{x} , appears to require arguments that it does not use:

double rhs_x(double t, double x, double y, problem_t params)
{
    return y;
}

Although the arguments t, x and params are not actually used, I have chosen to include them anyway, because in the future I might want to move from solving \dot{x} = y to something more general, of the form \dot{x} = f(t,x,y) .

Now we come to the main function. The first thing we notice is that the function prototype says

int main(int argc, char* argv[]) 

instead of simply

int main()

We want to be able to run our program without having to recompile it each time we want to change parameters such as the initial conditions. We will type something like

$ ./diffsolve 1 2 100 0.005 1 0

into the command line, in order to specify the different parameters to diffsolve. These are command line arguments, and they are treated by C as arguments to main. The argument argc is the argument count, the number of arguments; and argv is the argument vector, an array of strings. Note that the program’s name, ./diffsolve, is treated as the 0th argument, and so argc is always at least equal to 1.

The arguments are actually read in the following block:

if (argc > 6) 
{
    params.amp   = atof(argv[1]);
    params.omega = atof(argv[2]);
    tmax  = atof(argv[3]);
    dt    = atof(argv[4]);
    x     = atof(argv[5]);
    y     = atof(argv[6]);
}
else
{
    fprintf(stderr, "Usage: %s amp omega tmax dt x0 y0\n", argv[0]);
    exit(-1);
}

If the user has specified at least seven arguments (where the 0th argument is the name of the program), then these arguments are read in. The atof function converts a string to a double (despite its name, which suggests ‘float’).

If the user has not specified enough arguments, then the program prints an error message which tells the user how to use it, and exits with a return code of -1; by convention, any nonzero return code indicates an error. (The exit function is provided in stdlib.h.) The error message is printed using fprintf, which stands for ‘printf to a file’, or ‘print formatted text to a file’. Here the ‘file’ is stderr, the standard error stream. In contrast, the function printf writes to the standard output stream, represented by stdout. By default, both stdout and stderr write to the screen, but the two streams may be separated from each other. It is a good idea to use stderr for errors, warnings or diagnostic messages, and stdout for actual output that you want to process.
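For example, when running the program you can send the two streams to different places:

$ ./diffsolve 1 2 100 0.005 1 0 > out.txt 2> log.txt

Here the solution goes into out.txt, while any error or diagnostic messages go into log.txt.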

The rest of the program is fairly straightforward: the values of t, x and y are updated in a while loop, and at each time step their values are printed (to standard output).
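To give an idea of what that loop involves, here is a sketch of a single step of the classical fourth-order Runge-Kutta scheme for this system. This is an illustration of the method rather than the exact code in diffsolve:

void rk4_step(double t, double *x, double *y, double dt, problem_t params)
{
    /* slopes at the start of the step */
    double k1x = rhs_x(t, *x, *y, params);
    double k1y = rhs_y(t, *x, *y, params);

    /* slopes at two midpoint estimates */
    double k2x = rhs_x(t + 0.5*dt, *x + 0.5*dt*k1x, *y + 0.5*dt*k1y, params);
    double k2y = rhs_y(t + 0.5*dt, *x + 0.5*dt*k1x, *y + 0.5*dt*k1y, params);
    double k3x = rhs_x(t + 0.5*dt, *x + 0.5*dt*k2x, *y + 0.5*dt*k2y, params);
    double k3y = rhs_y(t + 0.5*dt, *x + 0.5*dt*k2x, *y + 0.5*dt*k2y, params);

    /* slopes at the end of the step */
    double k4x = rhs_x(t + dt, *x + dt*k3x, *y + dt*k3y, params);
    double k4y = rhs_y(t + dt, *x + dt*k3x, *y + dt*k3y, params);

    /* weighted average of the four slopes */
    *x += dt * (k1x + 2.0*k2x + 2.0*k3x + k4x) / 6.0;
    *y += dt * (k1y + 2.0*k2y + 2.0*k3y + k4y) / 6.0;
}

The while loop calls a step function like this repeatedly, incrementing t by dt and printing the new values after each step.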

Usage (on Unix-like systems)

After compiling, the command

$ ./diffsolve 1 2 100 0.005 1 0

will print plenty of lines, giving the solution of the system at each timestep.

You probably want to save this information to another file, which you can do by writing

$ ./diffsolve 1 2 100 0.005 1 0 > out.txt

This will write the results to the file out.txt, which you can then process somewhere else. For example, you can use gnuplot to plot the results:

$ gnuplot
gnuplot> set terminal png
gnuplot> set output "out.png"
gnuplot> plot "out.txt" using 1:2 with lines

This produces the image out.png.

Mathematica modules

In Mathematica, a module is an extended function that may perform several steps. An advantage of a module is that it may declare and define local variables which can be used inside the module but not in the global scope. It is generally good programming practice to keep variables inside as local a scope as possible, because variables in global scope can conflict with each other if you are not careful to keep track of their names.

A module for solving the KdV equation

Below is an example usage of a module. The module solat computes the solution to the KdV equation

u_t + 6uu_x + u_{xxx} = 0

at a desired time tmax, given initial data data. It uses a split-step pseudospectral integrator, whose details, as well as an equivalent code in Matlab, can be found at Wikiwaves. (Note that their Matlab code makes reference to k, the angular wavenumber, whereas this code uses \nu, which is equal to k/(2\pi).)

nx = 128;
xmax = 1;
dx = 1/nx;
xs = Range[0, xmax - dx, dx];
numax = 1/(2*dx);
dnu = 1/xmax;
nus = Join[Range[0, numax - dnu, dnu], Range[-numax, -dnu, dnu]];

u0[x_] := Sin[6 Pi x];
data = u0 /@ xs;

solat[tmax_, data_] := 
 Module[{t = 0, dft = Fourier[data], dt = 0.4/nx^2, dft1 = 0, sol = 0},
  While[t < tmax,
   dt = Min[dt, tmax - t];
   dft1 = dft*Exp[I (2 Pi nus)^3 dt];
   dft = dft1 - 3 I 2 Pi nus dt Fourier[InverseFourier[dft1]^2];
   t += dt;
   ];
  sol = Re[InverseFourier[dft]];
  sol
  ]

We declare the variables t, dft, dt, dft1 and sol as local to the module, and initialise them to sensible values. (Note that dft1 and sol have been initialised to 0; this is completely arbitrary, because these values will be overwritten a few lines later.)

The last line inside the module, sol, simply causes sol to be returned from the module.
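For reference, writing k = 2\pi\nu for the angular wavenumber, the two sub-steps inside the While loop are

 \hat{u} \mapsto \hat{u} \, e^{i k^3 \Delta t} \text{ and } \hat{u} \mapsto \hat{u} - 3 i k \, \Delta t \, \widehat{u^2}.

The first is the exact solution of the linear part u_t + u_{xxx} = 0 in Fourier space; the second is a forward Euler step for the nonlinear part u_t + 6uu_x = 0, using the identity 6uu_x = 3(u^2)_x.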

A module for solving the sine-Gordon equation

We now consider the sine-Gordon equation

\phi_{tt} - \phi_{xx} + \sin\phi = 0.

Like KdV, the sine-Gordon equation is quasilinear, in the sense that no term contains any nonlinear function of a derivative. Moreover, in both equations there is only one nonlinear term and no inhomogeneities; the rest are linear and may be treated easily. From a computational perspective, the most important difference between the two is that the sine-Gordon equation is second-order in time. We first write it as a pair of coupled first-order equations:

\phi_t = \psi \text{ and } \psi_t = \phi_{xx} - \sin\phi.

This system can now be solved using a split-step pseudospectral method, much like the above, except that we must keep track of both dynamical variables \phi and \psi. The derivation of this is a bit messy, but follows the same principle of evolving the linear parts of the equations separately from the nonlinear part. Here is the Mathematica code:

nx = 128;
xmax = 1;
dx = 1/nx;
xs = Range[0, xmax - dx, dx];
numax = 1/(2*dx);
dnu = 1/xmax;
nus = Join[Range[0, numax - dnu, dnu], Range[-numax, -dnu, dnu]];

u0[x_] := Sin[6 Pi x];
data = u0 /@ xs;
sinegordonsol[tmax_, phi0_, psi0_] := Module[
  {t = 0, phift = Fourier[phi0], psift = Fourier[psi0],
   dt = 0.4/nx^2,
   phift1 = 0, psift1 = 0, sol = 0},
  While[t < tmax,
   dt = Min[dt, tmax - t];
   phift1 = phift Cos[dt] + psift Sin[dt];
   psift1 = psift Cos[dt] - phift Sin[dt];
   phift = phift1;
   psift = psift1 - Fourier[Sin[InverseFourier[phift]]] dt;
   t += dt;
   ];
  sol = Re[InverseFourier[phift]];
  sol
  ]

To use it, do something like

sgs = sinegordonsol[1, data, 0*data];
ListLinePlot[Transpose[{xs, sgs}]]

The discrete Fourier transform in Mathematica

The Fourier transform is a very useful and widely used tool in many branches of maths. The continuous Fourier transform, available in Mathematica as FourierTransform, tries to perform the Fourier transform of a symbolic expression. This is useful for solving many problems if the Fourier transform can also be expressed in a nice symbolic form, but such a form is not necessarily available (and indeed the Fourier transform might not exist at all).

In practice, the discrete Fourier transform, which works with a finite set of data points and approximates the continuous Fourier transform, is more important. The DFT is available in Mathematica as Fourier, and the inverse DFT as InverseFourier. Note that Mathematica uses the 1/\sqrt{n} normalisation by default for both the DFT and the IDFT.
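Concretely, with the default FourierParameters, Fourier[{u_1, \ldots, u_n}] returns the list whose entries are

 \displaystyle v_s = \frac{1}{\sqrt{n}} \sum_{r=1}^{n} u_r \, e^{2\pi i (r-1)(s-1)/n},

and InverseFourier uses the same 1/\sqrt{n} factor with the opposite sign in the exponent.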

Example

Here is an example usage of the DFT in Mathematica:

xmax = 1;
nx = 128;
dx = xmax / nx;
xs = Range[0, xmax - dx, dx];
f[x_] := Exp[Sin[6 Pi x]] + Sin[40 Pi x];
data = Map[f, xs];
ListLinePlot[Transpose[{xs, data}]]

dft = Fourier[data];
numax = 1/(2*dx);
dnu = 1/xmax;
nus = Join[Range[0, numax - dnu, dnu], Range[-numax, -dnu, dnu]];
ListPlot[
Transpose[{nus, Abs[dft]}],
PlotRange -> All]

The first block generates a list data which consists of sampled values of the function f at the values xs, which are 128 values between 0 and 1 (not including the endpoint at 1). These sampled points are plotted.

The second block calculates the discrete Fourier transform of data (according to the default normalisation). Then the list nus is constructed: these are the wavenumbers that correspond to each element of dft. (Note that this is the ordinary wavenumber, not the angular wavenumber: the wavenumber of \sin(40\pi x) is 20, while its angular wavenumber is 40\pi. The symbol k is often used for the angular wavenumber; I am using \nu for the wavenumber.) We then plot the absolute values of dft against the wavenumbers nus.

Aliasing

Note how the values of \nu_{\max} and \Delta\nu are related to the values of \Delta x and x_{\max}:

\nu_{\max} = \frac{1}{2\Delta x} \text{ and } \Delta\nu = \frac{1}{x_{\max}}.

What this means is the following:

  • If you sample more densely (so \Delta x is smaller), then you can pick up higher frequencies.
  • If you sample for a longer domain (so x_{\max} is larger), then you can pick up lower frequencies.

The highest frequency that you can pick up, 1/(2\Delta x), is called the Nyquist frequency and is equal to half the sampling rate of the data. If your function in fact oscillates at a higher frequency than this, then your sampled data will not pick up that component properly, and many of your results will be inaccurate. This effect is known as aliasing.
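For example, with the sampling used above (128 points on [0, 1), so a Nyquist frequency of 64), a component \sin(2\pi \cdot 100 x) is indistinguishable at the sample points from -\sin(2\pi \cdot 28 x), since 100 - 128 = -28: its energy shows up at wavenumber 28 rather than 100.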

Applications

The Fourier transform can be used to solve many different problems, and is particularly useful for solving linear partial differential equations. Numerical PDE solvers that make use of the DFT are part of a class of numerical schemes known as spectral methods. For linear problems where the solution is oscillatory or where a ‘conservation law’ (usually analogous to conservation of mass or momentum) applies, spectral methods tend to be better than finite differences. The DFT can also be useful for quasilinear PDEs, using a family of methods known as pseudospectral methods.

Resources for the C course

For the benefit of my students, here are the codes that I have used in the lessons. The contents of that directory will be updated periodically.

A useful set of tutorials can be found at Programiz or at W3schools.

HackerRank provides a number of challenges and exercises for your programming skills, but you may need to become a bit more familiar with the language before being able to complete them.