A technique for analysing sequences: from a given sequence f(n), a new sequence df(n) is formed, consisting of the differences of successive terms of f(n). This process is iterated, and if the resulting sequence is eventually constant, the original sequence can be reconstructed from the finite differences evaluated at some fixed n.

For instance, the perfect cubes have the following finite difference table:

0     1     8     27    64    125   216   343   512   729   1000 . . .
   1     7     19    37    61    91    127   169   217   271 . . .
      6     12    18    24    30    36    42    48    54 . . .
         6     6     6     6     6     6     6     6 . . .

The cubes could be reconstructed from the numbers (0,1,6,6) using only addition.
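
In code, both directions of the process look like this (a minimal Python sketch; the function names are mine, not standard):

    def difference_table(seq):
        # Repeatedly difference successive terms until a constant row appears.
        rows = [list(seq)]
        while len(set(rows[-1])) > 1:
            prev = rows[-1]
            rows.append([b - a for a, b in zip(prev, prev[1:])])
        return rows

    def reconstruct(leading, n):
        # Rebuild n terms of the original sequence from the leading entry of
        # each difference row -- (0, 1, 6, 6) for the cubes -- using only
        # addition.
        col = list(leading)
        out = []
        for _ in range(n):
            out.append(col[0])
            for i in range(len(col) - 1):
                col[i] += col[i + 1]    # each entry absorbs the one below it
        return out

    cubes = [k**3 for k in range(8)]
    print([row[0] for row in difference_table(cubes)])  # [0, 1, 6, 6]
    print(reconstruct([0, 1, 6, 6], 8))    # [0, 1, 8, 27, 64, 125, 216, 343]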

To calculate derivatives by finite differences is both simple and subtle, like much of numerical analysis: simple, because it can be done with an AP Calculus knowledge of Taylor series; subtle, because doing it well takes some thought beyond mechanical manipulation.

Here's an example.

Suppose you have data for y(x) at a bunch of x's, and you want to find y'(x). Begin by writing out a Taylor series around x:

y(x+h) = y(x) + y'(x) h + ...

Solving for y'(x), you quickly get

y'(x) = ( y(x+h) - y(x) ) / h

This is known as forward differencing. Easy enough, right? But wait. What you actually did was

y(x+h) = y(x) + y'(x) h + O(h^2)
y'(x) = ( y(x+h) - y(x) ) / h + O(h)

where O(h) means you left out terms of order h. So using this formula to calculate y' will give you an error that scales like h--to halve your error, you have to halve h.
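
Here's a quick numerical check of that scaling, as a minimal Python sketch (sin is just a convenient test function with a known derivative; nothing about the method depends on it):

    import math

    def forward_diff(y, x, h):
        # Forward difference: ( y(x+h) - y(x) ) / h, error O(h).
        return (y(x + h) - y(x)) / h

    x = 1.0
    exact = math.cos(x)                 # the true derivative of sin at x
    for h in (0.1, 0.05, 0.025):
        err = abs(forward_diff(math.sin, x, h) - exact)
        print(f"h = {h:5.3f}   error = {err:.6f}")

Each halving of h roughly halves the error, just as the O(h) term predicts.

It turns out it's possible to do much better. Instead of stopping at h, go to h^2.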

y(x+h) = y(x) + y'(x) h + y''(x) h^2/2 + O(h^3)
y(x-h) = y(x) - y'(x) h + y''(x) h^2/2 + O(h^3)

If you subtract the second equation from the first, what's left is

y(x+h) - y(x-h) = 2 y'(x) h + O(h^3)

Solving for y' gives

y'(x) = ( y(x+h) - y(x-h) + O(h^3) ) / 2h = ( y(x+h) - y(x-h) ) / 2h + O(h^2)

This is known as central differencing. Notice that the error is now O(h^2)--the O(h) errors cancelled each other out! Now, halving your step size will reduce the error by a factor of four.
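
To see the payoff concretely, here's the same check with both formulas side by side (again a minimal sketch, same arbitrary test function):

    import math

    def central_diff(y, x, h):
        # Central difference: ( y(x+h) - y(x-h) ) / 2h, error O(h^2).
        return (y(x + h) - y(x - h)) / (2 * h)

    x = 1.0
    exact = math.cos(x)
    for h in (0.1, 0.05, 0.025):
        fwd = abs((math.sin(x + h) - math.sin(x)) / h - exact)
        cen = abs(central_diff(math.sin, x, h) - exact)
        print(f"h = {h:5.3f}   forward = {fwd:.2e}   central = {cen:.2e}")

Halving h halves the forward error but quarters the central error. (One of the promised subtleties: in floating point, h can't be made arbitrarily small either, since the subtraction y(x+h) - y(x-h) eventually loses its significant digits to roundoff.)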

This basic example of increasing the accuracy of a numerical derivative mirrors to some extent the ideas behind more sophisticated algorithms such as the Runge-Kutta scheme for numerical integration of ODEs. By clever cancellation, error can be decreased by orders of magnitude.
