Calculating derivatives by finite differences is both simple and subtle, like much of numerical analysis. Simple, because it can be done with an AP Calculus knowledge of Taylor series; subtle, because doing it well takes some thought beyond mechanical manipulation.

Here's an example.

Suppose you have data for y(x) at a bunch of x's, and you want to find y'(x). Begin by writing out a Taylor series around x:

y(x+h) = y(x) + y'(x) h + ...

Solving for y'(x), you quickly get

y'(x) = ( y(x+h) - y(x) ) / h

This is known as forward differencing. Easy enough, right? But wait. What you actually did was

y(x+h) = y(x) + y'(x) h + O(h^2)
y'(x) = ( y(x+h) - y(x) ) / h + O(h)

where O(h) means you left out terms of order h. So using this formula to calculate y' will give you an error that scales like h--to halve your error, you have to halve h.
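As a quick sanity check, here is a short Python sketch (the test function sin(x), the point x = 1, and the step sizes are just illustrative choices, nothing special) that compares the forward-difference estimate against the exact derivative as h is halved:

import math

def forward_diff(y, x, h):
    # forward difference: ( y(x+h) - y(x) ) / h
    return (y(x + h) - y(x)) / h

x = 1.0
exact = math.cos(x)  # exact derivative of sin(x) at x
for h in [0.1, 0.05, 0.025, 0.0125]:
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:<7} error = {err:.2e}")

Each time h is halved, the error drops by roughly a factor of two, just as the O(h) term predicts.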

It turns out it's possible to do much better. Instead of stopping at h, go to h^2.

y(x+h) = y(x) + y'(x) h + y''(x) h^2/2 + O(h^3)
y(x-h) = y(x) - y'(x) h + y''(x) h^2/2 + O(h^3)

If you subtract the second equation from the first, what's left is

y(x+h) - y(x-h) = 2 y'(x) h + O(h^3)

Solving for y' gives

y'(x) = ( y(x+h) - y(x-h) + O(h^3) ) / 2h = ( y(x+h) - y(x-h) ) / 2h + O(h^2)

This is known as central differencing. Notice that the error is now h^2--the y''(x) terms responsible for the O(h) error cancelled each other out! Now, halving your step size will reduce the error by a factor of four.
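The same kind of check works for central differencing (again using sin(x) at x = 1 as an arbitrary test case):

import math

def central_diff(y, x, h):
    # central difference: ( y(x+h) - y(x-h) ) / 2h
    return (y(x + h) - y(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)  # exact derivative of sin(x) at x
for h in [0.1, 0.05, 0.025, 0.0125]:
    err = abs(central_diff(math.sin, x, h) - exact)
    print(f"h = {h:<7} error = {err:.2e}")

This time each halving of h cuts the error by roughly a factor of four, consistent with the O(h^2) scaling.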

This basic example of increasing the accuracy of a numerical derivative mirrors to some extent the ideas behind more sophisticated algorithms such as the Runge-Kutta scheme for numerical integration of ODEs. By clever cancellation, error can be decreased by orders of magnitude.