Theorem: Let f: [a,b] -> R be continuous on [a,b] and differentiable on (a,b). Further suppose that f(a) = f(b) = 0. Then there exists c in (a,b) such that the derivative f'(c) = 0.

Here's a diagram to illustrate the theorem.

|     .-.
|    /   \         
|   /     \
---a---c---b-------
| 
|

Theorem (Rolle): Given f: [a,b] -> R continuous on [a,b] with f(a) = f(b) and differentiable on (a,b), there exists c in (a,b) with f'(c) = 0.

Help! He's talking in tongues!

This is one of those very important pieces of mathematics that seems blindingly obvious until you think about why it's true. Then you have to think for a very long time before it becomes obvious again. What it says is: if you have a smooth curve with endpoints at the same height, at some point it has to be flat and level. If you spend a few minutes drawing squiggly lines on a bit of paper, you'll see this is obviously true. However, pure mathematicians are pedantic little gits, and so we have to prove things like this.

OK, Mr Smartypants, how do we do that?

Well, first we have to look at what we mean by a "smooth curve". The mathematical term is a differentiable function. This means that if you zoom in enough, the curve looks more and more like a straight line, or in mathspeak: "The limit of (f(x+h) - f(x))/h exists and is finite as h tends towards zero. We call this limit f'(x)". That quantity f'(x) is important. It's the derivative of the function and corresponds to the slope or gradient of the curve. When f'(x) > 0, the curve is going up; when f'(x) < 0, the curve is going down. Most importantly, if f'(c) = 0, the curve is flat.
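
If you want to watch that limit settle down, here's a quick numerical sketch in Python. The function f(x) = x*x and the point x = 3 are my own illustrative choices, nothing more:

# Difference quotients for f(x) = x**2 at x = 3 (the true slope there is 6).
def f(x):
    return x * x

x = 3.0
for h in [0.1, 0.01, 0.001, 0.0001]:
    slope = (f(x + h) - f(x)) / h   # the quantity in the definition above
    print(f"h = {h}: slope = {slope:.6f}")
# As h shrinks, the printed slopes creep towards 6, which is f'(3).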

Now we assume that f(x) isn't zero everywhere. If it were, there wouldn't be much to prove. (And if f(a) = f(b) isn't zero, just work with f(x) - f(a) instead; it has the same derivative and is zero at both ends.) Because a continuous function on a closed, bounded interval is bounded and attains its bounds, f has both a maximum and a minimum. We know that at least one of these is non-zero. Let's assume it's the maximum (the proof is almost exactly the same for the minimum and is left as an exercise for the reader). Call the point where this maximum is attained c.
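
For the sceptical, here's a toy Python sketch of "find the point where the maximum is attained". The example function f(x) = sin(pi*x) on [0, 1], with f(0) = f(1) = 0, is my own choice, and a grid search is an illustration rather than a proof:

import math

# An example function with f(0) = f(1) = 0: f(x) = sin(pi*x).
def f(x):
    return math.sin(math.pi * x)

# Crude grid search for the point where the maximum is attained.
a, b, n = 0.0, 1.0, 100000
c = max((a + (b - a) * i / n for i in range(n + 1)), key=f)
print(c)   # prints 0.5: the top of the hump, strictly inside (a, b)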

Let's look at the value of f'(c). We've defined the derivative as the slope of a line between the point in question and a point on the curve which is infinitesimally close to it. Obviously, if f(c) is the maximum, then this line must slope up on the left-hand side:

(f(c + h) - f(c)) / h ≥ 0 for all h < 0

Also, the line must slope down on the right-hand side:

(f(c + h) - f(c)) / h ≤ 0 for all h > 0

Therefore, as the points get closer and closer together (h tends towards 0), the quotient (f(c + h) - f(c))/h is squeezed: coming from the left it stays ≥ 0, and coming from the right it stays ≤ 0. Since f is differentiable at c, both sides must tend to the same limit f'(c), and the only number that is both ≥ 0 and ≤ 0 is zero. So f'(c) = 0, and you're done.
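
You can watch the squeeze happen numerically as well. Sticking with the made-up example f(x) = sin(pi*x), whose maximum on [0, 1] sits at c = 0.5:

import math

def f(x):
    return math.sin(math.pi * x)

c = 0.5   # where the maximum of f on [0, 1] is attained
for h in [0.1, 0.01, 0.001]:
    from_left = (f(c - h) - f(c)) / (-h)   # the h < 0 side: always >= 0
    from_right = (f(c + h) - f(c)) / h     # the h > 0 side: always <= 0
    print(f"h = {h}: left = {from_left:.6f}, right = {from_right:.6f}")
# Both columns are squeezed towards 0, which is exactly f'(c).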

Well woop-di-doo, why would I want to do that?

The main point is that a fact like this is so obvious that it gets used in the proofs of lots of other not-so-obvious things (the Mean Value Theorem springs to mind). And these get used in the proofs of things which are not obvious at all (most of calculus). However, without proving the basic foundations, we can't be sure that everything we're doing is totally correct. There could be an obscure exception to a rule we thought was self-evident. So we start at the very simplest things and work up. We try to base our knowledge on a very small set of assumptions. Working out the rest is what the subject of analysis is about.


This has been part of the Maths for the masses project
