More commonly known as the Newton-Raphson method (at least in the multivariate case, where it likewise converges rapidly in many cases), this method follows the tangent line to the function's curve down to the x axis and uses that intersection as the next point.

See the Newton-Raphson method for more details.

It is inaccurate to suggest (though some people do just that) that we need iterative methods only when we have no exact algebraic formula; indeed, your computer uses the Babylonian square root algorithm, a special case of this method, to calculate the function sqrt.
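
To make that concrete, here is a minimal Python sketch (the function name and tolerance are mine): the Newton step derived below, applied to f(x) = x² - a, simplifies to the Babylonian rule.

    def babylonian_sqrt(a, x=1.0, tol=1e-12):
        # Newton's method on f(x) = x**2 - a:
        # x - f(x)/f'(x) = x - (x*x - a)/(2*x) = (x + a/x)/2
        while abs(x * x - a) > tol:
            x = (x + a / x) / 2.0
        return x

    print(babylonian_sqrt(2.0))  # 1.4142135623730951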

Derivation of Newton's Method:

Given a point xn on the graph of a function f, we can create a line tangent to the curve at this point like so:

g(x) = f(xn) + f'(xn)(x - xn)

The root of g(x) is found like this:

0 = f(xn) + f'(xn)(x - xn)

0 = f(xn) + f'(xn)x - f'(xn)xn

f'(xn)x = f'(xn)xn - f(xn)

    f'(xn)xn - f(xn)
x = ----------------
         f'(xn)

         f(xn)
x = xn - ------
         f'(xn)

Because this root approximates the nearest root of f(x), we can say that

              f(xn)
x{n+1} = xn - ------
              f'(xn)

where the initial value x0 is a number close to the root we want to find. The iterates xn will (hopefully) make progressively better approximations of the actual root of f(x).
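
Here is a minimal Python sketch of the iteration (the names and the stopping rule are mine, not canonical):

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        # Iterate x <- x - f(x)/f'(x) until f(x) is near zero.
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x = x - fx / df(x)  # the update derived above
        return x  # may not have converged

    # Example: the positive root of f(x) = x**2 - 2, starting near 1.
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.4142135623730951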

Newton's method also has a quadratic version, akin to it in the same way that Muller's method is akin to the "regula falsi" method. I derived it myself; I don't know whether it already has a name.

Given a point xn on the graph of function f, we can create a parabola (the second-degree Taylor polynomial) tangent to this point like so:

                               1              2
g(x) = f(xn) + f'(xn)(x-xn) +  - f''(xn)(x-xn)
                               2

The roots of g(x) can be found using the quadratic formula. First, distribute and collect like terms:

    1         2
0 = - f''(xn)x
    2

    + (f'(xn) - f''(xn)xn)x

                         1          2
    + f(xn) - f'(xn)xn + - f''(xn)xn
                         2

And now wantonly plug into the quadratic formula. The intermediate mess is so horrendous that I cannot legibly write it without MathML or some other such, so most of it is an exercise left to the reader; the one step worth recording is that with a = (1/2)f''(xn), b = f'(xn) - f''(xn)xn, and c = f(xn) - f'(xn)xn + (1/2)f''(xn)xn², the xn terms in the discriminant b² - 4ac all cancel, leaving f'(xn)² - 2f(xn)f''(xn). Anyways, it's really fun because it all collapses to look like this, after replacing x with x{n+1} as above:
Method 1
                           ______________________
                          /      2
              f'(xn) -+ \/ f'(xn) - 2f(xn)f''(xn)
x{n+1} = xn - -----------------------------------
                            f''(xn)

This looks remarkably like the quadratic formula! Let's play around with it a bit more, namely distributing the denominator into the radical:
Method 2
                             ______________________
              f'(xn)        /   f'(xn)   2  2f(xn)
x{n+1} = xn - ------- +- \  | ( ------- ) - -------
              f''(xn)     \/    f''(xn)     f''(xn)

Now this looks similar to Newton's Method of old!

Yet the question still remains: which root should we go with? Here's a chart, showing the sign of the radical for each method as a function of the signs of the first two derivatives:

f'(x)    + + - -
f''(x)   + - + -
Method 1 - - + +
Method 2 + - - +

Note that method 1 is not dependent on the sign of f''(x), because the radical is divided by that same quantity, which carries out the sign change automatically.

This method converges about twice as fast as the standard Newton's Method, but takes about twice as much work. Unlike Newton's Method, however, it does converge for cbrt(x) and the like, though somewhat slowly.
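
Here is a minimal Python sketch of method 1 under the sign rule from the chart (the names are mine; it falls back to a plain Newton step when the parabola degenerates or the discriminant goes negative):

    import math

    def newton_quadratic(f, df, d2f, x0, tol=1e-12, max_iter=50):
        # Method 1 above: jump to a root of the tangent parabola,
        # taking the radical's sign as -sign(f'(xn)) per the chart.
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            dfx, d2fx = df(x), d2f(x)
            disc = dfx * dfx - 2.0 * fx * d2fx
            if d2fx == 0.0 or disc < 0.0:
                x = x - fx / dfx  # plain Newton step as a fallback
                continue
            sign = -1.0 if dfx > 0 else 1.0
            x = x - (dfx + sign * math.sqrt(disc)) / d2fx
        return x

    # f(x) = x**2 - 2 is itself a parabola, so one step lands exactly on sqrt(2).
    print(newton_quadratic(lambda x: x*x - 2, lambda x: 2*x, lambda x: 2.0, 1.0))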

It may be important (if you're actually doing something besides your math homework with this iterative method) to know what kind of error you're generating by using such approximations. Taking a slightly different approach from that of PiGuy, look at Newton's method as derived from Taylor Polynomial expansion:

Suppose that f (and its first and second derivatives) are continuous on the interval [a,b]. Let x' (an arbitrary value on the given interval) be a guess of what p (the root of our function f) might be, with the stipulations that f'(x') != 0 and that the difference between x' and the actual root p is "small" (ahh, math terms). The first Taylor polynomial of f(x) about x' is:

f(x) = f(x') + (x-x')f'(x') + ((x-x')²/2!)f''(ξ(x))
where ξ(x) is somewhere between x and x'. Since f(p) = 0 (that's the solution we're looking for, so it is true by definition), evaluating the above equation with x = p gives:
f(p) = 0 = f(x') + (p-x')f'(x') + ((p-x')²/2!)f''(ξ(p))
Newton's Method assumes that (since the difference between x' and p is small) |p-x'| is small and therefore (p-x')² is a lot smaller. (A number less than one, when squared, becomes smaller.) Thus that term is dropped and you end up with:
0 (approximately) = f(x') + (p-x')f'(x')
And you can see the rest of that in PiGuy's writeup.

This WU is concerned with the term that was dropped. It is, in its entirety:

((p-x')²/2!)f''(ξ(p))
"That's not of much use," you might be thinking, "You don't know what ξ(p) is, not to mention p."

You would be correct in thinking that it is unknown, but incorrect in thinking that it's not useful. Given the function's second derivative f''(x) and the interval over which it is to be evaluated, it is possible to find the maximum of |f''(x)| on that interval. Even if you can't get the third derivative to do critical-point analysis, you can still brute-force the numbers computationally to find an approximate maximum. So now we have that part of the term. Finally, there are a couple of ways to bound (p-x'); one is to take the worst-case scenario (half the width of the available interval) and plug that in. You can plug in the minima of these values as well, and so bracket your error.
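
Here is a minimal Python sketch of that brute-force bound (the grid size and names are mine, and a grid maximum only approximates the true maximum):

    def newton_error_bound(d2f, a, b, samples=10001):
        # Worst case for the dropped term ((p-x')**2 / 2!) * f''(xi):
        # max |f''| on [a,b] by brute force, and (p - x') = (b - a)/2.
        step = (b - a) / (samples - 1)
        max_d2 = max(abs(d2f(a + i * step)) for i in range(samples))
        worst_gap = (b - a) / 2.0
        return (worst_gap ** 2 / 2.0) * max_d2

    # Example: f(x) = x**2 - 2 on [1, 2], where f''(x) = 2 everywhere.
    print(newton_error_bound(lambda x: 2.0, 1.0, 2.0))  # 0.25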
