Fixed point iteration is a simple method for finding a root of an equation. That is to say, it's a bad method to use for finding a root of an equation.

Given an equation f(x) = 0, we can find a function g(x) such that its roots satisfy x = g(x). Either use algebra to solve for x, or add x to each side.

Ex: x^4 - 2x + 9 = 0

    g(x) = x = (x^4 + 9)/2

    tan(x) = 0

    g(x) = x = tan(x) + x

Now, to try to find the root of an equation, we first make a guess at the root. When we plug that value into g(x), it gives us a new value of x! This value of x should be more accurate than your initial guess. If you plug the new guess into g(x), the value you come out with should be more accurate still. Repeat until dead. Or just keep doing it until the estimate is close enough to the actual root.

 x_(n+1) = g(x_n)
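The update rule above takes only a few lines of code. A minimal sketch in Python; the test equation x = cos(x) is my own example (not one of the writeup's), chosen because the iteration happens to converge for it:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_(n+1) = g(x_n) until successive guesses agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Solve x = cos(x), i.e. find the root of f(x) = cos(x) - x
root = fixed_point(math.cos, 1.0)
print(root)  # approaches 0.7390851332...
```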

Why is this a bad method to use? Well, for one thing, it only converges if |g'(x)| < 1 near the root. If |g'(x)| > 1 there, each new estimate x_(n+1) is actually getting further away from the real root. This is not good.

Even if |g'(x)| actually is less than 1 near the root, that isn't any guarantee that the method will find it -- your initial guess may be too far away for that condition to help.
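The divergence warning can be seen with the writeup's own tan example: tan(x) = 0 has a root at x = 0, but g(x) = tan(x) + x has g'(0) = sec^2(0) + 1 = 2 > 1, so the iterates drift away from the root instead of toward it:

```python
import math

# g(x) = tan(x) + x has |g'| > 1 at the root x = 0,
# so each step pushes the guess AWAY from the root.
x = 0.1  # start close to the true root at 0
for _ in range(3):
    x = math.tan(x) + x
print(x)  # the guess has drifted well away from 0
```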

Another reason that you shouldn't use this method is that it's slow: at best it converges linearly. Other methods such as the Newton-Raphson method, the "regula falsi" method, or Müller's method converge much faster, although they are more complicated than this one.

I suggest avoiding this method if you can. I don't know why they still teach it to us.


Node Your Homework!

Some additional information about fixed-point iteration...

There is a way to find out if this method will converge for a given interval, and also if the point it converges to is the only fixed point on that interval.
The fixed point theorem states:

Let g(x) be continuous on the interval [a,b], with a <= g(x) <= b for all x in [a,b]. Suppose additionally that g' exists on (a,b) and that a constant 0 < k < 1 exists with

|g'(x)| <= k, for all x on the interval (a,b).
Then, for any number p_0 in [a,b], the sequence defined by
p_n = g(p_(n-1)), n >= 1
converges to the unique fixed point p in [a,b].
In simpler terms, if the function g(x) evaluated on the interval [a,b] obeys
a <= g(x) <= b
(it's entirely inside the square bounded by x=a, x=b, y=a, y=b) AND the first derivative stays between -k and k on that interval for some constant k < 1, there's a unique fixed point to which the iteration will converge.
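The two hypotheses can be sanity-checked numerically. A sketch using my own example g(x) = cos(x) on [0,1] (sampling isn't a proof, but it shows what the conditions look like in practice):

```python
import math

# Check the theorem's hypotheses for g(x) = cos(x) on [a,b] = [0,1]:
#  1. g maps [a,b] into [a,b]
#  2. |g'(x)| = |sin(x)| is bounded by some k < 1 on the interval
a, b = 0.0, 1.0
xs = [a + (b - a) * i / 1000 for i in range(1001)]

maps_into = all(a <= math.cos(x) <= b for x in xs)
k = max(abs(math.sin(x)) for x in xs)  # |g'(x)| = |-sin(x)|

print(maps_into, k)  # both conditions hold: k = sin(1) < 1
```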

Finally, something of note: if you want to solve for the root of an equation f(x) = 0 using fixed-point iteration, and you transpose/manipulate the equation just right, into the form g(x) = x - f(x)/f'(x) (which has the roots of your original equation as its fixed points), voila! You have Newton's Method, with very rapid convergence.
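That trick -- choosing g(x) = x - f(x)/f'(x) -- can be seen in a few lines. The example f(x) = x^2 - 2 (root sqrt(2)) is my own, for illustration:

```python
# Fixed-point iteration on g(x) = x - f(x)/f'(x) IS Newton's method.
def f(x):
    return x * x - 2       # root at sqrt(2)

def df(x):
    return 2 * x           # derivative of f

def g(x):
    return x - f(x) / df(x)

x = 1.0
for _ in range(6):
    x = g(x)
print(x)  # converges rapidly toward 1.41421356...
```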
