Because Gaussian elimination is impractical for solving large sparse systems of linear equations (Ax=b), iterative methods have been studied widely by mathematicians and computer scientists.

The general framework of an iterative method is simple: first, generate an initial guess x0 for the answer. Then, using some algorithm, produce a new (hopefully better) guess x1 from it. Run the algorithm again using x1 as your old guess, and keep repeating. Eventually, the guesses should converge to the right answer.
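The guess-and-refine loop above can be sketched in a few lines. This is a minimal illustration (not from the original text) using Richardson's method as the update rule, with an example matrix and step size chosen for the sketch:

```python
import numpy as np

def iterate(A, b, x0, update, tol=1e-10, max_iter=10_000):
    """Generic iterative solver: refine the guess until the residual is tiny."""
    x = x0
    for _ in range(max_iter):
        if np.linalg.norm(b - A @ x) < tol:   # residual small enough? we're done
            break
        x = update(A, b, x)                   # produce the next, hopefully better, guess
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                    # symmetric positive definite example
b = np.array([1.0, 2.0])
omega = 0.25                                  # step size; must be small enough to converge
richardson = lambda A, b, x: x + omega * (b - A @ x)

x = iterate(A, b, np.zeros(2), richardson)
```

The `update` function is the pluggable part: swapping it out gives the different classical methods while the surrounding loop stays the same.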

Obviously, there are a lot of questions that need to be answered: What should my initial guess be? What algorithm should I use? Will my new answers actually converge to the "real" answer, and if so, will they converge fast enough to beat Gaussian elimination?

A lot of work has gone into this field, and it has been proven that for certain algorithms and certain types of A (e.g., if A is symmetric and positive definite, or strictly diagonally dominant), the iterates will actually converge.
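One standard form such a convergence result takes: a stationary iteration x_{k+1} = Gx_k + c converges from any starting guess exactly when the spectral radius of G is less than 1. A quick numerical sketch, using an illustrative strictly diagonally dominant matrix of my own choosing and the Jacobi iteration matrix G = I - D^-1 A:

```python
import numpy as np

def spectral_radius(G):
    """Largest eigenvalue magnitude of G; iteration converges iff this is < 1."""
    return max(abs(np.linalg.eigvals(G)))

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])                     # strictly diagonally dominant
D = np.diag(np.diag(A))                        # diagonal part of A
G_jacobi = np.eye(2) - np.linalg.inv(D) @ A    # Jacobi iteration matrix I - D^-1 A
print(spectral_radius(G_jacobi))               # below 1, so Jacobi converges here
```

Strict diagonal dominance is one of the classical sufficient conditions that forces this spectral radius below 1.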

Some classical iterative methods are Jacobi's Method, Gauss-Seidel, and Richardson's Method. The method of choice for modern numerical computing applications seems to be the Krylov subspace methods.
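To make one of the classical methods concrete, here is a sketch of Jacobi's method: solve each equation for its own unknown, plugging in the previous iterate for the others. The test system below is an illustrative strictly diagonally dominant matrix (a case where Jacobi is guaranteed to converge), not anything from the original text:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} a_ij * x_j) / a_ii."""
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(max_iter):
        x_new = (b - R @ x) / D       # update every component from the old iterate
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 5.0, 1.0],
              [ 2.0, 3.0, 10.0]])     # strictly diagonally dominant
b = np.array([7.0, -8.0, 6.0])
x = jacobi(A, b)
```

Gauss-Seidel differs only in using the already-updated components within each sweep; Krylov methods such as conjugate gradient are what production libraries (e.g. `scipy.sparse.linalg.cg`) typically provide for large sparse systems.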

If you are interested in this topic (and are brave) you might want to get the book "Iterative Methods for Sparse Linear Systems" by Yousef Saad. The whole book is available online.
