The first million digits of pi were calculated in 1973 using a CDC 7600. The calculation took 23 hours and used a fairly common arctangent series.
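To give a flavor of how an arctangent series yields digits of pi, here is a sketch using Machin's formula, pi/4 = 4·arctan(1/5) − arctan(1/239), evaluated with fixed-point integer arithmetic. This is an illustration of the general technique, not necessarily the exact series used in the 1973 run:

```python
def arctan_inv(x, digits):
    """Fixed-point arctan(1/x) to `digits` decimal digits via the Taylor series."""
    one = 10 ** (digits + 10)          # scale factor, with 10 guard digits
    term = one // x                    # first term: 1/x
    total = term
    x2 = x * x
    n = 3
    sign = -1
    while term:                        # each pass divides the term by x^2
        term //= x2
        total += sign * (term // n)    # alternating series: +-(1/x^n)/n
        sign = -sign
        n += 2
    return total

def machin_pi(digits):
    """pi to `digits` decimal digits as a big integer (3 followed by the fraction)."""
    pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return pi // 10 ** 10              # strip the guard digits

print(machin_pi(20))                   # 314159265358979...
```

Because each term costs a big-integer division over the full precision, and the number of terms grows linearly with the digit count, the total work grows quadratically — the n-squared behavior described below.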
The problem with the arctangent algorithm, however, was that its running time grew roughly as n-squared -- in other words, computing twice as many digits of pi as you had so far took about four times as long. In 1976 Eugene Salamin rediscovered a formula, based on the arithmetic-geometric mean studied by Gauss, which was more computation-intensive per step but had a much slower growth rate.
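Salamin's rediscovery is now generally known as the Gauss-Legendre (or Brent-Salamin) algorithm: each iteration of the arithmetic-geometric mean roughly doubles the number of correct digits, so only about log2(n) iterations are needed for n digits. A minimal sketch using Python's decimal module:

```python
import math
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    """Approximate pi via the Gauss-Legendre (Brent-Salamin) AGM iteration."""
    getcontext().prec = digits + 10    # working precision with guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    # each iteration roughly doubles the number of correct digits,
    # so log2(digits) + 2 iterations are more than enough
    for _ in range(int(math.log2(digits)) + 2):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))           # 3.14159265358979...
```

Compare the loop counts: the arctangent series needs on the order of n terms for n digits, while this iteration needs only a handful — the "much slower growth rate" that made the later record computations feasible.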
With this new formula and the increasingly available supercomputers, computing the first million digits of pi was a snap. Pi was expanded to 2 million digits in 1981, 16 million digits in 1982, and the first billion digits of pi were computed and verified in 1989.