There are two flavours of technological singularity.

The first – the "slow take-off" – is simply an extrapolation of trends: new waves of technology, which in the Palaeolithic arrived every 1000 years or so, began in the last few thousand years to arrive every 100 years or so. In the late 1800s they were coming every decade. Now the new model PC, OS, web browser, mass-storage medium comes every year. The singularity is when the rate of change becomes literally inhuman. When new technologies can come to fruition in a lab halfway around the world from you while you sleep, does it really matter if they took 8 nanoseconds or 8 hours? Either way the world has changed when you blink awake. Either way, making a prediction of what the world will be like a year later is impossible.

Note that, mathematically speaking, even if the graph of change against time is of the form change = 2^time, it gets very high and very steep past a certain point, but it is never infinite or vertical. Infinite change in zero time is not needed to make it seem like an impenetrable discontinuity to us.
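
To make that concrete, here is a quick sketch assuming the pure doubling curve change = 2^t: both the value and the slope grow enormous past a certain point, but both stay finite at every finite t.

```python
import math

# Illustrative only: a pure doubling curve, change(t) = 2**t.
# Its slope is d/dt 2**t = 2**t * ln(2) -- huge past a certain point,
# but finite (and therefore never "vertical") at every finite t.
for t in [10, 50, 100]:
    value = 2.0 ** t
    slope = value * math.log(2)
    print(f"t={t:>3}: change={value:.3e}, slope={slope:.3e}")
```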

Part of this accelerating rate of change is the self-feeding nature of technological advances. New technologies are used to help make newer ones. Writing. The printing press. CAD. Simulation. Online hypertext databases. Each new generation of computer is designed using the last generation. Silicon may well be superseded – Ray Kurzweil gives good circumstantial evidence that it will be, given the track records of other computational media.

The ultimate extrapolation of this is when progress is so fast that people can't do it even with help. Rather than abandon the graph, optimists have concluded that this means technologies designing other technologies without human intervention. Strong AI. This, then, is the second kind, the hard singularity – when new generations of technology come about from Fast Folk, who think thousands of times faster than us, designing their own successors on their own ever-increasing ratchet. Like a phase transition, all prior bets are off, even the continued desirability of the physical survival of the human race. Draw the lines out far enough, and strong AI is predicted.

But every exponential graph in the real world is really the early part of an s-curve. There are natural limits to everything, and when you get near them growth slows and the gains become merely incremental.
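
To illustrate the point with made-up numbers: a logistic curve (one common s-curve) is almost indistinguishable from an exponential early on, then flattens out against its ceiling.

```python
import math

# Toy illustration (made-up numbers): a logistic s-curve with ceiling L
# tracks the exponential e**t almost exactly at first, then saturates.
L = 1_000_000.0  # the "natural limit"

def exponential(t):
    return math.exp(t)

def logistic(t):
    # Logistic curve that starts at 1 and approaches the ceiling L.
    return L / (1.0 + (L - 1.0) * math.exp(-t))

for t in [0, 5, 10, 15, 20]:
    print(f"t={t:>2}: exponential={exponential(t):.3e}, s-curve={logistic(t):.3e}")
```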

So Moore's observation cannot hold true indefinitely. You can improve the speed, size and energy requirements of a computer only up to a point. There are physical limits to the speed of computation: you can't arrange matter in locations smaller than the Planck length, you can't send a signal faster than light, and you can't crunch a number without expending some energy and giving off some waste heat. Too much computation packed too close together means too much waste heat. The obvious solution, spreading things out, means too much communication lag.
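
One of those limits can be put in rough numbers. Landauer's principle says erasing a single bit costs at least kT·ln 2 of energy, released as heat; the machine size and erasure rate below are my own illustrative placeholders, not figures from the original.

```python
import math

# Back-of-envelope sketch of one physical limit (illustrative figures only).
# Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, kelvin

min_energy_per_bit = k * T * math.log(2)   # ~2.9e-21 J per bit erased

# A hypothetical machine erasing 10**25 bits per second must shed this much
# heat even at the theoretical floor, before any real-world inefficiency.
bits_per_second = 1e25
waste_heat_watts = bits_per_second * min_energy_per_bit

print(f"Landauer minimum per bit at 300 K: {min_energy_per_bit:.2e} J")
print(f"Minimum waste heat at 1e25 bit-erasures/s: {waste_heat_watts:,.0f} W")
```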

I recall that New Scientist and Kurzweil have done the math, and concluded that Kurzweil's jaw-dropping claim is in fact achievable before the limits are reached: that before the year 2100, a machine with the computational power of all 4 billion human brains on the planet, put together, can be yours for $1.
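
The back-of-envelope arithmetic behind a claim like that is easy to sketch. Every figure below (ops per brain, today's price-performance, the doubling time) is an assumed placeholder of mine, not a number from New Scientist or Kurzweil.

```python
import math

# Rough sketch of the kind of arithmetic behind the claim (all figures
# below are assumed placeholders, not taken from the original text).
ops_per_brain = 1e16          # assumed ops/sec for one human brain
brains = 4e9                  # "all 4 billion human brains"
target_ops = ops_per_brain * brains       # ops/sec wanted for $1

ops_per_dollar_today = 1e9    # assumed starting point: ops/sec per dollar
doubling_years = 1.5          # assumed price-performance doubling time

doublings_needed = math.log2(target_ops / ops_per_dollar_today)
years_needed = doublings_needed * doubling_years
print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years at one doubling per {doubling_years} years: {years_needed:.0f}")
```

Under those assumptions it works out to roughly 80-odd years of steady doublings, which is how a figure like "before 2100" can fall out of the math.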

No one seriously believes that a pile of neurons the size of a human brain will automatically be conscious like a human is. No one seriously believes that a computer of a certain complexity will automatically be conscious. Hey, we know that the blasted things won't even boot up without added software. The optimists assume that when the hardware is ready, the software will soon follow. They see it as hard work, but work that can and will be done.

We don't even know what bare minimum level of processing power is necessary for an AI to run in real time. The thing is, it doesn't really matter if we get that exactly right or not. If we underestimate by half, or use an inefficient technique that doubles the hardware requirements, that will not double the time until it is doable; it will just add a year or two onto it.
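
Under a steady doubling of hardware that is straightforward to show (the doubling time below is my own assumed placeholder): a 2× underestimate of the requirements costs exactly one extra doubling period, not twice the wait.

```python
import math

# Sketch of the "underestimate by half" argument, assuming hardware
# capability doubles every `doubling_years` years (placeholder value).
doubling_years = 1.5

def years_until(required, available_now=1.0):
    """Years until capability reaches `required`, starting from `available_now`."""
    return math.log2(required / available_now) * doubling_years

base_estimate = 1_000.0   # arbitrary units of processing power
print(f"Original estimate reached in : {years_until(base_estimate):.1f} years")
print(f"Twice that (2x underestimate): {years_until(2 * base_estimate):.1f} years")
# The second figure is larger by exactly one doubling period (~1.5 years),
# not by a factor of two.
```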

Our hardware is getting better; so are our software and our understanding of the brain. Most of our tentative steps towards AI have been limited successes or outright failures, but progress is being made. See Cyc, for instance. There are decades to go yet, and the great work is already underway.

It doesn't follow that, just because the human brain's architecture inspired neural nets, our first AI must have a neural-net architecture. After all, good chess-playing programs are known to play chess in a manner completely unlike humans, one that relies far more on a brute-force study of all the possibilities. This approach is very heavy on raw processing power.
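
As a crude illustration of what "brute-force study of all the possibilities" looks like, here is a minimal minimax search over the toy game of Nim, standing in for chess (which simply has vastly more possibilities per move): the program examines every line of play to the end, with no human-like intuition at all.

```python
# Crude sketch of brute-force game search (minimax), using the toy game of
# Nim as a stand-in for chess: the program simply examines every line of
# play to the end -- no intuition, just raw processing power.

def moves(pile):
    """Legal moves: take 1, 2 or 3 stones from the pile."""
    return [take for take in (1, 2, 3) if take <= pile]

def minimax(pile, my_turn):
    """Return +1 if 'we' can force a win from here, else -1.
    (The player who takes the last stone wins.)"""
    if pile == 0:
        # The previous player took the last stone, so whoever moves now has lost.
        return -1 if my_turn else 1
    scores = [minimax(pile - take, not my_turn) for take in moves(pile)]
    return max(scores) if my_turn else min(scores)

def best_move(pile):
    """Exhaustively score every move and pick the best one."""
    return max(moves(pile), key=lambda take: minimax(pile - take, my_turn=False))

print(best_move(10))  # prints 2: leave a multiple of 4 for the opponent
```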

Perhaps the raw power needed for basic AI research just isn't here yet, and a turning point will be reached when it arrives. Perhaps, other options failing, when we have enough power to faithfully emulate brain-like structures in the necessary detail, we will use this to jury-rig an AI, which can then be run faster and larger to design a more elegant next generation. I see no logical flaw in this approach.

Perhaps not, and progress in AI software will continue to be slow and hard-won over hundreds of years, not coming to fruition in decades as the optimists would like to think. A lack of AI would put an upper limit on technological change at the rate that humans can push it.

If it does happen, though, then predicting what the world will be like past the point we call "the singularity" is impossible until we get there.