One theory of what the future may bring.

It is a rather simple idea. At some point, a computer with intelligence equal to the human mind's will be created. Now, if you teach this computer how to design the next generation of computer, it will do so in the same time a human would. Currently, it takes about eighteen months to double the speed of a computer.

Now, once speed doubles past this point, we teach the new computer how to design. As it works twice as fast, it will take only half the time to design the next one. Teach that one, and it will take only one-quarter the time. And so on. Eventually, the speed at which each successive generation of computer is designed approaches a singularity. Notice that the design times form a geometric series (eighteen months, then nine, then four and a half...), which sums to a finite total: infinitely many generations arrive in finite time. To visualize this, view the graph of the function 2^x. As x increases, the curve grows faster and faster, climbing toward the vertical. This may represent the speed of increase.
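Here's that arithmetic as a rough sketch in Python. The eighteen-month starting figure comes from the paragraph above; everything else is purely illustrative.

    # Each generation designs its successor in half the time of the last.
    # Starting from the 18-month figure above, sum the design times.
    def total_design_time(initial_months=18.0, generations=50):
        total = 0.0
        t = initial_months
        for _ in range(generations):
            total += t
            t /= 2.0
        return total

    # 18 + 9 + 4.5 + ... converges to 36: in this toy model, every
    # generation that will ever exist arrives within 36 months.
    print(total_design_time())  # ~36.0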

Of course, all this power could easily go into improving other technologies as well, causing technology to change so fast that it becomes impossible to comprehend what things would be like.

Now, whether this curve will keep accelerating or eventually level off is unknown. But either way, it suggests a period of time in which the very nature of life as it is experienced will change completely, with what lies on the other side being something that cannot be known until we're there.

And what makes this most interesting is the timeline given for this singularity, if it occurs, to happen. If the progression in various fields of technology is graphed, including various indicators of computing power and miniaturization, the estimates appear to converge on a date between 2015 and 2050, depending on the figures used. The near future. Things might just be about to get wild.

See: Vernor Vinge on the Singularity for more details

The theory of technological singularity was actually proposed by Vernor Vinge in an essay available on the Internet, and used in his novels The Peace War and Marooned in Realtime. It proposes that, at some point in the future, technology will reach a point where it advances not simply from month to month, but from minute to minute--and literally, if you blink, you miss the end of the world.

Of course, whether or not this will actually happen is anybody's guess.

The notion of a pending technological singularity, first explicitly named as such by Vernor Vinge, is a modified version of traditional Abrahamic eschatology. Though it removes the notion of a supreme being judging all those who have lived, it does promise believers a moment in time beyond which all will live in perpetual paradise; immortality, wealth, leisure, sexual prowess, and scholarly success are explicitly promised to believers and, in some versions of the meme, denied to others. The primary proponents of the Singularity belief are Raymond Kurzweil, Vernor Vinge, Damien Broderick, Hans Moravec, and Eliezer Yudkowsky. Each presents a somewhat different vision, but they are all organized around a common core.

The central tenet of the Singularity meme is the ever-expanding power of computer processors. Adherents point to charts indicating that the number of calculations per second per $1000 has been doubling every 2 years since the early 1890s and the Hollerith tabulating machine. The progression runs from human computers doing arithmetic for a dollar a day, through punchcards and vacuum tubes, to transistors and integrated circuits, and on to DNA computers and quantum computing. They say that every time a physical limit that would stop the exponential curve is approached, a new medium is discovered that allows the curve to continue unchecked. At some time in the (not-too-distant) future this curve goes vertical--ever-expanding computational capacity will result in ever more powerful technologies being assimilated into society ever faster, the entire cycle spinning so fast that we will no longer be able to project what is going to happen next. This point of no return, when the pace of technological progress becomes effectively infinite, is the Technological Singularity.
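For a sense of scale, here is the doubling claim worked out in Python. The 1890 start date and 2-year period are the adherents' figures quoted above; the 2010 end year is an arbitrary choice for illustration.

    # How much improvement does "doubling every 2 years since 1890" imply?
    start_year, end_year, doubling_years = 1890, 2010, 2
    doublings = (end_year - start_year) / doubling_years
    improvement = 2 ** doublings
    print(f"{doublings:.0f} doublings -> {improvement:.1e}x more "
          f"calculations per second per $1000")
    # 60 doublings -> 1.2e+18x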

While the main visions of the Singularity all agree on the points above, they exhibit significant differences. They differ on the specific technological features of the event, and they envision significantly different worlds arising from it.

Vernor Vinge

The most moderate of the Singularitarians, Vinge is credited with the creation of the idea in his novel Marooned in Realtime. He later expanded upon it in a paper delivered to a NASA-sponsored conference in 1993. He argued that at some point in the early 21st century human beings will create computer software that is more intelligent than we are. Shortly thereafter this software will produce software smarter than itself. The human era will be over.

Vinge refrains, for the most part, from speculation as to the structure of civilization following the singularity.

Damien Broderick

Broderick calls his Singularity The Spike, and has written a book by that title. He doesn't focus on a particular technology, but contends that the whole of science and knowledge is growing at an exponential rate and that we will see unimaginable advances in every area of our lives. He sees a rising tide for all of humanity as nanotechnology, artificial intelligence, and advanced medical technology alleviate its misery.

Broderick's view is the most overtly utopian, promising immortality through nanotechnology to everyone and beneficent artificial intelligences acting as stewards of our race, managing everything from traffic to resource production. Everyone will have the choice of living in a normal human body, animating a custom biological or robotic shell, or existing as an uploaded mind in a global computer network. Switching between the three will be easy and, given the overall wealth of the post-Spike world, cheap.

Raymond Kurzweil

Kurzweil's vision of the future has been set forth in books such as The Age of Intelligent Machines, The Age of Spiritual Machines, and the forthcoming The Singularity Is Near. He envisions a strong singularity catalyzed by advances in our ability to simulate the activity of the human brain. As functional magnetic resonance imaging becomes more powerful, he expects that we will completely reverse engineer the functioning of the brain. After that is complete we will be able to port our consciousness to any hardware we want, thus achieving immortality.

Like Broderick, Kurzweil sees a soft take-off Singularity. Rather than a discontinuous, single-day change, he sees a process that will stretch through the whole of the 21st century (although this is still abrupt by the standards of change in our global civilization). Software will match human intelligence and ability in first one area, then another and another. People will interact primarily in virtual environments; even when several people are gathered physically together, the advanced display technology they use will render most of their experience virtual. Medical technology will progress to the point that people can keep their bodies alive indefinitely, but even if something terrible, something unfixable, happens to that body, it won't matter. Uploading technology will be so good that immortality is all but guaranteed.

Hans Moravec

A researcher at Carnegie Mellon University, Moravec believes that the Singularity will come about through the creation of advanced, autonomous robots. He envisions a hard take-off Singularity in which greater-than-human intelligence, combined with a limitless ability to manipulate the physical world through sophisticated robots, will completely supersede humanity. Moravec displays a strong believers-only bias in his Singularity scenario: only those with the foresight to see it coming will have ported themselves to new hardware that can compete with the computers.

Eliezer Yudkowsky

Founder of the Singularity Institute and author of Coding a Transhuman AI, Yudkowsky has raised the Singularity to the level of pure religious object. He believes in a particularly harsh version of the hard take-off, postulating that human-level intelligence will rapidly bootstrap itself to something much, much greater. This sudden increase in intelligence will give the growing AI the ability to create strong nanotech if it doesn't already have it. Our continued existence will be completely at the whim of this creature, and we are unable, even in principle, to determine what rules will apply after its arrival.

There is a strong theme of the end of humanity in Yudkowsky's writings. He argues that a more intelligent entity will be better able to determine what is good and moral than we humans are, since, of course, it is more intelligent. If such an entity claimed that the proper thing to do with humans was to destroy all of us, then we should let it. After all, the fact that we not only don't, but literally can't, understand the reasons for it shouldn't be relevant to whether such a genocide is the right thing to do.

Given that he views it as unavoidable, Yudkowsky has devoted himself to making it happen as soon as possible. He is trying to create a bootstrapping artificial intelligence and to thus trigger his Singularity.

Bill Joy

Bill Joy is the most visible of the Singularity detractors. While he believes that it is coming, he doesn't like it and wants it stopped. He argued this point in a 2000 Wired magazine article, Why the Future Doesn't Need Us. He believes, much like Yudkowsky, that coming technological changes are going to result in the end of humanity. To avoid this, Joy has, in effect, proposed the creation of an enormous police state charged with preventing advances in certain fields of science by whatever means are necessary.

While his written proposals fall short of explicitly sanctioning a police state and restrictions upon the freedoms of thought and speech, it is easy to see that this is what they entail. He calls for "relinquishment" of robotic, biological, and nanoscale technologies and research on a global basis, and such relinquishment is unrealizable in the absence of a global police state or in the presence of freedoms of speech, press, or assembly (essentially, in the presence of the freedom to communicate). For Joy's program to be successful it is not enough that the majority of the world's people cease research and development in these areas. If even a small number of people continue to work in these fields, the breakthroughs in our knowledge that Joy fears so greatly will come. To implement Joy's relinquishment strategy will require that these people be identified and prevented from performing their research.

Like nuclear non-proliferation, this is a policy that is guaranteed to fail eventually. Once the apple of knowledge has been bitten, no amount of brutality or repression will be able to undo the effects. Even in the most brutal police state imaginable, relinquishment is an impossible goal, and to advocate or pursue it is to endorse the destruction of countless lives for no long-term benefit.

A major problem with this belief is that it relies on the existence of a program capable of designing computers at least as well as human engineers do. We have no such program now, and we probably could not build one even if we tried. It would be a mammoth task, requiring a massive research program and a far better knowledge of the workings of the human mind and the scientific principles of computer design than we currently have. We may be able to build better computers, but an algorithm that builds better computers is a far more daunting task. It takes years of research and scientific breakthroughs in many fields to make a new computer chip, not just a simple improvement on the old design. In principle, it may be possible, but the predictions that give a 50-year timeline based purely on Moore's Law are flawed, as they pay no heed to whether or not we will have the software to use this mammoth power.

Also, this belief assumes - wrongly - that a computer with as many transistors as the human brain has neurons will automatically be as intelligent as a human. The prediction of Moore's Law is simply that the number of transistors on a chip will double every 18 months; not the processing power, not the intelligence, but the number of switches. We can only use this extra power to run our programs faster and to run larger programs. Real intelligence will probably take not just speed and high technology but a fundamental change in the type of machine we make, probably to a neural-net-based device.

Finally, there is no guarantee that Moore's Law will hold. Its original form put the doubling time at 1 year; this was accurate until the 1970s, when it was revised to 18 months. Change in the rate is therefore not without precedent; it may slow again, or even stop entirely, thus invalidating the current 30-50 year predictions for the "singularity" being reached.
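To see how much the assumed doubling period matters, here is a small Python sketch. The 18-month baseline is the figure above; the millionfold target is an arbitrary illustrative goal, not a number from any particular forecast.

    import math

    # Years needed to gain a given factor of computing power at a
    # given doubling period.
    def years_until(target_factor, doubling_months):
        doublings = math.log2(target_factor)
        return doublings * doubling_months / 12.0

    for months in (12, 18, 24, 36):
        print(f"doubling every {months} months: "
              f"{years_until(1e6, months):.0f} years to a 1,000,000x gain")
    # 12 -> 20 years, 18 -> 30, 24 -> 40, 36 -> 60

A drift from 18 months to 36 months doubles the wait; a halt, of course, postpones it forever.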

Update: I'd like to point out that I am very much of the opinion that AI is possible; my point of contention with the "technology singularity" idea is that it seems to assume that computers will magically be as intelligent as humans when they possess the same processing power. My major objection to this is that we will need to know how to effectively apply that power, and that knowledge may elude us for far longer than the necessary computing power does.

There are two flavours of technological singularity.

The first – the "slow take-off" – is simply an extrapolation of trends: new waves of technology, which in the Palaeolithic arrived every 1000 years or so, began in the last few thousand years to arrive every 100 years or so. In the late 1800s they were coming every decade. Now the new model PC, OS, web browser, or mass-storage medium comes every year. The singularity is when the rate of change becomes literally inhuman. When new technologies can come to fruition in a lab around the world from you while you sleep, does it really matter if they took 8 nanoseconds or 8 hours? Either way the world has changed when you blink awake. Either way, making a prediction of what the world will be like a year later is impossible.

Note that, mathematically speaking, even if the graph of change over time is of the form change = 2^time, the curve gets very high and very steep past a certain point, but it is never infinite or vertical. Infinite change in zero time is not needed, though, to make it seem like an impenetrable discontinuity to us.
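To put that precisely (a standard calculus fact, not something from the writeup above): the slope of an exponential is itself an exponential, large but finite at every point.

    \[
      c(t) = 2^{t}, \qquad
      \frac{dc}{dt} = 2^{t}\ln 2 < \infty \quad \text{for every finite } t
    \]

So the curve only approaches verticality; it never reaches it.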

Part of this accelerating rate of change is the self-feeding nature of technological advances. New technologies are used to help make newer ones: writing, the printing press, CAD, simulation, online hypertext databases. Each new generation of computer is designed using the last generation. Silicon may well be superseded – Ray Kurzweil gives good circumstantial evidence that it will be, given other computational media's track records.

The ultimate extrapolation of this is when progress is so fast that people can't keep up even with help. Rather than abandon the graph, optimists have concluded that this means technologies designing their successors without human intervention. Strong AI. This, then, is the second kind, the hard singularity – when new generations of technology come about from Fast Folk, who think thousands of times faster than us, designing their own successors on their own ever-increasing ratchet. Like a phase transition, all prior bets are off, even the continued desirability of the physical survival of the human race. Draw the lines out far enough, and strong AI is predicted.

But every exponential graph in the real world is really the early part of an s-curve. There are natural limits to everything, and when you get near them you will slow down and get only incrementally smaller gains.

So Moore's observation cannot hold true indefinitely. You can improve the speed, space, and energy requirements of a computer only up to a point. There are physical limits to the speed of computation: you can't arrange matter in locations smaller than the Planck length. You can't send a signal faster than light. You can't crunch a number without expending some energy and giving off some waste heat. Too much computation too close together means too much waste heat. The solution, to spread things out, means too much communications lag.
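The waste-heat point can be made concrete with Landauer's principle, which says that erasing one bit of information costs at least kT ln 2 of energy. A minimal sketch in Python; the 10^20 operations-per-second workload is a made-up illustrative figure.

    import math

    k_B = 1.380649e-23        # Boltzmann constant, J/K
    T = 300.0                 # room temperature, K
    ops_per_second = 1e20     # hypothetical machine, one bit erased per op

    # Minimum power dissipated at the Landauer limit.
    min_power = ops_per_second * k_B * T * math.log(2)
    print(f"minimum dissipation: {min_power:.2f} W")   # ~0.29 W

Real hardware today dissipates many orders of magnitude more than this floor, which is why heat limits packing density long before it becomes a limit in principle.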

I recall that New Scientist and Kurzweil have done the math and concluded that Kurzweil's jaw-dropping claim is in fact doable before those limits are reached: that before the year 2100, a machine with the computational capacity of all 4 billion human brains on the planet, put together, could be yours for $1.
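Here is a rough reconstruction of that arithmetic in Python. I don't have the original numbers, so the ~10^16 operations per second per brain and the ~10^9 operations per second per $1000 circa 2000 are commonly cited ballpark assumptions, not figures from the sources above.

    import math

    brain_ops = 1e16                 # assumed ops/s per human brain
    brains = 4e9                     # "all 4 billion human brains", as above
    start_per_1000usd = 1e9          # assumed ops/s per $1000 circa 2000

    # All brains for $1 means 1000x that capacity per $1000.
    target_per_1000usd = brain_ops * brains * 1000
    doublings = math.log2(target_per_1000usd / start_per_1000usd)
    print(f"{doublings:.0f} doublings needed")   # ~65

At a fixed 2 years per doubling that is about 130 years; an earlier date relies on Kurzweil's argument that the doubling period itself keeps shrinking.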

No one seriously believes that a pile of neurons the size of a human brain will automatically be conscious like a human is. No one seriously believes that a computer of a certain complexity will automatically be conscious. Hey, we know that the blasted things won't even boot up without added software. The optimists assume that when the hardware is ready, the software will soon follow. They see it as hard work, but work that can and will be done.

We don't even know what bare minimum level of processing power is necessary for an AI to run in real time. The thing is, it doesn't really matter if we get that exactly right or not. If we underestimate by half, or use an inefficient technique that doubles the hardware requirements, this will not double the time until it is doable, but just add a year or two onto it.
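That point is easy to verify; a minimal Python sketch, using the 18-month doubling period quoted earlier on this page:

    import math

    # Extra wait caused by underestimating the hardware requirement
    # by a constant factor, given exponential hardware growth.
    def extra_wait_years(underestimate_factor, doubling_months=18):
        return math.log2(underestimate_factor) * doubling_months / 12.0

    print(extra_wait_years(2))    # 1.5 years for a 2x miss
    print(extra_wait_years(10))   # ~5 years even for a 10x miss

A constant-factor mistake costs a constant delay, not a proportional one.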

Our hardware is getting better, so is our software, and so is our understanding of the brain. Most of our tentative steps towards AI have been limited successes or outright failures, but progress is being made. See Cyc, for instance. There are decades to go yet, and the great work is already underway.

It's not a given that, because the human brain has an architecture that inspired neural nets, our first AI must have a neural net architecture. After all, good chess-playing programs are known to play chess in a manner completely unlike humans, one that relies far more on brute-force study of all the possibilities. This approach is very heavy on raw processing power.

Perhaps the raw power for basic AI research is just not here yet, and a turning point will be reached. Perhaps, other options failing, when we have enough power to faithfully emulate brain-like structures in the necessary detail, we will use this to jury-rig an AI, which can then be run faster and larger to design a more elegant next generation. I see no logical flaw in this approach.

Perhaps not, and progress in AI software will continue to be slow and hard-won over hundreds of years, not coming to fruition in decades as the optimists would like to think. A lack of AI would cap technological change at the rate that humans can push it.

If it does happen though, then predicting what the world will be like past a point we call "the singularity" is impossible until we get there.
