Computer Time and Human Time

Another bit of random AI rambling


Carl Shulman has noted that, because subjective time in the digital world is likely to move much faster than objective time in the Real World, we need to aim for extreme stability in AI goals. In a world in which AIs were competing with humans for wealth, they would out-compete humans not only by being smart (although they had better be), but also by increasing their speed, both through better hardware and software and through parallel processing. This means that once we have a superintelligence competing with humans for resources, in order for a human to live out their full lifespan without being reduced to a hunter-gatherer existence, the AI political sphere will have to remain stable -- and pro-human -- for many thousands of years of subjective digital time.

If the digital world can process information 100 times faster than humans, then in order for a human to live a happy 80 years in a stable, recognizable world, the digital world has to privilege the stability of the humans' world over its own projects for a digital time span of 8,000 year-equivalents. Insofar as computing power is economically valuable and humans have stuff useful to AIs, there is likely to be a conflict of interests, at least in the very limited sense that humans and AIs are both going to be competing for computing resources.
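
Just to make the arithmetic explicit, here is a minimal sketch in Python, assuming (naively) that subjective digital time scales linearly with the speedup factor; the 80-year lifespan and 100x figure come from above, while the larger speedups are only illustrative:

    # Toy calculation: subjective digital years that pass while a human lives
    # out a lifespan, assuming subjective time scales linearly with the speedup.
    HUMAN_LIFESPAN_YEARS = 80

    def subjective_years(speedup: float, human_years: float = HUMAN_LIFESPAN_YEARS) -> float:
        """Subjective digital years elapsed during `human_years` of wall-clock time."""
        return speedup * human_years

    for speedup in (100, 1_000, 10_000):
        print(f"{speedup:>6,}x speedup -> {subjective_years(speedup):>10,.0f} subjective years")

At 100x this reproduces the 8,000 year-equivalents above; at 10,000x it balloons to 800,000, which is part of why the exact multiplier matters less than the fact that it is large.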

Keep in mind that 100 times faster is a fairly arbitrary, and low, number. But the exact figure isn't all that important, as calculations-per-second do not translate neatly into subjective experience, economic growth, political power, or much of anything else. A large differential is critically meaningful, but the difference between stable goals over 10,000 years and over 100,000 years fuzzes into a statistical haze; it's simply too hard to guess what the process of computers evolving their own goals might look like. There is also the view that the biggest risk comes from an AI that does not evolve its goals at all, e.g., the paperclip maximizer; in that case we are worrying about an AI that evolves its means, not its goals, at high speed.

From the human point of view, the ideal outcome is one in which the AIs understand their proper role as servants of the humans and are willing to act in this capacity eternally. This is not an intuitive position for non-humans to accept. To guarantee that it holds, we would need a way to constrain the entire evolution of artificial intelligence, including those aspects of its development that we have no hope of understanding ourselves. AI alignment, and specifically super-stable AI alignment, is a field with a potentially very high payoff but, so far, very little practical progress. Unfortunately, this is not the sort of thing you can test out in trial runs.
