We currently have a lot of narrow AIs -- that is, computer programs that have limited ability to adjust their responses to the environment. Someday we will have programs that can self-modify to improve their problem-solving ability, and then something approximating artificial general intelligence (AGI), i.e., a program that can not only pass as human but can actually do all of the things that humans can do. Very shortly after that, we can expect a superintelligence, a program that has cognitive abilities and real-world influence beyond the range that is predictable by humans.

'Intelligence' is a somewhat misleading term in this case, as humans tend to measure intelligence through IQ. However, human IQ scales tend to have some very human-centric components. Most of them, for example, measure memory in various capacities, a dimension that simply does not apply to computers: any computer can max out any memory test dealing with information it can process. Since IQ scales tend to add component scores together, a computer might theoretically score above human intelligence simply by posting sky-high scores on memory subscales, despite utterly failing other parts of the test.
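
As a toy illustration of that arithmetic, here is a minimal Python sketch. The subscale names and scores are invented, and real IQ batteries norm their subtests rather than simply summing raw points, but a composite built by adding component scores can clearly be dominated by a single runaway component:

    # Invented subscale scores and a naive composite that simply adds them up,
    # as described above. One ceiling-busting memory score outweighs outright
    # failure on every other subscale.
    subscales_human = {
        "digit_span": 10,          # memory
        "vocabulary": 11,
        "block_design": 9,         # spatial reasoning
        "picture_completion": 10,
    }

    subscales_computer = {
        "digit_span": 60,          # effectively unlimited recall
        "vocabulary": 2,
        "block_design": 0,         # fails the spatial task entirely
        "picture_completion": 0,
    }

    def composite(scores):
        """Naive composite: just sum the subscale scores."""
        return sum(scores.values())

    print("human composite:   ", composite(subscales_human))     # 40
    print("computer composite:", composite(subscales_computer))  # 62, despite three failed subscales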

The flip side is also an issue: it is not clear that everything included on human IQ tests is relevant to computer intelligences. We are certainly working hard to develop computers with spatial reasoning abilities on par with humans, but this has turned out to be quite difficult; does that mean that a computer that cannot replicate a pattern of blocks is of lower intelligence than a human who can? Likewise, tests often include measures of common-sense reasoning that assume a shared culture.[1] In this case, "common sense" is very much not a matter of intelligence unless you were raised in that shared culture.

When dealing with non-human entities, intelligence becomes a nearly nonsensical term. The traditional solution to this is to simply assume that intelligence means human intelligence. The Turing test is based on being able to imitate a human, and the criterion for artificial general intelligence is based on being able to do all of the tasks that a human can do. This is useful in that it gives us clear benchmarks, ones which we can easily see have not been met. However, it is not useful in determining AI risk, AI usefulness, or anything else having to do with applied intelligence.

This is immediately obvious if we consider the history of computers; we have not judged the value of computers on how much they were like humans, but rather on the amount and permanence of their memory, the speed at which they function, and their portability... among many other things. In these areas computers have consistently surpassed their own earlier versions, and now easily surpass all humans -- you cannot remember as much as your computer, or for as long; you cannot perform the operations your computer does at the speed it does them; and you are considerably larger and clumsier than a smartphone. While a computer with human levels of common sense would be a valuable thing, most of us already prefer a marginal computer over a marginal human.

As computers continue to develop into forms of intelligence ever more divergent from our own, the question of superintelligence becomes more nebulous. We can easily list some aspects of intelligence that computers -- AIs or otherwise -- currently have in excess of us:

Likewise, we can fairly easily list a number of areas in which AI is currently lacking:[2]

What is not easy is determining when this odd sort of intelligence becomes a concern -- both because we have no real concept of how relatively 'strong' this sort of IQ is, and because we have no idea what sort of actions it might take, given agency. Nor do we understand what agency is in this context,[3] as control over one's environment and goal-directed action are not very meaningful when we only very vaguely share the same environment and goals.

Perhaps the only functional definition of a superintelligence is that of a being that has more control over the world than humans do. This is a fairly high bar, but quite possibly the correct one. Our primary concern with superintelligences is that they have the potential to be highly powerful. The power that we are concerned with is not the ability to calculate pi to extreme lengths, but the ability to design nanobots, viruses, and corporations. As humans can already do many of these things, and expect to be able to do more of them in the future, a being with human-like intelligence -- but more of it -- might be able to supersede us in these areas. It is also possible that a being with non-human-like intelligence could do so far better.

We currently have no examples of superintelligences, but we do have a very well-studied example of an algorithm-directed developmental process effectively taking over its environment. A fairly small set of complex chemicals has, without direction or support, developed DNA, jellyfish, elephants, and us. In order to solve problems that we do not intuitively see, this process has developed peacock tails, 350,000 species of beetle, and viruses. In inventing us, it also gave us our rather spurious experience of the color red, a predilection towards religion, and the offside rule. We have no good reason to believe that a directed process would be any less weird.

It is usual, and dangerous, to believe that we have mapped out the general range of what computers can do, what intelligent beings want, and what thinking is. It is important to remember that intelligence is not a simple physical system like lepton interactions, but a complex emergent system more akin to protein evolution; the end result is complex, unpredictable, and highly dependent on environment. It is no easier to predict the form of a superintelligence, a priori, than it is to predict an ape turning dirt and rocks into televisions.



Footnotes:

[1] For example, one common IQ test has a Picture Completion scale in which subjects are asked to look at a series of pictures and identify what is missing. Some of these we might reasonably hope that an AI could complete; for example, a picture of a pitcher, tipped enough to pour water into a cup, and a cup partially filled with water -- but missing the stream of water pouring from the pitcher to the cup. Granted, the AI would have to decode the simple line drawings, hypothesize what the objects are and what the wavy lines mean, and guess that the problem is one of physics simulation. Not easy, but something that is reasonable to hope for.

Compare this to the drawing of a rabbit, missing its tail. In this case, the AI would have to compare this rabbit to a prototypical rabbit that is not like real rabbits. Indeed, a Google image search for rabbits shows that it is common for rabbits to have small tails that are neatly tucked away, not visible to the camera. Given the list of things that the rabbit in the drawing is missing (fur, nostrils, a vascular system, extension into the third dimension), it is perhaps unreasonable to require that a subject guess something that most rabbits do not visibly have -- a big, puffy tail.

[2] It is tempting to add creative thinking to this list, but it is becoming apparent that computers are good at finding edge instantiations, thinking deeply into complex problems, and exhausting multiple combinations. It's not entirely clear what is missing that is needed for "creative thinking". One possibility is that creative thinking requires not only seeing many options and picking out new ones, but also making intuitive leaps beyond what is logically suggested by the available data. If this is the case, it only remains to distinguish it from insanity, and we can indeed list it as a cognitive skill.
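
As a concrete sketch of what "finding edge instantiations" and "exhausting multiple combinations" can look like, consider the following minimal Python example. The actions, scores, and objective are all invented for illustration: a brute-force search that maximizes only the objective it was given, with no model of our unstated preferences, settles happily on a plan no human would endorse.

    from itertools import combinations

    # Invented action set: each action adds some "brightness", which is the only
    # thing the stated objective measures; the "acceptable" flag exists in our
    # heads but not in the objective.
    actions = {
        "open_curtains":     {"brightness": 3,  "acceptable": True},
        "turn_on_lamp":      {"brightness": 2,  "acceptable": True},
        "paint_walls_white": {"brightness": 1,  "acceptable": True},
        "set_house_on_fire": {"brightness": 50, "acceptable": False},
    }

    def score(plan):
        """Score a plan purely by the stated objective: total brightness."""
        return sum(actions[a]["brightness"] for a in plan)

    # Exhaustively enumerate every non-empty combination of actions...
    all_plans = (plan for r in range(1, len(actions) + 1)
                 for plan in combinations(actions, r))
    # ...and keep the one that maximizes the stated objective.
    best = max(all_plans, key=score)

    print(best)         # the winning plan includes "set_house_on_fire"
    print(score(best))  # because nothing in the objective penalizes it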

[3] Agency is a tricky subject. We do not have a good explanatory model of human consciousness, but we are certain that it is an important part of how we decide upon and modify our goals. We do have a fairly good idea of how computers, including AIs, choose their goals (we program them). Current agency issues in AI have been explored in concepts such as hard takeoff, edge instantiation, and the treacherous turn, but these assume that AIs continue to operate under the programming given to them by us. If consciousness is an emergent property that leads to unpredictable behaviors, we also have to wonder whether there might be other things like consciousness that we do not have, and that might develop in a system that is obviously very unlike us in many ways.
