The Next Fifty Years of Computing
A commentary on Peter Denning's book
LEONARDO F. URBANO
The computer revolution took off in the late 1980s, when computers became applicable to many uses (particularly business) and affordable to average-income consumers. In 1997, processor speed, storage capacity, and transfer rates were improving at roughly 60% annually, doubling every eighteen months (a figure derived from the rate at which transistor density on processors increases). Denning's prediction is an idealized, virtualized society in which computers merge seamlessly into people's lives by exploiting cheap, high-tech sensors: speech recognizers, synthetic video, GPS, biomedical sensors, radar, sonar, laser range-finders, and more. Most of his claim derives from the success of the internet as an information superstructure and its influence on sensor development, with the goal of making the human-computer interface as smooth (and as real) as interpersonal contact with another human being.
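As a quick sanity check of that figure (my own arithmetic, not Denning's): a 60% annual improvement, compounded, does indeed work out to roughly a doubling every eighteen months.

```python
import math

# Sanity check (my own arithmetic, not from the book): does 60% annual
# growth imply a doubling roughly every eighteen months?
annual_rate = 1.60                      # 60% improvement per year
growth_18mo = annual_rate ** 1.5        # compound over 1.5 years
print(round(growth_18mo, 2))            # about 2.02, i.e. roughly a doubling

# Equivalently, solve 1.6^t = 2 for the doubling time t (in years).
doubling_months = math.log(2) / math.log(annual_rate) * 12
print(round(doubling_months, 1))        # about 17.7 months
```

So the two ways of stating the trend, 60% per year and doubling every eighteen months, are consistent with each other.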
When computer programs and hardware become sophisticated enough that a perceiver must think twice (or even once) about the validity of what his senses are telling him, we will have reached a cultural point that can be called the Culture of Simulation. That age is not in the far future; it is now. In the 1980s, psychological studies of children revealed a strong tendency to personify the computer, using adjectives like "thinking" and "remembering" to describe its functions. By the 1990s, children were using more appropriate language for those functions. The question of living computers then comes into play, and Denning presents a few children's reports on their definitions of the living creatures in a Sim game. He does not argue well either for or against computational algorithms being alive. His error stems from too specific a sample of the common characteristics of living things. It would be more appropriate to choose the most basic and general definition of life by appealing to what distinguishes it from every inanimate or non-living part of the universe: that life ceases to exist when certain actions are not taken. Life, therefore, is a property that can be attributed to any system that performs self-sustaining action. A computer virus that replicates to keep its code on a machine is alive. A robot that will stop functioning if it does not expose its solar cells to light is alive.
I really disliked this chapter because it exposed Denning's lack of research into artificial intelligence; I hope he is not giving lectures the way this chapter was written. On the differences between computational and biological cognitive systems, he says, "… one is carefully designed according to well-determined goals and following systematic principles. The other evolves through a unique process that is affected by a wide range of variables, severely path dependent, fundamentally kludgy, difficult to predict, and difficult to emulate," and he sticks to this distinction throughout the chapter. Neither description applies cleanly to either cognitive system. The human brain is composed of fundamental processing units, and its behavior is a product of both genetics and environment: neurons are strung together in complex arrays at birth as a result of genetics, and they form new connections and interconnections in response to the environment. An artificially intelligent system's initial computational configuration (its "genetics") is determined not by DNA (although new research can encode neural-network structure in blueprint chains) but by its corresponding evolver: man. If it is an adaptive system (which is what a proper neural network is), it will alter itself to conform to some predetermined goal, or to a subsidiary goal selected in service of the predetermined one. A better way to delineate human cognition from computational cognition is to separate cognition into three distinct levels. The first level is sensory: merely the electrical signals produced at sensory receptor sites at neural terminals. Such information tells a primitive brain that something is. A more complex brain can group multiple sensations into percepts. Percepts give rise to a perceptual consciousness and afford an animal or a computer the ability to tell what something is.
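The claim that an adaptive system "alters itself to conform to a predetermined goal" can be made concrete with the simplest possible example. The sketch below is my own illustration, not anything from Denning's book: a single perceptron adjusts its weights until its output matches an invented target function (logical OR).

```python
# An adaptive system altering itself toward a predetermined goal --
# my own minimal sketch, not from the book. A single perceptron
# nudges its weights after every mistake until it reproduces the
# goal behavior (here, logical OR).

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of inputs plus bias is non-negative."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

# The predetermined goal, as invented training examples: learn OR.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                          # a few passes over the data
    for inputs, target in examples:
        error = target - predict(weights, bias, inputs)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error                 # self-alteration toward the goal

print([predict(weights, bias, x) for x, _ in examples])  # [0, 1, 1, 1]
```

Nothing about the adjustment rule is "carefully designed according to well-determined goals" in Denning's sense; the final weights emerge from repeated error-driven correction, which is exactly the kind of path-dependent process he reserves for biology.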
Most animals have a perceptual consciousness; only man is known, for certain, to possess a conceptual consciousness. A conceptual consciousness integrates multiple percepts into concepts, which allows man to extend the range of his awareness beyond what is merely before him into what could be before him. He can visualize and anticipate, analyze and discern patterns, record and postulate: all complex behaviors afforded by a conceptual consciousness. I don't think AI research today is dedicated to figuring out the process of thought so much as to applying preexisting thought processes. A good AI research plan would follow the layers I described; they can all be modeled from the most fundamental and simple component, the neuron, be it organic or algorithmic.
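The first two layers I described, raw sensations grouped into a percept, can be sketched with the most basic algorithmic neuron. This is my own illustrative toy, not from the book, and the signals and weights are invented; it shows only how a threshold unit turns that something is (raw intensities) into a crude what something is (a fired/not-fired judgment).

```python
# A minimal algorithmic neuron -- my own illustrative sketch, not
# from Denning's book. Raw sensory signals (layer one) are weighted
# and summed; crossing the threshold yields a crude percept
# (layer two): a yes/no judgment that something is there.

def neuron(signals, weights, threshold):
    """Fire (return 1) if the weighted sum of signals crosses threshold."""
    activation = sum(s * w for s, w in zip(signals, weights))
    return 1 if activation >= threshold else 0

# Invented inputs: three receptor sites reporting signal intensity.
signals = [0.9, 0.1, 0.4]
weights = [0.5, 0.3, 0.2]   # invented weights, the unit's "genetics"
print(neuron(signals, weights, threshold=0.5))  # fires: 1
```

Stacking layers of such units, so that the outputs of percept-forming neurons feed still higher-level units, is the obvious algorithmic analogue of the sensation, percept, concept hierarchy.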
Denning loses further credibility by continuing to draw conclusions after merely scratching the surface of commonplace misconceptions about the nature of consciousness and the concept of "thought." He tends to drop the context of his discussions and frequently substitutes a field-specific scientific analysis for a metaphysical absolute. Man's basic choice, his most fundamental choice, is to think or to evade that effort. Thinking is not an automatic, passive state, as Denning describes on p. 119; it is an active, focused consciousness directed by volition. He sounds as though he wrote this book right after taking a high school biology class, and it gets irritating when he amuses himself with baseless and divergent gedankenexperiments into the possibility of a really sophisticated consciousness, which he neither defines nor cares enough to research for the purposes of a book that is already outdated.