Sometimes a sufficiently complex system, one with a very large number of interconnections and rules plus some capacity for self-organization or self-modification, will suddenly start to exhibit behaviors that the people who designed and built it never predicted.

These are known as emergent behaviors - behaviors that arise from within the system itself, produced by its internal properties and their interactions.

This is completely different from a "bug" found in traditional software. Regular software, such as a word processor or operating system, is a completely ordered set of instructions written for a specific purpose. Any unintended behavior is a bug: it can be tracked down to a problem in the code and fixed.

However, when software is designed to change its own workings, or to learn, it no longer operates in quite the same manner. A neural network, for example, is not completely hard-coded in how it works - the basic machinery is code, but the rest comes from its inputs and how they are handled. The number of interconnections between "neurons" in the network grows much faster than the number of neurons themselves, so as the network is made larger and larger, it becomes nearly impossible to track and plan how it will react.
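To get a feel for how quickly that happens, here is a back-of-the-envelope sketch in Python (the numbers are purely illustrative; a real network's wiring is far more constrained, but the scaling point stands): the count of possible pairwise connections among n units grows roughly as n squared.

    # Toy illustration: the number of possible pairwise connections among
    # n units grows roughly with the square of n, far faster than n itself.

    def possible_connections(n):
        """Number of distinct pairs among n units: n * (n - 1) / 2."""
        return n * (n - 1) // 2

    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} units -> {possible_connections(n):>12,} possible connections")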

Some of the more recent networks have begun exhibiting these emergent behaviors, reacting to inputs in totally unexpected ways, which has led some people to think the road to true artificial intelligence and artificial consciousness may lie in exactly this direction: giving the chaos and uncertainty of huge masses of connections a great deal of latitude.

Note that this is not always wanted. For example, a robot being developed to sweep for mines, which used a large neural net to facilitate learning, started showing unexpected behaviors - potentially dangerous in such an application.

A more benign example may be the computer game series Creatures, as the "creatures" in the game, the "Norns", were governed by a neural net "brain" that dealt with sensory input along with internal input from simulated body systems. Even though the creatures are all based on the same neural net structure, each begins to act uniquely, handling different inputs and events in its own manner. (One, for example, discovered it could stick an egg in the incubator and a new Norn friend would hatch, making it happier. Another took up hitting other Norns, as that made it happy. Some found ways to play with the toys together, while another Norn would refuse to share.)
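(As a toy illustration of that divergence - and emphatically not the actual Creatures architecture - consider the sketch below: every creature starts with the same blank preferences, but its own experiences reinforce different habits. The action names and payoff model are invented for the example.)

    # Toy sketch of how identically-structured "brains" can diverge: each
    # creature starts with the same preferences, but its own experiences
    # reinforce different habits. Not how Creatures actually works.
    import random

    ACTIONS = ["put egg in incubator", "hit another norn", "play with a toy"]

    def live(seed, steps=300):
        rng = random.Random(seed)
        # What each action actually pays off for this individual in its world;
        # hidden from the creature, discovered only by trying things.
        payoff = {a: rng.gauss(0.0, 1.0) for a in ACTIONS}
        preference = {a: 0.0 for a in ACTIONS}   # identical starting "brain"
        for _ in range(steps):
            if rng.random() < 0.2:               # occasionally explore at random
                action = rng.choice(ACTIONS)
            else:                                # otherwise do the favourite thing
                action = max(preference, key=preference.get)
            reward = rng.gauss(payoff[action], 0.5)
            preference[action] += 0.1 * (reward - preference[action])
        return max(preference, key=preference.get)

    for seed in range(4):
        print(f"creature {seed} ends up favouring: {live(seed)}")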

The New Testament of AI researchers. The Old Testament was the whole Knowledge Representation, Expert Systems, and Fifth Generation debacle. Those utterly failed to produce anything remotely like intelligence, and demonstrated how insufficient structured knowledge representation is for representing knowledge about anything like the real world.

So now we're told that the real secret is to disorganise knowledge. No longer will AI try to impose structure onto knowledge. In fact, we're now supposed to deliberately consider systems where there is no visible location of the knowledge (so much for the great Hofstadter-Dennettesque ideas of isolating parts of other people's minds and plugging them into ours).

Here's what we're told we should do. We should take some large system of simple elements and interconnect them (always "interconnect"; plain "connect" sounds weak) in various ways we don't understand. This is a good place to mumble something about Neural Nets or Genetic Algorithms (we don't really understand those, either).

Now, the logic goes, we have achieved a simulation: we have one "system" (the mind) which we don't understand, and we've managed to construct another system that we don't understand. Surely they'll share features, and we will have achieved a simulation of consciousness!

The skeptic will ask why we should think so. Emergent behavior is the trump card here. "You can understand how each unit separately works," we're told to tell our skeptic, "but large enough ensembles of these units will exhibit emergent behavior which you cannot predict!"
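To be fair, the "simple units, unpredictable ensemble" half of the claim is easy to demonstrate with a toy. Below is a one-dimensional cellular automaton (Wolfram's Rule 30): each cell obeys a rule you can state in one line, yet the overall pattern famously resists prediction. Whether that licenses the word "thought" is, of course, exactly what's in dispute.

    # Each cell looks at itself and its two neighbours and applies a fixed
    # three-bit lookup table (Rule 30). Every unit is fully understood;
    # the ensemble's pattern is still famously hard to predict.
    RULE = 30

    def step(cells):
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 31
    cells[15] = 1                      # start with a single "on" cell
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)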

Gee, that's great. And apparently not being able to predict the behaviour doesn't exclude claiming that it can (and will) be a simulation of thought. It could be that all this is true. But this is a claim requiring proof, not some article of faith.

I'll leave it to others (supporters, presumably) to node specific examples of emergent behavior which is more like thought than it is like the phase transition of ferromagnetic iron from its "natural" state to its magnetised state. I'll just give a sample of why this idea is not enough (on its own) to convince.

Materials are composed of atoms, and atoms are composed of neutrons, protons and electrons. The behavior of atoms and of materials is definitely a good example of "emergent behavior". So it would be natural to expect that the behavior of materials would follow from that of atoms, which would follow from that of elementary particles.

It doesn't. Theoretical calculations of the strengths of various crystals don't work out: the predicted values come out far higher than what is actually measured (this is because of flaws in the crystal; but we've no idea how to model flaws!). Glasses are even worse. And those are the simple forms of material behavior (I exclude gases, because their "emergent behavior" is immensely boring; it gets interesting during the transition to the liquid phase, which we also don't understand).

"BUT!" I hear you cry, "atomic behavior is given by behavior of elementary particles, which do behave by the known laws of Quantum Mechanics!"

True. And it is also true that the only atom to have been solved exactly from its constituents is hydrogen. Even something as simple as H2 (the hydrogen molecule) or He (the helium atom) can only be handled approximately. We cannot model a glass of water (or even just its contents).

How do we know our systems which exhibit emergent behavior will "think" rather than behave like a glass of water? Or do we think there's no difference between the two?

The human brain is one example of a system that at least appears to exhibit emergent behaviour that "thinks". It comprises a number of units (viz., neurons) which individually operate quite simply (from a black box perspective; internally, neurons are just as complex as any other cell). There are a great many such units (100 billion or so), interconnected in various ways we don't understand.

Here's the rub: it took billions of years for multicellular organisms to evolve brains of complexity sufficient to exhibit "thinking-like" behaviour. How do researchers intend to simulate aeons of evolution? Genetic algorithms. Now we are faced with two problems:

  1. Not only are we simulating unbelievably complex systems, but now we're supposed to be simulating thousands or preferably millions of different such systems and combining them in unspecified ways, millions of times over.
  2. How exactly do you go about measuring intelligence (thinkingness)? How long does it take to tell whether a simulated brain is intelligent? Or do we just simulate evolution in general, and hope we get intelligence rather than (say) lots of brute force?
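
For concreteness, here is a genetic algorithm in miniature, with a deliberately silly stand-in fitness function (count the 1 bits in a genome). Problem 2 above is precisely that nobody can say what the fitness function for "thinks" would look like.

    # Miniature genetic algorithm: keep the fitter half of a population,
    # breed replacements by crossover and mutation, repeat. The fitness
    # function here is a toy stand-in; that's the whole point.
    import random

    random.seed(0)
    GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 40

    def fitness(genome):
        return sum(genome)             # toy objective: number of 1 bits

    def mutate(genome, rate=0.02):
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness:", fitness(max(population, key=fitness)), "of a possible", GENOME_LEN)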
The Scruffies may have a better chance than the Neats, but current technology, or technology foreseeable in the next N decades, isn't going to give them a brain.
