A very interesting intellectual playground.

To understand this, you need a bit of background. Cognitive science is mostly concerned with creating models of the mind. These models are best tested by using them to build some stab at artificial intelligence. Whether and how this should happen is the subject of much debate.

One way of approaching the problem (and for a long time it was the only way) is the symbolic approach. This is the one closest to what computer programmers are familiar with: treat the mind as an information processor, and it is only a matter of finding out what rules it uses to process that information. Models like this are designed from the top down: start with "thinking", divide that into some more manageable bits, divide those bits into still more manageable bits, and in principle you will eventually end up with a complete theory. The problem is that this hasn't worked yet. Deciding how to divide up the bits, and which bits are which, creates a welter of paradoxes and problems. See for instance the homunculus problem. Also, things get ridiculously complex by the time you get down to the level of neurons.
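To make the symbolic picture concrete, here is a minimal sketch of a production system in Python. The facts and rules are invented for this illustration; real symbolic architectures such as SOAR or ACT-R are vastly more elaborate, but the core loop is the same: match rules against symbols in working memory, fire, repeat.

    # A toy production system: cognition as rule-driven symbol manipulation.
    # The facts and rules here are made up for the demo, not drawn from any
    # actual cognitive model.

    facts = {"has_petals", "smells_sweet"}

    # Each rule: if all conditions hold, add the conclusion to working memory.
    rules = [
        ({"has_petals", "smells_sweet"}, "is_flower"),
        ({"is_flower"}, "can_be_picked"),
    ]

    changed = True
    while changed:                      # forward-chain until no rule fires
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'has_petals', 'smells_sweet', 'is_flower', 'can_be_picked'}

The hard part, of course, is not the loop; it is deciding what the symbols and rules should be, which is exactly where the paradoxes creep in.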

A newer approach, the connectionist approach, turns the whole thing upside down (to vastly oversimplify). Connectionists start with neurons and work their way up. Neural nets are models of neurons, used to hypothesize about how the human brain gets from "on" and "off" to "Oh, look, that's a flower," and to figure out how to get artificial minds to do the same thing. The name "connectionism" comes from the holistic magic that seems to happen when you work with these models. None of the neurons is doing anything particularly intelligent. But groups of artificial neurons, connections of artificial neurons, do seem to behave in proto-intelligent ways. They adapt, they recognize, and they learn. Spooky.
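Here is a minimal sketch of that magic, assuming nothing beyond NumPy: a tiny two-layer net taught XOR by backpropagation. The layer sizes, learning rate, and iteration count are arbitrary choices for the demo. Each unit only sums its inputs and squashes the result, yet the trained connections compute something no single unit can.

    import numpy as np

    rng = np.random.default_rng(0)

    # The four on/off input patterns and their XOR targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4))   # input -> hidden connections
    b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1))   # hidden -> output connections
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)       # hidden activations
        out = sigmoid(h @ W2 + b2)     # the network's guess
        # Backpropagation: nudge each connection against its share of the error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))   # should end up near [[0], [1], [1], [0]]

Nobody tells the net what XOR is. Each weight update is purely local arithmetic, and the behavior emerges from the connections; that is the connectionist bet in miniature.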

The bottom line is that neither approach is going to work alone. Using a symbolic approach to explain why a given neuron fired at a given time is ridiculous. Likewise, describing an "idea" as an emergent property of a vast group of neurons, while it sounds cool, doesn't get us very far in understanding what an idea is and how it works. I see the two approaches as tunneling into either side of a giant snowbank, like I used to do with my friends when I was a kid. Eventually, with enough work, the two tunnels will meet in the middle. Of course, we have to make sure the tunnels don't pass each other by. The hardest part of making this snowfort will be the interface between the top-down and the bottom-up.

There is another approach I've been thinking about. Consider Everything. No one designed Everything (well, someone designed the engine). It happened, and continues to happen, in bits and pieces. Perhaps a functional model of the mind, and a functional AI, would come about in a more evolutionary way: building bits and pieces, using old things as scaffolding, new things as food, and everything else as whatever it happens to be.
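In that spirit, here is a minimal sketch of undesigned design: a mutate-and-select loop in the style of Dawkins's "weasel" demonstration. The alphabet and target phrase are stand-ins invented for the demo; this claims to model nothing about minds, only the idea of building by accumulation.

    import random

    random.seed(1)
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    target = "oh look thats a flower"     # an arbitrary trait worth keeping

    def fitness(genome):
        return sum(a == b for a, b in zip(genome, target))

    genome = [random.choice(ALPHABET) for _ in target]
    generation = 0
    while fitness(genome) < len(target):
        child = genome[:]                  # copy with one random mutation
        i = random.randrange(len(child))
        child[i] = random.choice(ALPHABET)
        if fitness(child) >= fitness(genome):   # selection: keep what works
            genome = child
        generation += 1

    print("".join(genome), "after", generation, "mutations")

No one plans the final string; working pieces survive and broken ones are discarded, generation after generation. Whether anything mind-like could be grown this way is an open question, but it is how Everything got built.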
