The human brain is one example of a system that at least appears to exhibit emergent behaviour that ``thinks''. It comprises units (viz., neurons) which, viewed as black boxes, operate individually quite simply (internally, neurons are just as complex as any other cell). There are a great many such units (100 billion or so), interconnected in ways we don't understand.
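
For concreteness, here is a minimal sketch of the sort of black-box unit meant above: a weighted sum and a threshold. This is the standard textbook abstraction, not a claim about how real neurons work, and the specific numbers are arbitrary.

    import random

    def neuron(inputs, weights, threshold):
        # Fire (output 1) if the weighted sum of inputs exceeds the threshold;
        # everything going on inside the unit is hidden behind this simple rule.
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation > threshold else 0

    # A ``brain'' in this abstraction is just an enormous number of such units
    # wired together; the wiring, not the unit, is where the mystery lies.
    inputs = [random.random() for _ in range(10)]
    weights = [random.uniform(-1.0, 1.0) for _ in range(10)]
    print(neuron(inputs, weights, threshold=0.5))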

Here's the rub: it took billions of years for multicellular organisms to evolve brains complex enough to exhibit ``thinking-like'' behaviour. How do researchers intend to simulate aeons of evolution? Genetic algorithms. Now we are faced with two problems:

  1. Not only are we simulating unbelievably complex systems, but we're now supposed to simulate thousands or preferably millions of different such systems and combine them in unspecified ways, millions of times over (see the sketch after this list).
  2. How exactly do you go about measuring intelligence (thinkingness)? How long does it take to tell whether a simulated brain is intelligent? Or do we just simulate evolution in general, and hope we get intelligence rather than (say) lots of brute force?
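
To make the shape of the objection concrete, here is a bare-bones genetic algorithm sketch. Everything in it is a placeholder of my own choosing: the genome encoding, the population size and generation count (problem 1 says the real values would need to be astronomically larger), and above all the fitness function (problem 2 says nobody knows what it should be).

    import random

    POPULATION = 200       # toy value; the argument above suggests millions would be needed
    GENERATIONS = 500      # likewise absurdly small compared to aeons of evolution
    GENOME_LENGTH = 64     # a stand-in for ``a brain's wiring''

    def fitness(genome):
        # Problem 2 lives here: nobody knows how to score ``thinkingness'',
        # so this placeholder just counts bits, which rewards nothing brain-like.
        return sum(genome)

    def crossover(a, b):
        # Splice two parent genomes at a random cut point.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.01):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION)]

    for _ in range(GENERATIONS):
        # Keep the fitter half, refill the rest with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POPULATION // 2]
        offspring = [mutate(crossover(random.choice(survivors),
                                      random.choice(survivors)))
                     for _ in range(POPULATION - len(survivors))]
        population = survivors + offspring

    print(max(fitness(g) for g in population))

The loop itself is trivial; the entire difficulty is hidden in what a genome would have to encode and in what fitness() would have to measure.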
The Scruffies may have a better chance than the Neats, but current technology, or technology foreseeable in the next N decades, isn't going to give them a brain.