Moravec's paradox isn't really a paradox; it's just an observation that ran contrary to expectations: computers are remarkably bad at the things humans find effortless, and remarkably good at the things humans find hard.

Despite early expectations, it has proven devilishly difficult to teach robots to walk, computers to recognize faces, and chatbots to produce intelligible chat. Contrariwise, computers are really quite helpful when it comes to rocket science, statistical analysis, and cryptography. In other words, we can build an alien supergenius, but we can't build a functional lobster.

"The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come."
-- Steven Pinker, The Language Instinct.

The general explanation for this is that humans (and lobsters) are heirs to roughly half a billion years of evolution spent developing really good vision, whereas we have spent only about 2,000 years on statistical analysis. Meanwhile, we have spent perhaps two million years evolving our systems for language, and these are still glitchy.

One must be careful in applying such evolutionary arguments to actual timelines and technologies, but it is noteworthy that we have spent a very long time evolving the ability to make decisions and respond to changes in our environment, while critical thinking and abstract reasoning are comparatively recent arrivals on evolutionary timescales.
