Man or machine?

If, online, you can’t tell the difference, the computer program you’re querying can be said to think.
So postulated Dr. Alan Turing, the famed British mathematician and cryptographer, in his seminal investigations into machine intelligence. The Turing Test remains the standard by which machines will be judged as we move ever closer to the robotic creatures of Spielberg and Kubrick’s A.I.

Ironically, over the past fifty years, computers have been able to “master” thought processes we might consider difficult—things like chess or medical diagnosis—far more easily than they have been able to “hear” or “speak” or “see.”

Dr. Jitendra Malik, who researches computer vision at the University of California at Berkeley, tells us that “Abilities like vision are the result of billions of years of evolution and difficult for us to understand by introspection, whereas abilities like multiplying two numbers are things we were explicitly taught and can readily express in a computer program.”

This discrepancy between human and machine intelligence is at the heart of a program that grew out of a very real problem facing Dr. Udi Manber, chief scientist at the internet portal Yahoo, in September 2000.

Yahoo at the time was infested with bots that masqueraded as teenagers and collected personal information from subscribers. Bots posted links to commercial websites, and hundreds of free Yahoo accounts—created by hacker scripts—were used for bulk mailings of millions of pieces of spam. Things were out of control.

“What we needed was a simple way of telling a human user from a computer program,” said Dr. Manber. The first thing the company needed to do, he reasoned, was prevent automated registrations.

Manber placed a conference call to Dr. Manuel Blum, a cryptographer at Carnegie Mellon University in Pittsburgh, who theorized that the signature failures of A.I. to date were precisely the solution to Manber’s problem.

Dr. Blum, together with his Ph.D. students Luis von Ahn, Nicholas Hopper, and John Langford, devised a sort of reverse Turing Test: a series of cognitive puzzles that, ironically, computers could generate and grade but could not pass.

Blum called the puzzles Captchas, short for “Completely Automated Public Turing test to tell Computers and Humans Apart.”

Their first Captcha was called Gimpy. It displayed seven randomly chosen words, overlapped and distorted. To pass the test, the user had to identify three of the words and type them into a dialog box. Humans could do this easily; machines could not. A simplified version of Gimpy, which shows a single word distorted against a complicated background, is now part of Yahoo’s registration process.
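The asymmetry at the heart of a Captcha, a server that can generate and grade a puzzle it could not itself solve, can be sketched as a simple challenge-response loop. This is a hypothetical illustration (the word list, thresholds, and function names are invented, and the actual image distortion is omitted), not Yahoo’s code:

```python
import random

# Invented word list for illustration; a real Gimpy drew from a dictionary.
WORDS = ["yellow", "ratchet", "oboe", "plasma", "fiddle", "quartz", "lantern"]

def make_gimpy_challenge(n_words=7):
    """Pick n_words random words; the server remembers all of them."""
    chosen = random.sample(WORDS, n_words)
    # In the real Gimpy, the words would now be rendered overlapped and
    # distorted into an image. The rendering step is what machines fail at.
    return chosen

def grade(chosen, user_answers, n_required=3):
    """Pass if the user correctly typed at least n_required displayed words."""
    correct = set(w.lower() for w in user_answers) & set(chosen)
    return len(correct) >= n_required

challenge = make_gimpy_challenge()
print(grade(challenge, challenge[:3]))              # True: a human who read 3 words
print(grade(challenge, ["wrong", "bad", "guess"]))  # False: a bot guessing blindly
```

Grading is trivial set membership; all of the difficulty is pushed into the rendering step, which humans decode effortlessly and programs (at the time) could not.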

A second Captcha, called Sounds, played a distorted, computer-generated audio clip containing a word or a sequence of numbers; the user had to type the word or numbers into the box provided.

Blum and his colleagues were not the first to attempt to shut down on-line registration mischief. AltaVista and PayPal already had systems in place, and Hewlett-Packard held a patent on text-based solutions. But Blum “did a great thing by recognizing that this problem is much more than solving a nuisance for Yahoo and AltaVista,” said Dr. Andrei Broder of I.B.M., who developed the AltaVista solution.

Blum recognized that there would always be breaches in security measures on the internet, because making and breaking codes is the very nature of cryptography. What he hoped to do was motivate researchers both to create better Captchas and to build programs that crack existing ones.

“Captchas are useful for companies like Yahoo,” said Dr. Blum, “but if they’re broken it’s even more useful for researchers. It’s like there are two lollipops and no matter what you get one of them.”

The earliest Captchas have already been broken, and the Captcha bar, so to speak, has been raised. Dr. Malik and his associate Dr. Serge Belongie have developed an object-recognition technique that has some of the properties of human vision.

A Gimpy-cracking program written by Greg Mori, one of Dr. Malik’s students, uses the Malik-Belongie methods to give the right answer on the simplified Gimpy over 80 percent of the time; the more difficult Gimpy puzzles fall roughly one time in three.

The research has many other applications as well. “We want to keep working on this in a principled way,” says Dr. Malik, “so we can use the same technique on an outdoor scene with buildings, trees, and cars.”

In addition to the military (which always seems to get its heavy foot in the door first), Captchas are already used in online polls, free e-mail services, and search engines, and to block worms, spam, and dictionary attacks on password systems.
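The dictionary-attack case is worth a sketch: a login system can demand a Captcha once an account has seen a few consecutive failures, stalling a password-guessing script while barely inconveniencing a human. Everything here (names, threshold, return values) is hypothetical, not any real system’s policy:

```python
# Hypothetical login gate: after CAPTCHA_THRESHOLD consecutive failures,
# no further attempts are graded until a Captcha is solved.
failed_attempts = {}      # username -> consecutive failed logins
CAPTCHA_THRESHOLD = 3

def login(username, password_ok, captcha_ok=False):
    """Return 'ok', 'captcha-required', or 'denied'."""
    failures = failed_attempts.get(username, 0)
    if failures >= CAPTCHA_THRESHOLD and not captcha_ok:
        # A script hammering the dictionary stalls here; a human
        # just reads the distorted word and continues.
        return "captcha-required"
    if password_ok:
        failed_attempts[username] = 0
        return "ok"
    failed_attempts[username] = failures + 1
    return "denied"

for _ in range(3):
    login("alice", password_ok=False)              # attacker guessing
print(login("alice", password_ok=True))            # captcha-required
print(login("alice", password_ok=True, captcha_ok=True))  # ok
```

The point is not the bookkeeping but the economics: each guess now costs a puzzle only a human can solve, so a dictionary of thousands of candidate passwords becomes useless to an unattended script.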

One of the best examples of why Captchas will play an ever-larger role in the future of computing was demonstrated back in November 1999, when E2’s very own http://slashdot.org ran an online poll asking: which is the best graduate school in computer science?

IP addresses of the voters were recorded, as is usually the case with online polls, to keep people (or machines) from voting more than once. Students at Carnegie Mellon (is there a pattern here?) devised a way to stuff the electronic ballot box thousands of times. Not to be outdone, by the next day students at MIT had programmed their own bot and the virtual voting race was on.
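The defense the poll relied on, one vote per IP address, amounts to a few lines of bookkeeping (an illustrative sketch, not Slashdot’s code), which is exactly why a script voting from many addresses defeats it:

```python
# One-vote-per-IP tallying: the only identity check is the source address.
seen_ips = set()
tally = {}

def vote(ip, school):
    """Record a vote unless this IP has already voted."""
    if ip in seen_ips:
        return False                  # duplicate source address: rejected
    seen_ips.add(ip)
    tally[school] = tally.get(school, 0) + 1
    return True

vote("128.2.0.1", "Carnegie Mellon")
vote("128.2.0.1", "Carnegie Mellon")  # same IP again: rejected
vote("18.0.0.1", "MIT")
print(tally)  # {'Carnegie Mellon': 1, 'MIT': 1}
```

A bot that rotates through thousands of addresses sails past this check, which is precisely the hole a Captcha in front of the vote button would close.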

MIT finished with 21,156 votes to Carnegie Mellon’s 21,032. Every other school registered fewer than 1,000.

I'll give the nod to Carnegie Mellon, for thinking it up in the first place.


“Human or Computer? Take This Test,” Sara Robinson, The New York Times, Dec. 10, 2002.
http://nytimes.com/2002/12/10/science/physical/10COMP.html?8hpib
http://www.captcha.net/