Last update: April 2005

The March 20th, 2004 issue of New Scientist carried a story, Smart software keeps chatrooms safe, which marked the first coverage of a chatbot supposedly powerful enough to pass the Turing test. By April 4th, the online version of that article had been replaced with the following:

Serious doubts have been brought to our attention about this story. Consequently, we have removed it while we investigate its veracity.
Whilst the original story made some impact in other media outlets (such as the BBC or NTK/El Reg), the subsequent debunking generated far more of a storm online. Additional chat logs became available, and much work was done to profile Jim Wightman, the programmer responsible for the amazing claims, even to the extent of digging up USENET postings about historical revisionism, gun ownership and his feelings about pets. Whilst all this detracted from serious discussion of the AI, some important points were nonetheless made which suggest the system is at best a scare tactic to discourage paedophile activity, and at worst the ravings of a delusional attention seeker.

To establish the context, then: the Nanniebot is the AI component of a broader scheme called Chatnannies, run by Wightman and his wife. In fact, when I was first researching this writeup (after the Register article) and chanced across the Chatnannies website, I assumed it was a similar but unconnected project, as I could find no mention of the Nanniebots. Only further digging with Google confirmed that http://www.chatnannies.com/ was indeed the home of the system using Nanniebots. Now, of course, there's plenty of front-page news there about the AI claims, but my initial visit revealed a surprising lack of fanfare given the subsequent defence.

So here's how it's supposed to work, as far as I can tell. The Chatnannies website collects reports on chat room activity to give a suitability ranking, guiding parents in their attempts to make 'net use safer for their children. So far so good, if you're happy with that approach (I emerged unharmed from unfiltered 'net access at home, but I can see the difficulties for schools and other public means of access, where I routinely ran into blocking schemes). These reports are meant to come from two sources: human watchers known as Chatnannies, and 'field reports' from their AI equivalents, the NannieBots. Furthermore, these bots are meant to be able to identify grooming activity, so that suspect material can be forwarded to the police, as opposed to simply flagging rooms as child-suitable or not.

This is where the first objection creeps in. Wightman's original claims were of a fleet of 100,000 Nanniebots, yet I haven't identified a single field report by an AI on the website. In fact, it's hard to identify many reports of any kind: hit browse and you get offered just 26 rooms with 20 reports, hardly a comprehensive list, and I'm curious as to how a room gets on the list without a report at all! So regardless of whether the AI is capable of passing a Turing test or not, it's essentially failing in its task. Ultimately, as crankysysadmin observes in the Waxy.org comments:

So I'm not saying it's not possible, but I find it unlikely. If Jim can make his service scale, regardless of what drives it, then he becomes successful in the long run even if it's humans.
Which is the saddest thing about this mess: if Wightman is so passionate about protecting children, then with or without AI his site could have made some difference, whether as a deterrent or an in-depth resource for parents. As it stands, short of storming to victory at the Loebner Prize, the whole venture becomes discredited.

Many people have based their arguments against the NannieBot on the grounds that it's incredibly unlikely that a single programmer working alone could produce strong AI. Personally, I see no reason why a self-described social phobic who's been playing with AI friends since childhood couldn't have the flash of insight that other, more systematic research has lacked; it's the technical accomplishment I take issue with. In correspondence with The Guardian's Bad Science columnist, Ben Goldacre, Wightman claims that the conversation sets aren't kept on site at Wolverhampton, but instead comprise 18 terabytes of data in a secure facility under a mountain... which is all well and good, but just how much bandwidth would you need for 100,000 instances of the Nanniebot (running on just 4 Dell machines, which raises questions about the processing requirements as well) to simultaneously juggle chat room discussions and in-depth searching of that much data at a remote location in a timely manner? A rough estimate follows below. Another Waxy commenter, Anser, points out the following as well:

"I did like the "error:beginning core dump::modRecover" bit though - piped into the chat stream no less, like any good diagnostic... wonder which one of "the very latest Microsoft technologies" incorporates a "core dump" nowadays, hehe"

After some retaliation on that site, Jim Wightman has now gone quiet, although various changes are still being made over at the Chatnannies site (such as dropping the donations box, and letting the release date for the software slide ever further into the future). He claims that we should wait and see the results when he goes for the Loebner Prize; so updates to this writeup shall, skeptically, wait until then.

One year later... Well, no sign of Wightman at the 2004 Loebner contest. Moreover, New Scientist followed up on their original story; one of the three observers, Andy Pryke, noted extreme similarities to typical output of the ALICE bot. You can read his thoughts and detailed transcripts here. Amusingly, the test bot is named Wintermute :)

Below is my original and unmodified w/u, to make up for the absence of the New Scientist article it draws upon. For some more current sources, see the second list ('New Sources') at the end.


A Nanniebot is a software agent operating as part of the ChatNannies software developed by UK IT consultant Jim Wightman. Like many before it, the Nanniebot is an attempt to pass a version of the Turing test; yet here this is the means rather than the end.

Instead of developing the Nanniebots for AI research purposes, Wightman's software has been produced to assist in the detection of paedophiles 'grooming' in chat rooms. Thus, as if the imitation of a young person were not challenging enough, the software has to analyse the responses not just to maintain a convincing conversation, but to determine whether or not the other party is playing an imitation game themselves: an adult pretending to be a child.

Whilst current press coverage[1] gives little information on just how ChatNannies attacks this latter problem, the Nanniebots are said to operate on a neural net model, comparing new sentences with those previously encountered to tune performance towards more realistic use of language. Furthermore, each instance of the 'bot has a 'personality' determined by a multitude of randomly-adjusted parameters. Of particular interest is the fact that the software itself obtains new pop culture information from the Internet, to ensure that Nanniebots stay sufficiently current: for what child would be chatting about last week's news? A sketch of what such a design might look like follows.
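
For what it's worth, here is a minimal, speculative sketch of the claimed design; since no code or details have been published, every name, range and mechanism here is my own invention, with crude string similarity standing in for the claimed neural net:

    # Speculative sketch only, not Wightman's actual design: each bot
    # instance draws a random "personality", and new sentences are scored
    # for familiarity against those previously encountered. String
    # similarity stands in for the claimed (undocumented) neural net.
    import random
    from difflib import SequenceMatcher

    class NanniebotSketch:
        def __init__(self, seed=None):
            rng = random.Random(seed)
            # "a multitude of randomly-adjusted parameters" per instance
            self.personality = {
                "chattiness": rng.uniform(0.3, 0.9),
                "slang_level": rng.uniform(0.0, 1.0),
                "typo_rate": rng.uniform(0.0, 0.1),
            }
            self.seen = []  # sentences previously encountered by this bot

        def familiarity(self, sentence):
            """Best similarity (0..1) of a new sentence to past ones."""
            if not self.seen:
                return 0.0
            return max(SequenceMatcher(None, sentence.lower(), s.lower()).ratio()
                       for s in self.seen)

        def observe(self, sentence):
            self.seen.append(sentence)

    bot = NanniebotSketch(seed=42)
    bot.observe("did u see robocop on telly last nite")
    print(bot.personality)
    print(round(bot.familiarity("did you see robocop last night"), 2))

Even this toy version hints at the real difficulty: scoring each new sentence against everything previously seen scales poorly, which is precisely why the claimed 18 terabytes of conversation data makes the performance figures so hard to credit.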

Wightman seeks further funding to expand the system, although a reluctance to sign over ownership of the technology itself means that ChatNannies is hosted on four machines at his Wolverhampton-based IT firm, which he claims is sufficient to place 100,000 Nanniebots in various chatrooms.

Of course, there is the question of just how effective the system can be. Wightman personally screens any message exchanges that have been flagged as suspicious. This guards against false positives and reduces the volume of cases likely to be generated, but also introduces a bottleneck and an extra layer of human subjectivity; a hedged illustration of that bottleneck follows below. Field reports on the suitability of various chatrooms are supplied to the website, although the task of researching these rooms is also carried out by human volunteers. Offending transcripts and logged details such as IP addresses are directed to the appropriate authorities, although the police have not confirmed (perhaps due to the standard practice of not discussing active cases) that any of these tip-offs have been used for investigations.
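
To see how quickly that bottleneck bites, consider the following illustration; both the flagging rate and the review time are assumptions of mine, chosen conservatively:

    # Illustrative only: how fast a single human screener is overwhelmed.
    # Both rates below are assumptions; no real figures have been published.
    BOTS = 100_000
    FLAGS_PER_BOT_PER_DAY = 0.01  # assumed: one flagged exchange per bot per 100 days
    REVIEW_MINUTES = 5            # assumed: time to screen one transcript

    flags_per_day = BOTS * FLAGS_PER_BOT_PER_DAY
    hours_per_day = flags_per_day * REVIEW_MINUTES / 60
    print(f"{flags_per_day:,.0f} flagged exchanges/day, {hours_per_day:,.0f} hours of review/day")

Even at one flag per bot per hundred days, a single reviewer faces a thousand transcripts, over eighty hours of screening, every day.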

So it's still too early to tell whether ChatNannies will be a success in its stated aim of tracking down paedophiles. Yet its ability to imitate human conversation, or at least the highly restricted subset which constitutes short discussions of youth culture, can be more readily assessed.

The first AI construct to effortlessly pass the 'Turing Test', after more than 13 hours of conversation the AI was still undiscovered![2]
The creator claims that over 2000 chatroom users have failed to identify the NannieBots as artificial, but it's hard to see how exactly you'd test this: if you suspected the person you were talking to was in fact a 'bot, would you accuse it outright (which would presumably appear in the logs as a failure), or just wind up the conversation or go idle? The New Scientist article contains a transcript discussing pancakes and (ironically) Robocop; judge for yourselves.

Sources

  1. Smart software keeps chatrooms safe, New Scientist, 20th March 2004 issue; available online as Software agent targets chatroom paedophiles at
    http://www.newscientist.com/news/news.jsp?id=ns99994783
  2. From the chatnannies website,
    http://www.chatnannies.com/index.asp?pagetype=about
  3. First heard about through Software hunts for Net paedos, The Register
    http://theregister.co.uk/content/4/36381.html
  4. Google says: did you mean nanobot?


New Sources

  1. Waxy.org examination and comments, many additional links (thanks to JudyT for pointing this out much earlier, and the subsequent C!)
    http://www.waxy.org/archive/2004/03/23/nanniebo.shtml
  2. Guardian Bad Science article
    http://www.guardian.co.uk/life/badscience/story/0,12980,1176778,00.html
  3. Another transcript
    http://overstated.net/media/chatnannies.html