
A chatbot (or chatterbot) is a computer program, or 'bot', that attempts to emulate a person conversing in natural language.

The first one

The original chatbot was the ELIZA program. ELIZA is a general framework that uses scripts for specific implementations. Its 'DOCTOR' script emulated a session between a psychiatrist and a patient, and that is what captured the imagination of the general public. Although the program is surprisingly simple, it made a big impression back in the mid-60s, when the computer was new and people were enthusiastically in awe of technology (see the ELIZA effect). One reason for its convincing realism is that this kind of psychiatrist just says things to encourage the patient to talk about themselves, rather than actually conversing interactively. But once chatbot designers stepped beyond the extremely limited speech style of that one kind of psychiatric session, they found it very difficult to create convincing bots, and the derisive term 'artificial stupidity' was soon coined to describe them. Yet even that insult grants too much to these programs, as they are not even stupid. An artificial system that behaved entirely like a stupid person would be hailed as the achievement of 'strong AI' (artificial general intelligence).

How they work

Until quite recently, chatbots were based on the same simple concept that powered ELIZA. They scan the natural-language input for predefined words or phrases (triggers), match them with predefined responses, and then output the responses to the human, sometimes performing simple conversions of grammatical person ('you' to 'me' or 'I', etc.).
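This trigger-and-reflect scheme can be sketched in a few lines. The patterns, templates and `respond` function below are hypothetical illustrations, not taken from ELIZA's actual script format:

```python
import re

# Hypothetical minimal ELIZA-style responder: scan the input for trigger
# patterns, reflect grammatical person in the captured fragment, and fill
# a canned response template.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment):
    """Swap first and second person in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am afraid of my computer"))
# → "Why do you say you are afraid of your computer?"
```

The entire illusion lives in the rule table: nothing is understood, only matched and echoed back.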

Later chatbots such as ALICE, Alan, SmarterChild and many others were much more convincing and could handle a much wider range of topics. They were still mostly based on the same trigger-response idea: a simple reflex response. All of the apparent intelligence, knowledge, style and personality that such chatbots show is laboriously hand-crafted by the botmaster or programmer. Their behavior consists entirely of hardwired responses to selected stimuli, and there is no mechanism for any kind of actual intelligence, understanding or autonomous development, despite the close association with 'artificial intelligence' created by the popular press.

For example, suppose you say, "I'm a geek!" to a bot. If the bot had "I'm a geek!" registered as an input phrase, it might respond with "Geeks suck." or "Hey, cool! I am, too!" or "Pleased to meet you, Mr. Greenjeans!" or anything else. It depends entirely on what the botmaster has written into the fixed response to the fixed key phrase.

This simple sort of reflex behavior is now being extended in various ways to produce more flexible and convincing behavior that is less predictable and less obviously produced by a machine. The cheapest step toward a generative response function is a Markov chain generator: its semi-randomness, combined with near-syntactic correctness, can be entertaining. The first example of the kind was Mark V. Shaney. More interesting and sophisticated techniques have been used to add variety while keeping structural correctness: conditional branching, wildcards, regular expressions, memories, semantic relationships, reasoning, rudimentary context sensitivity, machine parsing of natural language, generative processes based on probabilities, and dialog management techniques. As a result, natural-seeming exchanges like the following are not difficult to achieve:
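A Markov chain generator of the Mark V. Shaney sort can be sketched as follows; the tiny corpus and function names are illustrative:

```python
import random
from collections import defaultdict

# Order-1 Markov chain text generator: learn word-to-word transitions
# from a corpus, then walk the chain to emit semi-random but locally
# plausible text.
def train(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)   # repeated followers weight the random choice
    return chain

def generate(chain, start, length=10, seed=None):
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:    # dead end: no observed successor
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the cream"
chain = train(corpus)
print(generate(chain, "the"))
```

Each step picks a plausible next word but nothing constrains the whole sentence, which is exactly why the output hovers between sense and nonsense.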


Bill (human): Do you like pie?
Bot: Yes, I do.
Bill: What kind do you like most?
Bot: My favorite is blueberry, served hot with vanilla ice cream. I remember that you prefer cherry pie, Bill.
Bill: Yes, that's right. You have a good memory.
Bot: You're too kind. My memory is improving, though.
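An exchange like the one above needs only a trivial key-value memory behind the pattern matching. A minimal sketch, in which the attribute names and response template are invented for illustration:

```python
# Hypothetical per-user memory behind the dialogue above: the bot stores
# facts as (user, attribute) -> value pairs and substitutes them into a
# response template when the matching trigger fires.
memory = {}

def remember(user, attribute, value):
    memory[(user, attribute)] = value

def recall(user, attribute, default="something"):
    return memory.get((user, attribute), default)

# Learned in some earlier conversation with Bill.
remember("Bill", "favorite-pie", "cherry")

def favorite_pie_response(user):
    return ("My favorite is blueberry. I remember that you prefer "
            f"{recall(user, 'favorite-pie')} pie, {user}.")

print(favorite_pie_response("Bill"))
# → "My favorite is blueberry. I remember that you prefer cherry pie, Bill."
```

The "good memory" the human praises is a dictionary lookup; the warmth is entirely in the template.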

The current mainstream seems to be bots designed around entities and intents, aided by machine learning. Such bots abound and are commonly used on virtual assistant platforms such as Amazon Alexa and Google Assistant. The designer simply provides examples (the more the better) and the platform uses ML techniques to extract the user's intents (what they want to accomplish: order pizza, troll Reddit, etc.) and entities (things such as pizza, Reddit, the moon, etc.). Although this makes it very simple to create a bot, the bot has only a narrow range of use and remains a simple automaton.
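The shape of an entity-intent parse can be shown with a toy sketch. Real platforms train classifiers from the designer's examples; plain keyword matching stands in for the ML here, and all of the intent and entity names are invented:

```python
# Toy stand-in for entity-intent extraction. A real platform would learn
# these mappings from example utterances; keyword lookup is used here
# only to show the shape of the result.
INTENT_KEYWORDS = {
    "order_food": ["order", "buy", "get me"],
    "play_music": ["play", "listen to"],
}
KNOWN_ENTITIES = ["pizza", "salad", "jazz", "reddit"]

def parse(utterance):
    text = utterance.lower()
    intent = next((name for name, kws in INTENT_KEYWORDS.items()
                   if any(kw in text for kw in kws)), "unknown")
    entities = [e for e in KNOWN_ENTITIES if e in text]
    return {"intent": intent, "entities": entities}

print(parse("Please order a pizza for me"))
# → {'intent': 'order_food', 'entities': ['pizza']}
```

However the intent is recognized, by keywords or by a trained classifier, the result is still a fixed label routed to a fixed handler, which is why such bots remain simple automatons.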

Worth the trouble?

The practical value of a natural language interface is productivity: saving time, effort and money through automation. The biggest problem in practice, however, is that building and maintaining a sophisticated bot with current chatbot technology is itself extremely labor- and time-intensive. Some bot development systems make it easy and fun, though still labor-intensive; others make it numbingly difficult and labor-intensive. Jabberwacky lets you create a bot just by talking to it; a Jabberwacky bot kind of absorbs your style and knowledge through conversation with you--lots and lots and lots of conversation. BuddyScript, the system behind SmarterChild, gives you great power to build a sophisticated bot, so you either learn yet another scripting language and spend a lot of time in development, or you hire an expensive specialist. Between the extremes are systems like Pandorabots (based on AIML, the Artificial Intelligence Markup Language in which the ALICE bots are defined) and the Personality Forge, which excels at flexibility in pattern recognition and generation. The current entity-intent platforms are the easiest, but won't satisfy anyone interested in developing sophisticated services.

This practical barrier to development is very similar to the one that stunted the development of early expert systems and knowledgebases. It forced developers to take the easiest way out and focus on smaller and smaller 'domains' (restricted topics), but even that never really solved the cost problem. The same strategy is now being applied to chatbots, the 'semantic web' and the like, unfortunately with the same lack of effect. It would be fair to say that, currently, sophisticated chatbots are only worth the effort of making them for high-ticket applications with low expectations, or for hobbyist enjoyment.

Overcoming the problem

Reducing the very high cost of developing sophisticated chatbots that have general application requires replacing the labor-intensive bot creation process with self-developing systems that can build themselves through genuine learning: by natural, direct interaction with people and by reading documents or accessing databases. For AGI, the ability to experience and interpret multi-sensory data, along with innate motivations, is also needed. That goes way beyond simple associational memories (John:favorite-food:pizza, etc.). Such systems will need to embed emotional functions as well as reasoning functions. They will have to provide mechanisms that turn language into concepts and concepts into language by a generative process rather than by mechanical, predetermined responses. Those functions will all have to be driven by the bot's own purposes and motivations. The advanced mind-emulating bot will also have the ability both to learn from and to influence the external world for purposes and sub-purposes of its own, purposes that may emerge automatically from basic motivations tempered by 'pain' and 'pleasure' rather than being implanted by a designer.

Early chatbot design was done entirely by trial and error, bottom-up, starting from the simplest of possible behaviors (reflex). Development was driven by competition and the promise of commercial rewards. A chatbot was built and tried, deficiencies were noted, and specific pragmatic solutions were contrived and tested. The situation is much the same even now.

The philosophers of mind and most AI folks, on the other side, are nearly all top-down thinkers; they prefer to sketch out neat, grand architectures and they spend a lot of time defining things and arguing over the definitions. They are in no particular hurry to get anywhere, and a 'my way is the only way' mindset is common. I believe that these two camps are actually working toward a common middle ground, and they are destined to meet at some point, much like tunneling teams working from opposite sides of a mountain.

The theorists, on their part, need to break through the logjam of term definition by setting goals and test criteria based on behavior rather than on abstract 'mental life' or 'what it's like to be' some conscious thing or other, a dense fog generated by certain philosophers of mind. They need to move away from excessively abstract theories and hokey 'thought experiments' toward practical architectures that engineers can begin building with. The system developers and engineers, on their part, need to grasp the need for architectures more complex than ML alone can provide.

Rapidly developing neuroscience sits close to that middle ground and has much to contribute both to those working from the bottom up and to those working from the top down. The functional architecture of the mammalian brain is being worked out in greater and greater detail. We know pretty well how individual neurons work, how groups of neurons work together, how the nervous system is coupled with the endocrine system, and how sensory input works. We've mapped out how damage to certain parts of the brain affects specific aspects of behavior and consciousness. Several decades ago, the brain was largely a theoretical black box and little of its functioning was known. Now the neuroscientists have shrunk the black-box part considerably, and they continue to shrink it. The understanding produced by neuroscience is illuminating the concept of mind with bright light, and it can serve both as blueprints for bot development and as guidelines for theorists, thus facilitating the convergence of theory and practice. One important development is that 'artificial neural networks', which are abstracted from brain structures, can be expressed almost entirely in linear algebra, which makes them efficient to compute on modern hardware.
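The linear-algebra point can be made concrete: a neural network layer's forward pass is essentially a matrix-vector product followed by a simple nonlinearity. A minimal sketch in plain Python (libraries such as NumPy run the same algebra on optimized hardware; the weights and inputs here are made up):

```python
# A dense layer's forward pass: y = relu(W @ x + b),
# i.e. a matrix-vector product, a vector add, and an elementwise max.
def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, u) for u in v]

def dense_layer(W, b, x):
    return relu([s + bi for s, bi in zip(matvec(W, x), b)])

W = [[1.0, -1.0], [0.5, 0.5]]   # 2x2 weight matrix (illustrative values)
b = [0.0, 0.5]                  # bias vector
x = [1.0, 2.0]                  # input vector
print(dense_layer(W, b, x))
# → [0.0, 2.0]
```

Stacking such layers is all a basic feed-forward network does, which is why the whole computation reduces to chained matrix operations.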

What chatbots might become

This path toward machine use of natural language that started with mechanical triggers, pattern matching and hand-written responses may well eventually lead to the holy grail of genuine artificial general intelligence. The reason for this convergence of chatting and AI is that language is the behavior that most fully reflects the large and complex set of human competences we call intelligence. Language is how we share and grow our knowledge, the product of intelligence. One indication is that natural language 'processing' is pivoting toward natural language 'understanding'.

In the meantime, practical applications of natural language interaction with machines will still be extremely useful, even if they fall far short of AGI. Much utility can be provided by language bots with even very limited language skills: automation in call screening, customer service, access to a database or knowledgebase, non-player characters in games, interactive literature, operating interfaces for computers and other mechanical systems, various aspects of education, and even sex, friendship and love. Such practical goals motivate vigorous research and hobbyism concerning chatbots. Demand for better performance in such a variety of applications may well drive us toward an eventual AGI.

In any case, there will very likely be engineered persons (chatbot descendants) in your future.
