Ever since Alan Turing and Alonzo Church formulated the Church-Turing thesis, which holds that anything effectively computable can be carried out by an algorithm, mankind has sought to reproduce its own intelligence in machines. John McCarthy coined the term "artificial intelligence" to describe this new field in 1956. Turing predicted that by the year 2000, machines would be able to imitate human conversation well enough to be mistaken for human. Though modern artificial intelligence research has not produced the outcome Turing predicted, its advances are quite impressive nonetheless.

Once Turing and Church had laid that theoretical groundwork, computer scientists began the early stages of artificial intelligence development. They started by modeling simple logic problems in terms of semantic nets, and then recreating the algorithms representing those thought processes on computers. Since computer operations are inherently logical, rudimentary artificial intelligence programs appeared as early as the mid-1950s.

The first true AI program was the Logic Theorist, a project led by early AI pioneers Allen Newell and Herbert Simon. Logic Theorist proved mathematical theorems much as a human would: given the premises and a goal, it searched through the possible applications of its rules until it found a path from one to the other. With the development of Logic Theorist, many other computer scientists began to take an interest in artificial intelligence.
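Logic Theorist's basic strategy, searching outward from what is known toward a goal by applying rules, is easy to sketch in modern terms. The following Python fragment is only a toy illustration of that idea; the rules and statements are invented placeholders, not the actual logical axioms the program worked with.

    from collections import deque

    # Toy rewrite rules standing in for inference steps
    # (purely illustrative; not Logic Theorist's real rule set).
    rules = [
        ("A", "B"),   # from statement A we may derive B
        ("B", "C"),
        ("A", "D"),
        ("D", "C"),
    ]

    def find_proof(start, goal):
        """Breadth-first search for a chain of rule applications
        leading from the starting statement to the goal."""
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            statement, path = queue.popleft()
            if statement == goal:
                return path
            for premise, conclusion in rules:
                if statement == premise and conclusion not in seen:
                    seen.add(conclusion)
                    queue.append((conclusion, path + [conclusion]))
        return None

    print(find_proof("A", "C"))   # ['A', 'B', 'C']

The actual program searched far larger spaces and used heuristics to prune unpromising branches, but the goal-directed search at its core is the same idea.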

Logic Theorist was a precursor of what is now known as an expert system: it could not learn on its own, and the only rules it considered when making decisions were those its creators supplied. As a result, many AI researchers began to consider alternative methods of learning and input for artificial intelligence programs. Even so, expert systems built around a supplied database of knowledge continued to dominate the earliest stages of artificial intelligence research.
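A fixed-rule system of that sort can be sketched in a few lines. This is a minimal illustration with made-up rules; it shows how such a program can only ever derive what its hand-written rules allow and never learns anything new.

    # Each rule: if all the 'if' facts are known, add the 'then' fact.
    # The rules and facts are invented examples.
    rules = [
        ({"has_fur", "says_meow"}, "is_cat"),
        ({"is_cat"}, "is_mammal"),
    ]

    def forward_chain(facts):
        """Repeatedly apply the supplied rules until no new facts appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_fur", "says_meow"}))
    # e.g. {'has_fur', 'says_meow', 'is_cat', 'is_mammal'} (set order varies)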

A program known as ELIZA, written by Joseph Weizenbaum in the mid-1960s, came about after the development of the Logic Theorist. ELIZA was given scripted responses for a number of common conversational topics and could hold a passable conversation with a user. The point of ELIZA was not to pass the Turing test that Turing had proposed; its developer knew that goal was out of reach for the time being. Instead, ELIZA went a long way toward showing that machines could at least appear to handle human language convincingly.
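At its core, ELIZA relied on keyword and template matching rather than genuine understanding. A heavily simplified sketch of the idea, with made-up patterns that bear no resemblance to Weizenbaum's actual scripts, might look like this:

    import re

    # Toy keyword/response templates in the spirit of ELIZA's scripts.
    patterns = [
        (r"I need (.*)", "Why do you need {0}?"),
        (r"I am (.*)", "How long have you been {0}?"),
        (r"(.*) mother(.*)", "Tell me more about your family."),
    ]

    def respond(sentence):
        """Return the canned response for the first matching pattern."""
        for pattern, template in patterns:
            match = re.match(pattern, sentence, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."

    print(respond("I am feeling tired"))
    # How long have you been feeling tired?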

In the late 1960s a program called SHRDLU was developed. SHRDLU simulated a world of blocks of various shapes and colors. It could be instructed, in a simplified subset of English, to place certain blocks on top of others and to rearrange its little world. When asked which blocks supported which other blocks, SHRDLU answered correctly. However, when other researchers experimented with SHRDLU, they realized that its approach could not be extended to real-world situations. It was clear that the development of common sense lagged far behind the development of logic in artificially intelligent systems. The idea of image recognition, likewise, had barely been considered.
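The blocks world itself is simple to model; what made SHRDLU remarkable was the natural-language front end driving it. A bare-bones sketch of the world state that such commands might ultimately manipulate could look like the following, with block names and operations invented purely for illustration:

    # Map each block to whatever it is resting on ("table" by default).
    # Block names are arbitrary examples.
    world = {"red_cube": "table", "green_pyramid": "table", "blue_cube": "table"}

    def put_on(block, support):
        """Place a block on top of another block (or back on the table)."""
        world[block] = support

    def supports(block):
        """Answer the kind of query SHRDLU handled: what rests on this block?"""
        return [b for b, s in world.items() if s == block]

    put_on("green_pyramid", "red_cube")
    print(supports("red_cube"))       # ['green_pyramid']
    print(world["green_pyramid"])     # red_cube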

The United States government, in particular, saw potential in the development of AI. DARPA began funding major AI programs, such as the AI labs at MIT, Stanford, and Carnegie Mellon. In the early 1970s, one of the first major expert systems was developed: MYCIN, built by Edward Shortliffe at Stanford, took a patient's symptoms and test results as input and recommended antibiotics based on them. Doctors were initially skeptical of a computer program diagnosing such problems, but MYCIN's advice held up under evaluation. It did not merely apply strict yes-or-no rules; it weighed its conclusions with certainty factors to account for uncertainty in the evidence. In formal evaluations, later versions of MYCIN recommended treatments as well as, or better than, many medical professionals.
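MYCIN's rules attached certainty factors to their conclusions instead of hard yes-or-no answers, which is how it coped with uncertain evidence. A drastically simplified sketch of that idea follows; the findings, conclusions, and numbers are invented and have nothing to do with MYCIN's real medical knowledge base.

    # Each rule: (required findings, conclusion, certainty factor in [0, 1]).
    # All entries are purely illustrative.
    rules = [
        ({"fever", "stiff_neck"}, "infection_X", 0.7),
        ({"fever", "cough"}, "infection_Y", 0.4),
    ]

    def diagnose(findings):
        """Return candidate conclusions ranked by certainty factor."""
        findings = set(findings)
        matches = [(cf, concl) for cond, concl, cf in rules if cond <= findings]
        return sorted(matches, reverse=True)

    print(diagnose({"fever", "stiff_neck", "cough"}))
    # [(0.7, 'infection_X'), (0.4, 'infection_Y')]

The real system also combined certainty factors from multiple rules supporting the same conclusion, a detail omitted here.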

In the mid-1970s, criticism of AI mounted. In Britain, the 1973 Lighthill Report led the government to slash support, and research there ground to a near halt. DARPA's investments had not paid off as well as it had hoped, and it cut funding to the programs it had been supporting. Very few programs had produced results like MYCIN's that could justify continued investment.

Thus began a small dark age for artificial intelligence, the sort of period now often called an AI winter. Though research was no longer well funded, the tools and principles formed during the first twenty years continued to serve researchers well, and they carried on their work. It was clear that AI would have to prove its practical worth, as MYCIN had, and break into the business world if it were to gain respect as a serious field.

In the 1980s, AI began to be applied to mainstream business and industry. Expert systems were developed to advise workers in many fields. Bell, the telephone company, employed an expert system with a vast database of technical knowledge to analyze its telephone networks and propose repairs. Other expert systems drew on millions of data points to forecast weather. General Electric developed a system known as DELTA that diagnosed and helped to solve problems with diesel-electric locomotives. The great advantage of these systems was their enormous knowledge bases: as much human expertise about the problems they might face as could be gathered was programmed into them. The systems were still not as good as humans at drawing conclusions from those data, though.

Computer image recognition took root in the 1980s as well. It has proven extremely difficult to get computers to recognize even simple things like shapes, which human beings learn within the first few years of life, though systems that learn to do so are showing promise. Scientists have been experimenting with robots that can find their way through mazes and otherwise interact with their environment; Sony's AIBO, the robot dog, is a basic example of image recognition combined with the ability to learn.
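Navigating a maze is, at bottom, another search problem. A minimal sketch of the kind of grid search such a robot might run, over a small made-up maze, could look like this:

    from collections import deque

    # 0 = open floor, 1 = wall; a tiny invented maze.
    maze = [
        [0, 0, 1],
        [1, 0, 1],
        [1, 0, 0],
    ]

    def solve(start, goal):
        """Breadth-first search from start to goal, returning the path."""
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            (r, c), path = queue.popleft()
            if (r, c) == goal:
                return path
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                        and maze[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), path + [(nr, nc)]))
        return None

    print(solve((0, 0), (2, 2)))
    # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]

A physical robot, of course, has the far harder job of building such a map from noisy sensors in the first place.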

At the turn of the millennium, artificial intelligence systems have permeated our lives. The immense size of the World Wide Web has made Web search engines a necessity, and every search engine relies on at least the basic principles of artificial intelligence. One of the most popular, Ask Jeeves, goes a step further and analyzes questions posed by users. Speech recognition programs can turn spoken words into text quickly and accurately. E-mail filters protect users' inboxes from unwanted spam. Other systems analyze purchase patterns to help detect credit card fraud. All of this has come as a result of the development of artificial intelligence.

It can be argued that the greatest advances in AI are not the applications we can already see in the business world. Through all of the research done so far, the basic principles and tools for further work have been laid out. Semantic nets, neural networks, adversarial search, cognitive modeling, and various methods of learning have all been studied and now facilitate the development of new systems. At least three programming languages, IPL, LISP, and PROLOG, were created originally for AI work. PROLOG was developed in France and is a particular favorite of AI programmers outside the United States, where LISP and its dialects remain the main languages in the grand Scheme of things. These developments have truly set the stage for the next generation of artificial intelligence.
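Of those tools, the semantic net is perhaps the simplest to illustrate: concepts become nodes, and labeled relations link them. The tiny sketch below uses invented facts just to show the shape of the idea; research systems were far richer, but the underlying structure is the same.

    # A semantic net as (node, relation, node) triples; the facts are invented.
    net = [
        ("canary", "is_a", "bird"),
        ("bird", "is_a", "animal"),
        ("bird", "can", "fly"),
        ("canary", "has_color", "yellow"),
    ]

    def is_a(thing, category):
        """Follow 'is_a' links to see whether thing belongs to category."""
        if thing == category:
            return True
        return any(is_a(obj, category)
                   for subj, rel, obj in net
                   if subj == thing and rel == "is_a")

    print(is_a("canary", "animal"))   # True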
