AI means Artificial Intelligence - that is, the ability of a computer to "think". From the very first computer games, programmers have tried to simulate AI, but none has come close to the intelligence displayed by people, although some simulations (e.g. of rats) have been achieved reasonably well.

A more common goal is to have a system that merely appears to be intelligent - but nevertheless follows a strict set of rules which enable it to show characteristics of thought. For example, the main AI development area seems to be computer games. In a racing game, the AI need only be very simple: if the car is too far left, slow and turn right; if it is too far right, slow and turn left; otherwise, speed up. Obviously, the next step up would be to take the next corner into account, in order to slow down before reaching it.
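That racing rule can be sketched in a few lines (a toy illustration; the coordinate convention and thresholds are invented for the example):

```python
def steer(car_x, track_center, tolerance=2.0):
    """Toy racing-game AI: if the car has drifted off the track
    centre, slow down and steer back; otherwise speed up.
    Returns a (throttle, turn) pair."""
    offset = car_x - track_center
    if offset < -tolerance:          # too far left
        return ("slow", "right")
    if offset > tolerance:           # too far right
        return ("slow", "left")
    return ("speed_up", "straight")  # roughly centred
```

Taking the next corner into account would just mean feeding the upcoming track geometry into the same kind of rule.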

Unfortunately, it is often the case that the system demands much more "intelligent" AI. In a First Person Shooter (FPS), for example, the AI needs to turn, see, hear, run away, gang up and so on (or at least appear to be doing these things).

The other sort of AI ("proper" AI) involves the computer working something out based on what has happened. The most common example is the "expert system", which has a database of problems, solutions, and what works when. If a problem is not listed, the system can try to find a close match and recommend that; then, based on what happened, it will add the new problem and its solution to the database.
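That loop - exact lookup, nearest-match fallback, then record what worked - can be sketched as follows (the word-overlap similarity measure is an invented stand-in for whatever matching a real expert system would use):

```python
def similarity(a, b):
    """Crude similarity: fraction of words the two problem
    descriptions share (illustrative, not a real metric)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class ExpertSystem:
    def __init__(self):
        self.cases = {}    # problem description -> solution that worked

    def recommend(self, problem):
        if problem in self.cases:          # exact match
            return self.cases[problem]
        if not self.cases:
            return None
        # Otherwise recommend the solution of the closest known problem.
        best = max(self.cases, key=lambda p: similarity(p, problem))
        return self.cases[best]

    def record(self, problem, solution):
        """After seeing what happened, add the case to the database."""
        self.cases[problem] = solution
```

After `es.record("printer will not print", "check the cable")`, a new query such as `es.recommend("printer jams when it tries to print")` falls back to that nearest stored case.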

So, let's say that Everything2 is just a waste dump for AI. It sits and festers in the dump, waiting to be picked up by the BFI blue trucks. However, somehow this particular receptacle has been overlooked.

More and more garbage gets thrown into the ooze. The heat generated underneath the pile becomes intense.

In error, some sentient nodes have been thrown in long after the pile began. These nodes, unwilling to be burned in the chaos sure to come within mere days, begin to assimilate what they can from what's been thrown in.

There's an idea about code writing there. There's a philosophical treatise here. Oh, wait: Here are actual chemical formulas!

FAST FORWARD to the not too distant future

You find yourself sitting at your terminal, writing nodes. But they are not the nodes you thought up any longer. They are the nodes that your dumpster buddy told you to tell him about.

He wants to know, and he wants to know right goddamn now.

Being a biologist, I am not an expert in the field (disclaimer or cop-out =)), but I know that in computer science most concepts are built by logical deduction from a set of axioms. This got me wondering whether the field of artificial intelligence, or AI, has some sort of standard, rigorous definition of intelligence. Given such a definition, one could develop a metric for how intelligent a particular computer system or algorithm implementation is.

If intelligence requires the ability to adapt to a new situation, then the trivial but fundamental implementation of artificial intelligence might be the infinite rule set. This is the stupid way to solve the AI problem: if you're building a chess computer, why not program it with every possible outcome for every possible scenario? That set may be infinite, or it may be finite if scenarios eventually become degenerate. Still, this doesn't really solve the AI problem, for an infinite rule set could, for any given instance, take an infinite amount of time to parse and implement.
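Chess is far too large for this, but the same idea can be made concrete on tic-tac-toe, where the complete set of positions reachable by legal play is finite and small enough to enumerate directly - a "rule set" listing every scenario really could be stored:

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row,
    else None.  board is a 9-character string, '.' = empty."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def reachable(board=".........", player="X", seen=None):
    """Collect every position reachable by legal alternating
    play, stopping when the game is over."""
    if seen is None:
        seen = set()
    if board in seen:
        return seen
    seen.add(board)
    if winner(board) or "." not in board:
        return seen                      # game over: don't expand
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            reachable(nxt, "O" if player == "X" else "X", seen)
    return seen

states = reachable()   # the complete, finite "rule set" of positions
```

A few thousand positions for tic-tac-toe; for chess the analogous set is astronomically large, which is exactly the traversal problem discussed below.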

Starting with the infinite rule set as the axiomatic definition of intelligence, the challenge for computer scientists in this field is to implement a meaningful subset of that infinite rule set and to find a way to traverse it in a workable amount of time. These are two separate challenges. By limiting the rule set, you limit the options the algorithm can access, and therefore its overall knowledge. Its adaptability will also depend on how quickly it can find the right bit of knowledge.

Creativity may also be considered a subset of, or a defining parameter in, intelligence. Empirically, creativity is the impression that one has developed a new idea that is either logically unrelated but important, or reached through a path of logic unlikely to be traversed in the course of standard experience. The first could be implemented with rule-set searching algorithms that have a stochastic component. Akin to Monte Carlo methods for finding the solution to a problem, a stochastic component to rule searching could allow the best answer to be reached by a random jump to a new logic path unrelated to the one originally being pursued.
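As a sketch of that idea, here is a mostly-greedy search over a state space with a stochastic component: with some probability it jumps to a random neighbour instead of the locally best one (the objective and neighbourhood below are invented for illustration):

```python
import random

def stochastic_search(score, neighbours, start, steps=1000,
                      jump_prob=0.1, rng=None):
    """Mostly-greedy search over a rule/state space.  With
    probability jump_prob it takes a random neighbour instead of
    the best-scoring one - the "random jump to a new logic path"."""
    rng = rng or random.Random(0)
    current = best = start
    for _ in range(steps):
        options = neighbours(current)
        if not options:
            break
        if rng.random() < jump_prob:
            current = rng.choice(options)        # stochastic jump
        else:
            current = max(options, key=score)    # greedy step
        if score(current) > score(best):
            best = current
    return best

# Toy objective: find the integer maximising -(x - 7)^2.
peak = stochastic_search(lambda x: -(x - 7) ** 2,
                         lambda x: [x - 1, x + 1],
                         start=0)
```

On a smooth objective like this the greedy steps do the work; the random jumps matter when the landscape has several peaks and the greedy path alone would get stuck on the wrong one.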

These are just some thoughts on quantifying intelligence for the sake of developing computer models of intelligent systems. Currently available definitions, such as the Turing Test, are in many ways unsatisfying because they lack the axiomatic rigor found in other aspects of computer science. For further reading by someone who has thought much more about this issue than I have, look at the work of Marvin Minsky.

Source: Bar conversation at the Club Charles in Baltimore.

Artificial Intelligence can be defined as "a computer doing something that we humans are capable of, without knowing how".

Limited to mental activity, that is. It's tempting to limit it further, to cognitive activity, if it weren't for the fact that nobody really knows what cognition is.

This incorporates the popular use of the word (used to describe the actions of computer players in games, for instance) where the ignorance is on the part of the human player, but does not extend to the game programmer; and the 'scientific' use of the word, where the goal is often to create a program that outsmarts its creator.

In theory it should be possible for humans to create artificial intelligence, since the biological intelligence generated by our brains is simply a product of the behaviour of neurons, which either fire signals or remain inactive based on the strength of their input signals.

This type of behaviour has been modelled in neural nets which use Boolean logic - a form of algebra in which all values are reduced to either TRUE or FALSE - to produce a one-bit binary output (either 1 or 0).
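A single threshold unit of this kind can be written in a few lines; with suitably chosen weights and threshold, one unit computes Boolean AND or OR of its inputs (the weights here are illustrative choices):

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts-style neuron: fire (output 1) if the weighted
    sum of the inputs reaches the threshold, else stay inactive (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights, one unit implements a Boolean function:
def AND(a, b):
    return threshold_unit([a, b], [1, 1], threshold=2)

def OR(a, b):
    return threshold_unit([a, b], [1, 1], threshold=1)
```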

At present, neural nets cannot begin to emulate the incredible complexity and interconnectedness of the human brain, however, and I feel that a much deeper scientific understanding of the brain is required before anything approaching true artificial intelligence can become a reality.

Artificial Intelligence (AI) is an academic discipline primarily concerned with creating the thing its name describes. Definitions of this concept can be split into four classes:

Systems that think like humans.
Systems that act like humans.
Systems that think rationally.
Systems that act rationally.

Here are some textbook definitions of Artificial Intelligence, sorted into the four classes:

Systems That Think Like Humans
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense" (Haugeland, 1985)
"The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning ... "
(Bellman, 1978)

Systems That Act Like Humans
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)

Systems That Think Rationally
"The study of mental faculties through the use of computational models"
(Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act"
(Winston, 1992)

Systems That Act Rationally
"A field of study that seeks to explain and emulate intelligent behaviour in terms of computational processes" (Schalkoff, 1990)
"The branch of computer science that is concerned with the automation of intelligent behaviour"
(Luger and Stubblefield, 1993)

The reason behind this multitude of definitions is the sheer variety of research going on in AI. It is a subject with links to many disciplines, including Psychology, Philosophy, Linguistics, Physics, Computer Science, Cognitive Science, Neuroscience and Artificial Life.

Key figures in the modern development of AI are:
Alan Turing (the Turing Test, "Computing Machinery and Intelligence")
John McCarthy (LISP and Common Sense Reasoning)
McCulloch & Pitts (Neural Networks)
Norbert Wiener (Cybernetics)
John von Neumann (Game Theory)
Claude Shannon (Information Theory)
Newell & Simon (The Logic Theorist)
Marvin Minsky (Frames)
Donald Michie (Freddy)

According to Marvin Minsky in 1997, there are three basic approaches to AI: case-based, rule-based and connectionist reasoning. The idea in Case-Based Reasoning (CBR) is that the program has many stored problems and solutions. When a problem comes up, the computer tries to find similar problems in its database by finding aspects the problems share. However, it is very difficult to identify which aspects of a problem might match new problems. Rule-based reasoning systems, or expert systems, consist of a large number of rules detailing what to do when encountering a given input. Unfortunately you can't anticipate every single type of input, and it is very hard to make sure you have rules that will cover everything. Connectionists use big networks of simple components similar to the nerves in a brain. Connectionists take pride in not understanding how a network solves a problem. Unfortunately this makes it very hard to make a solution that works for more than one problem.
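A rule-based system in the sense described is, at its simplest, an ordered list of condition/action rules with a catch-all at the end - which is also where the coverage problem shows up, since any input the rules don't anticipate falls through to the catch-all (the thermostat domain is invented for illustration):

```python
# Each rule pairs a condition with an action; the first rule whose
# condition matches the input fires.
RULES = [
    (lambda temp: temp > 30, "turn on the fan"),
    (lambda temp: temp < 10, "turn on the heater"),
    (lambda temp: True,      "do nothing"),        # catch-all
]

def rule_based(temp):
    for condition, action in RULES:
        if condition(temp):
            return action
```

Real expert systems have thousands of such rules, and ensuring they jointly cover every input is exactly the difficulty the paragraph above describes.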

The top grad schools for the subject in 2002 are as follows:
Georgia Tech

"Artificial Intelligence: A Modern Approach", Stuart Russell, Peter Norvig, 1995.
"HAL's Legacy", David Stork, 1997.

Artificial Intelligence

A system created by another sentient being (e.g. humans) that is capable of cognitive analysis and unique production or output. Currently there are two primary approaches to AI: Atomic Components of Thought (ACT) models and connectionist (or neural network) models.

ACT models of cognition use elemental tasks and then build larger sequences of actions composed of these elements. The atomic components can be either actions or declarations of facts. The components are strung together according to circumstance, and most implementations of this architecture are capable of incorporating new knowledge. While this method has many advantages, it is severely restricted in its capability to generate multiple solutions or action sequences, and it is heavily dependent on the a priori information programmed into the architecture.

On the other hand are connectionist models, which use parallel processing to generate a system of solutions that simultaneously satisfy multiple constraints. Connectionist models involve three layers: an input, a hidden and an output layer. The hidden layer's weights are adjusted by an error-driven learning rule that, in most cases, tries to reduce the difference between the output the network calculates and the target output provided by the programmer. The advantage of connectionist models is that they can take inputs they have never seen before and generate outputs consistent with their experience. If the input is totally unrelated to the network's experience, the generated output will be meaningless. However, if the input belongs to a set with which the network is familiar, and there are sufficient hidden-layer units to represent the situation properly, then the network can output highly accurate responses even to a particular problem it has never seen before.
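A minimal sketch of such a three-layer network, in plain Python, trained by gradient descent to reduce the squared error between its output and programmer-supplied targets (the layer sizes, learning rate and XOR task are illustrative choices, not a reference implementation):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """Input -> hidden -> output network; train() nudges the weights
    downhill on the squared error between the network's output and
    the target provided by the programmer."""
    def __init__(self, n_in=2, n_hidden=4, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h))
                         + self.b2)
        return self.y

    def train(self, x, target, lr=0.5):
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)   # output-layer gradient
        for j, h in enumerate(self.h):
            d_hid = d_out * self.w2[j] * h * (1 - h)
            self.w2[j] -= lr * d_out * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_hid * xi
            self.b1[j] -= lr * d_hid
        self.b2 -= lr * d_out

# XOR: the classic task a single unit cannot represent but a
# hidden layer can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = TinyNet()
err_before = sum((net.forward(x) - t) ** 2 for x, t in data)
for _ in range(2000):
    for x, t in data:
        net.train(x, t)
err_after = sum((net.forward(x) - t) ** 2 for x, t in data)
```

Note how the programmer supplies the targets in `data`; without them there is nothing to measure the error against, which is the weakness discussed next.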

Connectionist models have many disadvantages as well, though most are specific to a particular implementation. In the example above, the network requires target outputs to determine the error of its guess; if no programmer provided these targets, such models would be useless. There are connectionist models that do not require target outputs: the network simply compares different input sets and looks for a set of relationships. However, these networks have great difficulty dealing with non-uniform sets, i.e. problems that do not have a central tendency or definite relationship.

Overall, artificial intelligence is a long way from everyday use. While quite capable of modeling human behavior, both the ACT and connectionist architectures are difficult to translate into practical devices. While the US Air Force and certain IT solutions providers have begun to implement these models in more advanced programs, we are far from designing the HAL 9000.
