In the Jumbo program, described by Douglas Hofstadter in Fluid Concepts and Creative Analogies, there was a very interesting finding: the program, which simulated how human intelligence finds analogies within the restricted domain of the alphabet, was able to come up with surprisingly deep analogies. For instance, given the analogy ABC:XYZ, the string AABC has several possible counterparts; XXYZ was the one most frequently picked by people given this puzzle, but a better answer, XYZZ, was picked by some of the participants, reflecting the relationship between A as the first letter and Z as the last. The program does not find this answer as frequently as it finds XXYZ, but when it does, it recognizes it as the better one.

The question that remains is: can the computer only come up with insights that are hidden by the prejudice of the operator, so that the connections the computer makes merely mirror what we as the programmers already knew, or can the computer actually think?

Several pieces of research seem to hint that any "insight" a computer has (until now, at least) is a function of its input. Certain analogy programs from the mid-80s to mid-90s (SME and ACME) that were supposed to be incredibly gifted at finding deep relationships are clear examples of this phenomenon. One celebrated result, an analogy between the flow of liquid from a full container into a less full container and the flow of heat from hotter to colder objects, shows this "intuition" mirroring its input. The input was essentially a graph with labeled vertices and edges; the computer matched vertices and edges that shared a name and then worked out how the other points on the graphs were related. In the example above, the edges on the two graphs labeled "flow" were matched, as were the edges labeled "more," corresponding to volume and heat, respectively. The claim that the program "understood" the relationship was clearly ridiculous, and in fact counterproductive, as the entire program was an application of a very important idea in pattern recognition, but only peripherally useful in AI research.
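As a rough illustration of how mechanical this label-driven matching is, here is a minimal sketch in Python. The graph encoding and the vertex and edge names are invented for illustration; this is not SME's or ACME's actual representation.

```python
# Two situations as graphs: edges are (label, from_vertex, to_vertex).
# Matching proceeds purely by shared edge labels, as described above.
water = {("flow", "full_vessel", "empty_vessel"),
         ("more", "full_vessel", "empty_vessel")}
heat = {("flow", "hot_object", "cold_object"),
        ("more", "hot_object", "cold_object")}

def match(base, target):
    """Pair edges with identical labels and record the induced vertex pairings."""
    pairs = {}
    for (label_b, a1, b1) in base:
        for (label_t, a2, b2) in target:
            if label_b == label_t:        # label identity alone drives the match
                pairs.setdefault(a1, a2)
                pairs.setdefault(b1, b2)
    return pairs

print(match(water, heat))
# {'full_vessel': 'hot_object', 'empty_vessel': 'cold_object'}
```

The "analogy" falls out of nothing more than string equality on the labels the programmer chose, which is precisely the point made above.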

More interesting is a program named TR that was intended to do "mathematical research" given only the concept of a set and ideas such as recursion and uniqueness. It managed to "discover" addition, multiplication, exponentiation, and even prime numbers and several more recent ideas in number theory. This was, of course, with frequent weeding out of unproductive ideas; but more importantly, once it reached the point where the concept of a set no longer implied the types of operations that could be performed, it ran out of ideas. It seems that enough of the program's parameters were influenced, unsurprisingly, by the way modern mathematicians think about sets and number theory that it was really only following the path laid out for it. The program cannot do more than that, because the programming was done by people who had no new methods to introduce to mathematics; their program was simply a confirmation that the methods used in normative mathematics are a natural development of the particular interests the program was given.
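Since TR's actual machinery isn't described here, the following is only a toy sketch of the general idea: richer operations built by composing a single primitive, with an "interesting" property (primality) emerging from those operations rather than being programmed in directly.

```python
# Toy sketch (not TR's actual mechanism): compose a lone primitive into
# richer operations, then notice a property of the numbers they produce.
def succ(n):                       # the lone primitive: successor
    return n + 1

def add(a, b):                     # "discovered" as repeated succession
    for _ in range(b):
        a = succ(a)
    return a

def mul(a, b):                     # "discovered" as repeated addition
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def is_composite(n):               # n is a nontrivial product under mul
    return any(mul(a, b) == n for a in range(2, n) for b in range(2, n))

primes = [n for n in range(2, 20) if not is_composite(n)]
print(primes)   # [2, 3, 5, 7, 11, 13, 17, 19]
```

Note how the path from successor to primes is laid out in advance by the choice of primitives and the "interestingness" test, echoing the point above about the program only following the path laid for it.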

It would be interesting to see what would happen with a similar program given a relatively newer field, to see if it came up with any new ideas, or even filled in gaps where no significant work had yet been done. It seems that Artificial Intelligence in that type of situation would be able to flesh out details that people have not yet used established procedures to find. However, the field of artificial intelligence is, as of yet, unable to produce whatever it is that is unique about intelligence: that a person can actually understand something more than what they were given originally.

This writeup is in part a response I had been formulating to Gartogg's writeup in The Failure of Artificial Intelligence. Whilst I would agree with his assessment that we need to be able to recognise what constitutes intelligent behaviour in order to implement it within a machine, I take issue with the stance that this need be a precise algorithmic description of how to carry out that behaviour if we are to implement it. Rather, I hope to show that an appreciation of what constitutes intelligent output can suffice for its recreation within a machine: that we can model processes we do not understand so long as we know what we expect them to do. Ultimately it is the inability even to identify such a fitness-for-purpose criterion that holds AI back; but the gains that have been made through more flexible approaches to computing mean that AI has not really failed. Given this more optimistic stance, this node seems a more appropriate home for my response, and as Everything is not a BBS I place it here rather than append criticism to the end of the writeup I intend primarily to discuss.

To start, I would like to draw a distinction between AI the field and AI the product; that is, to point out that AI research is not solely or even primarily focussed on the creation of human-like intelligences in a computer-based medium. Such a development of strong AI would indeed be a triumph for the AI field, but there are reasons why it might not be a desirable or feasible direction to work in, which I shall attempt to outline later. When AI is popularly discussed, it tends to be in relation to this sci-fi vision of anthropomorphic intelligent machines, and I readily accept that such a level of achievement hasn't occurred and that this could be considered a failure. What I do not see it as is a failure of the field as a whole. Thus whilst arguing against Gartogg's critique of Artificial Intelligence, I do so in the context of AI research as a whole. This may be seen as dodging the 'real' issue (strong AI), but I would see that as a changing of definitions to suit a conclusion: defining intelligence as behaviour we have been unable to replicate in machines will obviously lead to a conclusion that artificial intelligence has failed. In fact, this is a common problem for AI projects: once a behaviour has been successfully recreated in a machine, the perception of its value as a measure of intelligence is diminished. Gartogg observes that

"many previously "intelligent" actions are now routinely performed mechanically by computers: pattern recognition, mathematical proofs, even playing chess."

and later argues that

"the real failure of Artificial Intelligence is twofold; it is merely an application of previously understood ideas in general algorithmic computer science, and has done nothing truly new. Secondly, in a very real sense, it is completely goal-less, and therefore unable to succeed at defining the phantoms it chases."

These to me seem at odds. If an action that is described as intelligent in a human is performed by a computer, why should it cease to be an intelligent action? Philosophical arguments can be made regarding the difference between performing an action and having a conscious awareness of it, or even an intrinsic comprehension; see for example Searle's Chinese room. However, to try and avoid these concerns, I will use the following definition of intelligence (from "Artificial Intelligence" by Blay Whitby [1]):

"Artificial Intelligence (AI) is the study of intelligent behaviour (in humans, animals and machines) and the attempt to find ways in which such behaviour could be engineered in any type of artifact"

Obviously not all AI researchers would agree upon such a definition (Blay is a lecturer in Cognitive Science and AI; a robotics lab would probably have a very different vision) but it gives a suitably broad basis to work from. Certainly it is no fairer to brand AI goal-less than, say, mathematics: it is impossible to obtain all mathematical knowledge (in fact, there are statements that can be proven to be undecidable in given mathematical systems) yet within many subfields advances can be made against specific questions, either for practical purpose or simply for intellectual satisfaction. It would seem strange to suggest that this inability to state or achieve a true goal for mathematics renders the subject a failure; yet this is essentially the complaint being made about AI. Personally, I would argue that AI as a field has the opposite problem: there is a candidate for an ultimate goal, the 'phantom' of strong AI; yet this goal should be recognised as being potentially as unobtainable as a complete grasp of mathematics, and instead efforts these days tend to be concentrated on smaller (but no less worthwhile) projects.

In the paper Intelligence Without Representation, Rodney Brooks (Director of the MIT AI lab) gives an illustration of how trying to emulate human levels of intelligence at this early stage may be foolhardy. He suggests considering a group of scientists from 1890 who are trying to create artificial flight. If they are granted (by way of a time machine) the opportunity to take a flight on a commercial 747 then they will be inspired by the discovery that flight is indeed possible- but the lessons they learn from within the passenger cabin will teach them more about seats and cupholders than the underlying aerodynamics. Indeed, seeing that something as massive as a 747 can get off the ground could have a seriously negative effect on designs they then formulate back in their own time, oblivious to advances such as aluminium or plastics and instead assuming that any weight can be lofted into the air. Even if they got a good look under the hood, a turbofan engine would be essentially incomprehensible. So it is with the human mind- whilst an inspiration that intelligence is indeed obtainable, direct emulation of so advanced a system would be counterproductive, a case of trying to fly before we can walk.

The second 'failure', then, I would discount; but what of the first criticism: that AI has done nothing truly new beyond the application of existing algorithmic computer science? Hopefully it should be clear from Blay's formulation of what constitutes AI that if algorithmic computer science yields intelligent behaviour in a computer, then it is AI, not proof of its failure. So even if the criticism of unoriginality held, it wouldn't imply a failure of the field. Despite that, I believe it to be untrue in general: the development of intelligent behaviour through at least two methods, neural nets and genetic algorithms, is at odds with the algorithmic approach and has generated results not just in the commercial software arena but in other sections of science.

In general, to solve a problem with an algorithm requires an encapsulation of the problem at an atomic level: we can solve problem X by working through steps Y. However, there are many problems that we haven't devised algorithms for, or which do not lend themselves to such a formulation: how can we codify intuition? What exactly are the defining features of an a compared to an o when carrying out handwriting recognition? (In my case, they're virtually interchangeable!) Despite this, we know the answer when we see it: if a machine can consistently make the same diagnosis as a doctor, even if that doctor has no idea how they ultimately made it, then we can place just as much faith in its diagnostic skills. We might like to know how it does it, and perhaps even more so how the doctor does it, but if the behaviour is appropriate then we have a successful implementation of AI. The handwriting example is even simpler: if the machine can output the correct ASCII text for a given scribble, it's achieved the goal of recognition.

How to go about creating such a 'black box'? In conventional CS, you'd need to devise an algorithm, but once created this need merely be implemented in the language/device of your choice. Understanding of the algorithm is understanding of the problem. However, with neural nets, the implementation is the solution. We can expose a net to a given input and, through feedback processes, adjust its response until its output matches the desired output. Then we can hope for graceful degradation, namely that the net gives a reasonable output when presented with non-typical input (rather than simply returning an error as an algorithm might). This output may not be right (after all, a human could misread my writing and we wouldn't doubt their intelligence as a result) but it should be close: reading a as o is ok, reading it as z seems dubious. Further tuning of the system can hence be used to refine the quality of its output, even without an understanding of just how it gets there. As with the human brain, removing particular nodes doesn't correspond to identifiable failures (e.g. a constant inability to read an i) but rather a degradation of performance (more errors per string). Contrast this with the effect of pulling a line of code out of an algorithm, which is likely to be disastrous.
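As a minimal sketch of this feedback-driven tuning, here is a single-layer perceptron trained on two entirely made-up six-pixel 'letter' patterns (no real handwriting data or library is involved):

```python
# Made-up 6-pixel patterns standing in for handwritten letters.
A = [1, 1, 0, 1, 0, 1]   # "a"
O = [1, 0, 1, 1, 1, 0]   # "o"
w = [0.0] * 6
bias = 0.0

def predict(x):
    s = bias + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else 0   # 1 = "a", 0 = "o"

# Feedback loop: nudge the weights whenever output disagrees with the target.
for _ in range(20):
    for x, target in [(A, 1), (O, 0)]:
        err = target - predict(x)
        bias += 0.1 * err
        for i in range(6):
            w[i] += 0.1 * err * x[i]

noisy_a = [1, 1, 0, 1, 1, 1]   # the "a" pattern with one pixel flipped
print(predict(noisy_a))        # 1: still read as "a" despite the noise
```

Flipping one pixel still yields the correct class: a small instance of the graceful degradation described above. Nothing in the final weights "explains" what an a is; the behaviour was shaped, not specified.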

Continuing this theme are genetic algorithms. Here we are again concerned with results, not methods. Given a fitness criterion, solutions can be ranked in their ability to solve a problem. Appropriate mingling of the most successful solutions should yield a new set of solutions, some better than any we previously had, others worse. Repeated iteration gives us a suitable solution without ever figuring out what makes the problem tick; this is akin to gaining rules of thumb through practical experience rather than formal study. Often genetic algorithms find solutions entirely different to the type that a conventional algorithmic approach offers, which is unsurprising, as the mathematically rigorous solutions that appeal to a mathematically-minded designer aren't necessarily the only intelligent solutions. Quite often, they find ways to cheat that could be considered ingenuity, or more likely abuse of factors we haven't taken into account and which cannot always be depended on. For example, given a set of photos of men and women, we might seek a system that can tell one gender from the other. If, however, all the pictures of the men were taken in a different room to those of the women, the most efficient solution would be to recognise the decor rather than facial features. As soon as you supplied pictures from the outside world, you'd run into problems. But the system hasn't necessarily failed in the task of distinguishing the pictures; it just uses different reasons to those desired.
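A toy genetic algorithm along these lines can be sketched in a few lines. The problem (maximise the number of 1s in a bit-string, known as OneMax) is chosen purely for brevity; note that the only thing we specify is the fitness criterion, not how to solve the problem.

```python
import random
random.seed(1)

LENGTH = 12
def fitness(bits):                 # the only knowledge we supply
    return sum(bits)

def crossover(p1, p2):             # mingle two successful solutions
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):       # occasional random variation
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
for _ in range(60):                # repeated iteration toward fitter solutions
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == LENGTH:
        break
    parents = pop[:10]             # keep the fittest as-is (elitism)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

print(fitness(max(pop, key=fitness)))
```

At no point does the code contain a statement like "set all bits to 1"; the solution emerges from ranking and recombination alone, which is exactly the results-not-methods stance described above.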

But if we have these methods for getting intelligent behaviour, why can't we just keep adding to the net or running the algorithm until we evolve a strong AI system? After all, we believe that evolution has produced at least one intelligent 'machine': us. However, it seems that the biggest problem is just that: scalability. We have seen that AI methods have allowed us to tackle problems which, despite a lack of an algorithmic understanding, we can declare to be solved either successfully or not. It is this fitness-for-purpose criterion which is both AI's strength and weakness: we can use it to generate more forms of intelligent behaviour than algorithmic computer science alone does, but it does not offer us the complete set. The problem is that there is no single fitness criterion for intelligence. When presented with a passage from Shakespeare, what should our system do: count the words, compare the spelling to today's, examine the meter, or write an essay about its relevance to modern life? All are varying forms of intelligence (and all might get thrown at you during an English lesson), yet quantifying how intelligent such an action is, and hence refining different solutions against that benchmark, seems impossible. We might be able to implement all of them given time, but getting them to play together nicely in a system will probably need another level of insight, akin to the leap from algorithms to less predictable but more flexible methods such as neural nets. Ultimately, if we want to recreate human intelligence, we may well need to understand ourselves first. This too is a goal of AI research, and for many is the end goal: not to recreate ourselves, but to know just what it is that makes our intelligence special in the first place.

In conclusion, then, I wouldn't argue that AI has failed either to advance our ability to produce intelligent behaviour in devices other than ourselves, or to build upon the foundations of standard computer science. There have been many remarkable solutions to problems, and the methods used to solve them are of interest in themselves. Often they have turned out to give greater insight into other fields (such as the use of neural nets for modelling the activity of human brains) or to highlight questions that we need to ask about ourselves, tying together ideas about science, mathematics and philosophy. These solutions have thus far been limited both in their scalability and their interaction with each other- which isn't helped by deep divisions within the AI community as to which methods are best; divisions which are ultimately pointless if it turns out that all these ideas need to be applied together to create a superior whole- but this should not diminish the results that have already been seen.

  1. "Oneworld Beginner's Guides- Artificial Intelligence": Blay Whitby, ISBN 1-85168-322-4
  2. "Intelligence without representation": Rodney A. Brooks, referenced in the above book and also available at

I would like to expand upon the final sentences of WntrMute's excellent write-up by emphasising an often overlooked point, raised in these concluding remarks:

These solutions have thus far been limited both in their scalability and their interaction with each other- which isn't helped by deep divisions within the AI community as to which methods are best; divisions which are ultimately pointless if it turns out that all these ideas need to be applied together to create a superior whole- but this should not diminish the results that have already been seen.

One failing of Artificial Intelligence which is slowly being recognised by the field is that most techniques have been developed in isolation (e.g. planning, vision and learning). Researchers typically separate a sub-problem off from the uber-problem, then develop a novel method of solving it. Problems are picked based on current fads and trends in AI (e.g. has anyone tried solving this with evolutionary computation yet?), the likelihood of funding bodies being interested in the technique or problem (e.g. can we use it to blow things up?), or a variety of other reasons. This approach was initially taken by researchers because the big problem was too big. It has been sustained because of a messy mixture of tradition, laziness, intellectual bigotry, and of course because the big problem is still too big.

Creating an artifact with the intelligence of a four-year-old will require the application of not just one AI technique, but a whole interacting cognitive architecture of them. Every module in this architecture must work together with the other modules, providing and receiving information. A great example of this is the task of vision. An artificial vision system must work not only from the bottom up (recognising edges, surfaces etc.) but from the top down (looking for particular objects, resolving conflicts in ambiguous images). Knowledge used for the top-down parts of vision may come from memories of the scene being viewed, expectations of what should be seen, priming, and other sources. Not only must an artificial vision system provide information about the artifact's surroundings to some core processing modules, it must also give feedback to any effectors possessed by the artifact, and provide early warnings of danger (possibly unconsciously).
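To make the idea of asynchronous, interconnected modules concrete, here is a deliberately tiny sketch using message queues. The module names, messages and pathways are invented for illustration; a real cognitive architecture would be vastly more involved.

```python
import queue
import threading

percepts = queue.Queue()    # vision -> deliberative core (slow pathway)
warnings_ = queue.Queue()   # vision -> reflexes, bypassing the core (fast pathway)

def vision(raw_frames):
    """Bottom-up pass that also raises early warnings before full processing."""
    for frame in raw_frames:
        if frame == "looming_object":
            warnings_.put("duck!")        # fast, 'unconscious' route to effectors
        percepts.put("seen:" + frame)     # slower route to core processing

# Run the vision module in its own thread, as one component among many.
t = threading.Thread(target=vision, args=(["wall", "looming_object"],))
t.start()
t.join()
```

The key point the sketch captures is that a single module feeds several consumers at once, on different timescales, rather than computing one output from one input.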

As the above example shows, a module in a cognitive architecture must do more than process a single input to provide an output. It must function asynchronously and in parallel with many other components. Its inputs are dependent on other such modules, and its outputs may feed back into these whilst affecting the behaviour of yet more modules.

The majority of AI research in the past 30 years or so has ignored this. This is a major reason why no single technique has really made strides towards solving the problem AI is famous for. It is easy to highlight the limitations of AI when every solution to a small problem is viewed as an attempt at solving the big one. That said, the failure to address architectural issues is a very real limitation on the state of the art, and one that must be addressed for the field to start making real progress.
