Searle's "Chinese Room" argument tries to make the point that machines are only capable of manipulating formal symbols and are not capable of real thought or sentience. He is using this as a rebuttal to the Turing Test and others.

Searle says to imagine an English-speaking American enclosed in a room. In this room, he has thousands of cards with Chinese characters printed on them. There are Chinese "computer users" outside feeding him input, in Chinese, through a slot. The person inside, in addition to the cards, also has an instruction booklet written in English telling him how to put together Chinese characters and symbols suitable for output.

So the person in the "Chinese Room" does this and uses his instructions to produce output that makes sense to the Chinese speakers on the outside. But that person still does not know Chinese himself! He has just manipulated symbols in accordance with instructions given to him by a "programmer". He has no idea what the input or output means.

The rebuttal

While the person inside the box may not speak Chinese, the entire system does!

The rebuttal to the rebuttal

The entire system is only as smart as the instructions, and it has no capability to learn! Because in order to learn, you must be able to modify the instructions - and to modify the instructions you need to understand them.

Learning is the essence of what makes us sentient, and when we learn we modify our instructions, so to speak. Suppose someone fed this system a novel sentence in Chinese. First of all, could it even read the sentence? It's easy enough to write out Chinese sentences by following rules, but reading them is much harder.

Second, suppose the "instructions" could read the sentence: could the system act on what it says? Remember, the instructions have no prior programming to deal with that sentence, and they can't translate it into English for the American to think about - that's cheating.

In short, unless computers ever manage to rewrite their own programs, they could never be considered sentient, because they would not have a true capability to learn.

The rebuttal to the rebuttal to the rebuttal

Searle's formulation of the problem neither states nor implies that the cards with the instructions can't have rules for rewriting themselves. To take a slightly different tack, the claim that "to modify the instructions you need to understand them" is absurd on its face: it's like claiming that in order to mutate, you have to know what all the genes do.

To claim that learning is the essence of what makes us sentient is to claim that people who have stopped learning have stopped thinking: if I'm a grandmaster in chess and I'm playing against an unrated amateur, does that mean I'm not thinking?

The argument about whether reading, writing, or translating is more difficult assumes that you know how to do it: if you don't, perhaps it's much simpler than you give it credit for. So that point is lousy.

The final point is the most absurd of all: neural networks exist that, in effect, rewrite their own programs as they train, and they display learning in certain situations. The point also presupposes that the symbols manipulated by a hypothetical program deal primarily with the intended output of the program: imagine that only one card in a million actually involved speaking Chinese, and that the others were formal symbols for manipulating and modifying the cards. You need not inject any new learning into the system at that point.
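To make the neural-network point concrete, here is a minimal sketch in Python (my own illustration, not anything from the original write-ups): a toy perceptron that adjusts its own weights - its "instructions" - purely from examples, with no component that understands what is being learned.

    # Toy perceptron: the update rule blindly nudges the "instructions";
    # nothing here "knows" what the AND function means.
    def train_perceptron(samples, lr=0.1, epochs=20):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Learn the AND function from examples alone.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_perceptron(data))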

It's also helpful to remember that, when he wrote this, Searle also claimed that a computer would never be able to compete with the best players of chess in the world, and would certainly never defeat them. His argument boiled down to the belief that to play chess, you needed to think strategically, not just tactically, and that you need to plan and then execute those plans. Well, he was wrong about that, apparently.

My own take on this is that if the system has enough complexity developing from simple elements, a la John Conway's Game of Life, the program might be able to think. But it wouldn't know that it was thinking, because the symbols would be doing the thinking.
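For the curious, here is a minimal Game of Life step in Python - an illustration of the "complexity from simple elements" point above, not anyone's actual model of thought. A glider wanders across the grid even though no single cell knows anything.

    from collections import Counter

    def step(live_cells):
        """live_cells is a set of (x, y) tuples; returns the next generation."""
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next step if it has 3 neighbours,
        # or 2 neighbours and is alive now.
        return {
            cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)
        }

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same glider shape, shifted one cell diagonally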

Rebuttal to the rebutt... (sigh)

I suppose the place to start is in response to fondue. While it is true that the system (the Chinese room) doesn't understand what it is doing, this does not rule out sentience of the system itself. My brain is a system of cells performing lots of stupid (as in non-intelligent, x86-computer-reproducible) jobs, and the way these jobs are performed does not vary; cellular processes are constant. However, I am sentient, and I am only the sum of my neurons...

It's an interesting debate. I am sentient, which is to say self-aware, and yet I am really only this collection of dumb cells. The two ways out of this that I can see are:

The non-materialist solution to the dilemma: souls exist, and I had durned well better start going to mass again.

The materialist answer: all matter and all possible combinations/collections of matter are self-aware in the same way that I am.

The second possibility is more interesting to me. If we assume that there is nothing in the universe beyond the rules of physics and that all matter is essentially the same (i.e. one lump of carbon has as much "soul" as the next, by which we mean zero), then there is no good explanation of why the brain inside my body is self-aware if it is merely a complex system of matter. If there are no properties beyond the properties of physical matter, as materialism would hold, then there is no reason why my brain should be conscious/sentient and a rock should not.

Thus, either all systems are sentient, or none of them are. I'm sentient... so I guess, according to the materialist perspective, so is everything else, and so are all possible combinations of matter in the universe if considered as systems. Eek. I guess this means that some interpretation of the universe knows about it every time I download pr0n.

However, as raised in NSA's post, it may be that while everything is conscious, only some systems are sapient. Could rocks be self-aware, yet entirely unable to give responses to questions, other than sitting there and being hard?

General put-down #2

These hypothetical situations remind me of a common, everyday-type scenario which has to do with what it means to understand something. When you count, for instance, your footsteps, you do not understand the numbers. You are probably only mentally manipulating the words, and perhaps the symbols, for the numbers. If you truly *understood* the numbers, you would be able to give the number as easily in, say, octal, as in decimal. So do you know the number of steps you have walked? Or only the name and the symbol for the number in your native language and native script? I have tried counting my steps in Japanese, a language foreign to me, and noticed that I was quite often merely reciting number WORDS in the language without thinking of the numbers represented by them. I mention that it was Japanese because if I had been counting in, say, German or Polish, the number words would have been nearly identical to English for the purposes of this experiment and thus would have proven nothing.
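As a concrete illustration of the octal point (my example, not the original author's): to restate a count in another base you have to operate on the quantity itself, by repeated division, rather than on its decimal name.

    def to_octal(n):
        """Convert a non-negative count to octal by repeated division by 8."""
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 8))  # remainder is the next octal digit
            n //= 8
        return "".join(reversed(digits))

    print(to_octal(1000))  # 1750: a thousand footsteps, renamed in base 8
    print(oct(1000))       # Python's built-in agrees: '0o1750'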

When you multiply largish numbers, you almost certainly do not understand what you are doing. The only reason 6*7=42 looks better than 6*7=38 is memorization. If you had not memorized addition and multiplication tables, they would both look equally OK. You do not really multiply the numbers, yet you say you do. The Chinese room experiment is redundant. Do not ask about the Chinese room when you have something better to work with.

The black box argument is probably the best rebuttal to the "Turing Test", which basically states that since consciousness is subjective, the only way we can know whether a computer is intelligent is to ask it questions. If its answers can't be distinguished from those of a human, the computer has to be considered capable of thought, and by extension sentient.

The philosopher John Searle's counter-argument ran like this: Suppose I'm locked in a black box with two slots in it marked "Input" and "Output." Pieces of paper with black squiggles on them are periodically shoved through the Input slot. My job is to look up the squiggles in a rule book I've been given and shove pieces of paper marked with other black squiggles through the Output slot as the rule book directs.

Unbeknownst to me, the black squiggles are Chinese characters. Outside the black box, scientists have been inputting questions in Chinese, and I've been sending back Chinese responses. My answers have convinced the scientists that the black box understands Chinese. But I don't understand Chinese at all! So how can a computer, which operates in the same way, be said to understand Chinese--by extension, to think?

Proponents of the Turing Test replied that although the person in the box doesn't understand Chinese, the system as a whole (the person + the rule book + the box) does. Nonsense, replied Searle. Suppose I memorize the rule book and dispense with the black box. Now I constitute the whole of the system. People hand me symbols; I respond with other symbols based on the rules. I appear to understand Chinese, but I don't. I merely display a facility with Chinese syntax. Chinese semantics, the essence of thought, eludes me. Just so with computers.

The conclusion of the black box argument is that although artificial intelligence may be possible, it can't arise from computers as they are currently understood. There's strong evidence that computers and the brain are fundamentally different. Nobody really knows how consciousness arises, but it seems evident that there's more to it than just computer programs.

The human ability

Ah, that indefinable quality of the human mind. Consciousness. Intelligence. Awareness. Call it what you will, define it how you will, one thing is sure: fully functional people have it and pocket calculators don't. And our intuition is that it is vitally important, that it is an essence of what makes us us.

The ancients believed that the heart was central. It still is, in language and metaphor. But people think by virtue of one organ. The brain. And the brain thinks by virtue of?

In the body, the brain. And in the brain?

Let us first observe that any explanation of consciousness that involves a consciousness, a homunculus inside the system, has not actually explained anything, just moved the problem. That is, any real explanation of consciousness must explain it in terms of component parts that are not themselves conscious.1

The brain is made up of neurons. Each component neuron is not conscious, yet the brain is. Each neuron is a living cell, made up of atoms and molecules. Each component molecule is not alive, yet the cell is.

You have a choice here, and it's one of those speared-by-the-horns-of-a-dilemma type choices: either you accept that, contrary to intuition, life is an emergent property of atoms assembled into hierarchical interacting structures, with no other added ingredients, or you believe in some kind of mystical life force. Either you accept that consciousness is an emergent property of neurons assembled into hierarchical interacting structures, with no other added ingredients, or you believe in some kind of "think force".

That is, either you buy into the project that has served us so well these last few hundred years, based on the assumption that everything is, in principle, explicable and reducible, or you should go back to your medieval hut, chant mumbo-jumbo and make the sign of the cross at supernatural, inexplicable things that go bump in the night.

And if consciousness is just matter moving in patterns according to rules then in theory it could be done in silicon. Or, at a push, by a philosopher with an instruction book and a really big jotter pad.

The philosopher’s error

We should expect the real story of consciousness to be difficult and counterintuitive. If it were easy, we would have cracked it by now. 2

John Searle's Chinese room thought experiment is a bunch of hand-waving designed to try to convince us that we are not just matter - that is, that consciousness, in the sense that a human being has it, cannot be an algorithmic process running upon inanimate hardware 3.

What surprises me is that so many people fall for it. This, I guess, is because it reinforces their prejudices. And because they don’t examine the alternative too closely. If consciousness needs to run upon some conscious hardware, then what makes this hardware conscious? Does it too have some special "consciousness" embedded in it? I'm afraid it looks like hand-waving all the way down.

Taking this intuitively appealing stance lays you open to the question: "So how do people do it? Are we not assembled by biology out of the basic building blocks of matter? What, if anything, makes us special?"

Back to that dilemma: if cunningly assembled matter can think, then Searle's Chinese room is bogus without even going into details, as the Strong AI postulate is true. If you hold that matter cannot in itself have a mind without some special essence, divine spark, soul, call it what you will, then you are living in the dark ages.

Despite this, many still hold the belief that Searle's Chinese room thought experiment constitutes some kind of proof that what a mind does cannot be reduced to an algorithmic process carried out by dumb, unliving, unconscious atoms.

The Chinese room as a proof that machines cannot be conscious is an example of The philosopher's error, 4 that is, "mistaking a failure of the imagination for an insight into necessity". This is a line of argument that proceeds "I can't see any way how x could be y, therefore x is not y". Or, in this case, "I can't see any consciousness here, therefore there can't be any consciousness here."

The Chinese room

In the thought experiment, we are asked to imagine a man who does not speak Chinese playing the role of the homunculus inside a computer, just following detailed instructions obediently, manipulating symbols that he himself does not understand, like a CPU.

The system that he operates carries out a Chinese conversation. Paper with Chinese characters on it (which he, being a Westerner, does not understand) comes in through one slot. He looks up rules and correspondences, makes notes, and finally sends matching output out another slot. There is rote manipulation of symbols, but no understanding. All very much like the Von Neumann architecture. Or a neuron.

To the outside world, it seems as though there is an intelligent Chinese-speaker in the room. It's not the man, so who is the speaker? Who, if anyone, is it in there that understands Chinese?

The test begins ... now

The Turing test is absurdly easy to pass when the human is unsuspecting. Eliza's canned responses can do it for a while. But it is also fiendishly hard to pass when the human is a prepared, critical judge. How would you react if I asked you the same question five times in a row? Would you, like a pocket calculator, give the same answer five times over? What if I made up a word, defined it for you, used it in the rest of the conversation and asked you to use it too? If you cannot remember what I have said to you, and even learn from it, you cannot pass the Turing test.
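To see how shallow canned responses are, here is a minimal Eliza-style sketch in Python (my own illustration; the keywords and replies are invented, not Eliza's actual script). Because it keeps no memory, it hands a prepared judge exactly the failure described above: ask it about the same thing five times and you get the same canned behaviour.

    import random

    RULES = {
        "mother": "Tell me more about your family.",
        "sad":    "Why do you think you feel that way?",
    }
    DEFAULTS = ["I see.", "Please go on.", "How does that make you feel?"]

    def reply(utterance):
        # Pick the first canned reply whose keyword appears in the input;
        # otherwise fall back to a stock non-committal phrase.
        for keyword, canned in RULES.items():
            if keyword in utterance.lower():
                return canned
        return random.choice(DEFAULTS)

    # Ask the same question five times: the lack of state shows at once.
    for _ in range(5):
        print(reply("Why do you keep asking me about my mother?"))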

If I asked you about the latest soccer results, would you recite some recent facts and give an opinion on them, or, like me, would you give your reasons why you don't care? If I asked you about politics, maybe you would take the other tack. If I told a joke, would you be able to explain what makes it funny, even if you didn't find it funny?5

The man in the room just matches up output symbols to input symbols. If you were debating philosophy, to what extent would your responses "just match up" to what was said to you? But the room passes the Turing test, so we know that the program being hand-simulated is at the very least orders of magnitude more complex than any chatterbot that we have yet made, and has a vast amount of internal state. So how is it clear that this is utterly different from a brain?

The Chinese room experiment is an attempt to misdirect us by asking us to imagine something far too simple to be workable, and then using this to dismiss more complex programs as unworkable too.

The canned reply

The rest of the argument I shall leave to Daniel Dennett's own words:

The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinarily supple, sophisticated and multilayered system, brimming with "world knowledge" and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, its own "motivations" and the motivations of its interlocutor, and much, much more. Searle does not deny that programs can have all this structure, he simply discourages us from attending to it. But if we do a good job imagining the case, we are not only entitled but obliged to imagine that the program Searle is hand-simulating has all this structure. But then it's no longer obvious, I trust, that there is no genuine understanding of the joke going on. Maybe billions of actions of these structured parts produce genuine understanding after all. If your response to this hypothesis is that you haven't the faintest idea whether there could be genuine understanding in such a complex system, that is enough to show that Searle's thought experiment depends, illicitly, on your imagining too simple a case and drawing the "obvious" conclusion from it.

We see clearly that there is nothing like genuine understanding in any hunk of programming small enough to understand readily. Surely more of the same, no matter how much more, could never add up to genuine understanding. But if we are materialists who are convinced that one way or another brains are responsible on their own, without miraculous assistance, for understanding, we must admit that genuine understanding is somehow achieved by a process composed of interactions between a host of subsystems none of which understand a thing by themselves.

How might we try harder? With the help of some handy concepts: the intermediate-level software concepts that were designed by computer scientists to keep track of the otherwise unimaginable complexities in large systems.

All these entities are organised into a huge system, the activities of which organise themselves around its own centre of narrative gravity. Searle, labouring in the Chinese room, does not understand Chinese, but he is not alone in the room. There is also the System, and it is to that self that we should attribute any understanding of the joke.


1) Daniel Dennett: Consciousness Explained
2) Ibid.
3) Perhaps you think that the way that the mind works is not supernatural, yet it cannot be represented as an algorithm. So what is it then? Perhaps your definition of algorithm is too narrow. An algorithm is not necessarily deterministic: hill climbing with random moves and random restarts is an algorithm (a tiny sketch follows these notes). An algorithm can be adaptive or self-modifying; the concept behind genetic algorithms is itself an algorithm. If a computer CPU, which surely can do no other than algorithmic processes, is in theory capable of simulating atoms so exactly that simulated chemical reactions can take place, then is this dance of atoms not viewable as an algorithm? And given sufficient (large but by no means infinite) processor power, could not the atoms be those of a brain?
4) Daniel Dennett: Consciousness Explained
5) Ibid.
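As promised in note 3, a tiny sketch in Python of the randomised-algorithm point (my own illustration, with an invented scoring function): hill climbing with random moves and random restarts is non-deterministic, yet it is unmistakably an algorithm.

    import random

    def hill_climb(score, start, neighbours, steps=1000):
        # Repeatedly try a random neighbour and keep it if it scores
        # at least as well; randomness does not stop this being an algorithm.
        best = start
        for _ in range(steps):
            candidate = random.choice(neighbours(best))
            if score(candidate) >= score(best):
                best = candidate
        return best

    # Maximise a simple function of one integer, from several random restarts.
    score = lambda x: -(x - 17) ** 2
    neighbours = lambda x: [x - 1, x + 1]
    results = [hill_climb(score, random.randint(-100, 100), neighbours)
               for _ in range(5)]
    print(results)  # each restart converges on 17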
