
The key to the Chinese Room

Gallagher S. (2009) The key to the Chinese Room. In: Leidlmair K. (ed.) After cognitivism: A reassessment of cognitive science and philosophy. Springer, Dordrecht: 87–96. Available at http://cepa.info/4208
Table of Contents
The Systems Approach
An Expanded System
The Internalist Objection to the Expanded System
Turning the Key
References
John Searle’s famous thought experiment concerning the Chinese Room (CR) is cast rhetorically in terms that are standard for the target it seeks to defeat, the strong computational claims made about human intelligence by “strong AI” (Searle 1980). Thus, the problem is laid out in terms of physics, syntax, and semantics. The CR argument demonstrates that semantics cannot be reduced to computational syntax – or that syntax by itself can never give you semantics (intentionality, meaning).
In brief, the argument is in the form of a thought experiment in which a non-Chinese-speaking (e.g., English-speaking) person is installed in a room. The room has a table, a large book containing a set of rules, and paper on which to write. There are two slots in the walls – an entrance and an exit slot. Through the entrance slot pieces of paper containing Chinese characters come into the room. Each time this happens, the person has the task of writing Chinese characters on blank sheets of paper, using the book of elaborate rules which tell him which characters to write when he sees a specific combination of characters on the paper that comes in through the slot. He then pushes what he has written through the exit slot. Unbeknownst to this person, the Chinese characters that he receives from outside of the room are questions composed by Chinese speakers. If he follows the set of rules perfectly, the Chinese characters that he writes and pushes through the exit slot are answers to precisely those questions. From the outside, observers conclude that the person in the CR understands Chinese. The person in the CR, however, does not understand Chinese, and doesn’t even know that he is processing questions or composing answers. He is performing a set of syntactical operations, following the instructions (the syntax) contained in the book. Thus, Searle concludes, there is no understanding of Chinese, no Chinese semantics or intentionality involved.
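The occupant’s procedure is purely formal: pattern in, pattern out. As a minimal sketch of what the thought experiment describes (the particular table entries, names, and the lookup-table simplification are my own illustrative inventions, not Searle’s; real conversational competence obviously cannot be captured by a finite table):

# A toy model of the occupant's task: the "rule book" maps incoming
# strings of Chinese characters to outgoing strings. Neither the table
# nor the lookup makes any reference to what the characters mean.

RULE_BOOK = {
    "你好吗": "我很好",          # glosses ("How are you?" -> "I am fine")
    "你叫什么名字": "我叫王明",  # are for the reader; the room never sees them
}

def chinese_room(incoming: str) -> str:
    """Match the shapes of the incoming characters against the rules
    and emit the prescribed output. No step here requires, or produces,
    an understanding of Chinese."""
    return RULE_BOOK.get(incoming, "")

# From outside, the exchange looks like competent Chinese conversation:
print(chinese_room("你好吗"))  # prints 我很好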
Not everyone, of course, accepts this argument or considers it a perfect or knockdown demonstration against Strong AI (e.g., Boden 1990; Cole 1984; Copeland 2002; Dennett 1991; Fodor 1991; Haugeland 2002; Maudlin 1989; Rey 1986). For purposes of this paper, however, I want to fully accept Searle’s point that syntax does not add up to semantics. That still leaves the question: What does give us semantics? In terms of the argument, what else do we need in the physical-syntactical system to make it a system with semantics?
The CR may not have been designed to answer this question; its design was specifically framed in terms of defeating strong AI using the categories that AI was using at the time. The subsequent discussions of the CR argument, and the problem of semantics, hover around issues concerning necessary and sufficient conditions for semantics. I suggest that the design of the CR argument, although perfectly adequate for purposes of critiquing AI, nonetheless frames the problem of semantics in a way that oversimplifies the cognitive system, leads to one particular answer and excludes others. This is also the case with the various “replies” that were made to CR.
The “systems reply,” for example, states that it may not be the syntax alone, but the whole system – the syntax and the physics (the person, but also the room, the Chinese characters, the rule ledger, etc.) – that generates the semantics. My intention in this paper is not to champion the systems reply or to use it to defend Strong AI. But I’ll take the systems reply as my point of departure, and I’ll begin by asking: What precisely are the elements of the system, or what other elements need to be added to the system if we are to explain semantics? I’ll develop this view along lines that also incorporate aspects of the “robot reply,” which argues that the system has to be embodied in some way, and exposed to the world outside of the CR. This kind of approach has already been outlined by others (Rey 1986; Harnad 1989, 2002; and especially Crane 2003), but I don’t follow these lines back to the position of an enhanced and strengthened AI. Properly constructed, this hybrid systems/robot reply – or what I’ll call more generally, the systems approach – doesn’t lead us back to the tenets of Strong AI, but can actually serve Searle’s critique. Indeed, I’ll suggest that the best systems approach is already to be found in Searle’s own work, although Searle misses something important in his rejection of the systems reply and in framing his answer to the question of semantics in terms of the biological nature of the brain.
The Systems Approach
Searle argues that the systems reply, which he attributes to Berkeley (not the philosopher, but, curiously enough, part of a university system), does not adequately counter the Chinese room argument. The systems reply, as summarized by Searle (1981, pp. 288–289), is this:
While it is true that the individual person who is locked in the room does not understand the [Chinese] story, the fact is that he is merely part of a whole system and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has ‘data banks’ of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual, rather it is being ascribed to this whole system of which he is a part.
On this view, the system as a whole understands Chinese. But what elements constitute the system? The syntax, the data bank of Chinese symbols, a “workspace” where calculations are made, the room itself, and so on. Searle’s response is that if we internalize all the elements of the system – i.e., memorize the rules and symbols and let the person compute these things in his head – the person will still not understand Chinese. Searle even goes so far as to say that “we can even get rid of the room and suppose he works outdoors.” Even in that case there is still no understanding of Chinese.
Searle’s response motivates some questions: What elements make up the system? What does it mean to internalize the system? What does it mean to work outside the room? Searle includes the rules and the data banks of symbols as elements of the system, elements that can be written down on the paper that the person uses to do the work. He contends that, in principle, they can be internalized, by which he means that they can be put into memory (“in his head”). Moreover, this seems to be all there is to the system: “The individual then incorporates the entire system. There isn’t anything at all to the system which he does not encompass” (1981, p. 289).
This sets up Searle’s sarcastic apology for even considering this as a viable reply to the CR argument. “Actually I feel somewhat embarrassed even to give this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese” (p. 289). Is Searle’s sarcasm justified? I want to suggest that both the original systems reply and Searle’s response oversimplify the story in a threefold way.
First, syntactic rules and the database of Chinese characters cannot be reduced to scraps of paper. The combination of these two finite sets (rules and characters) yields, for all practical purposes, an infinite linguistic system (a toy sketch of this generative point follows the third point below).
Second, the individual in the CR already is an intentional system (already possesses semantics) and not just a memory bank. Since the person understands the English-language instructions, there is clearly some kind of English intentionality in the CR. Despite Searle’s claims that “I [the person in the Chinese room] still understand nothing” (285); that “I have everything that artificial intelligence can put into me by way of a program, and I understand nothing” (286); and that “a human will be able to follow the formal principles without understanding anything” (287), still he cannot fail to say, and he does say that “the rules are in English and I understand these rules as well as any other native speaker of English” (284). The individual in the CR not only understands English, but also understands the rules as syntactic rules, or at least understands how to apply them (as Margaret Boden 1990 has pointed out). The individual may also believe or doubt that he is following the rules correctly, and may enjoy or not enjoy doing so, and so forth.
Third, it is not clear that to “internalize” the system means simply to convert it to memory. Human memory, in contrast to a computer’s memory bank, is leaky. It leaks in the sense that it is always and already interactive with a full intentional system. For example, if I see the Chinese character 人 (‘man’ or ‘human’) often enough, it could easily spring to my conscious attention, without my actively calling it up, when I see my daughter draw a stick-man. For a less transparent reason, the character 囚 might serve to remind me of my own situation as the occupant of the Chinese room. Without knowing the Chinese meaning of the characters one might still discern similarities in shape between 人 and 囚, which looks a bit like a stick-man pushed into a small room, and which, in Chinese, actually signifies ‘confinement’ (see Wieger 1965). A character may have such aesthetic appeal that it starts to manifest itself in my sketches or doodles. Or a syntactic rule designed to function in the CR may invade my concentration when I am attempting to solve a mathematical problem.
So, to internalize syntactic rules and Chinese characters is not simply to commit them to memory; it is rather to introduce a potentially infinite linguistic system into a general and “leaky” system of intentional experience that tends to see meaning wherever it can find it.
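To make the generative point above concrete, consider a minimal sketch (a toy example of my own, not anything in Searle’s text): even a single recursive rewrite rule over a two-symbol alphabet licenses an unbounded set of distinct well-formed strings, so a finite rule book plus a finite character set is not reducible to any fixed stack of paper.

# Toy grammar: S -> 'a' S | 'b'. Finite rules, finite alphabet,
# yet the language {b, ab, aab, aaab, ...} has no longest member.

def derive(depth: int) -> str:
    """Apply the recursive rule `depth` times, then terminate with 'b'."""
    return "a" * depth + "b"

# Every depth yields a new, distinct well-formed string:
print([derive(n) for n in range(5)])  # ['b', 'ab', 'aab', 'aaab', 'aaaab']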
If, however, we ignore these complications and adopt the oversimplified concept of system, then we still have the question, whence semantics? Searle argues, correctly, not from the syntax. But the only thing left in the system, as construed, is the physics – and as applied to human cognition, this means the neurophysiology. For Searle, semantics/intentionality is an emergent property of the brain, not because of its quantitative complexity (although Searle does not deny this kind of complexity), but because of its biological nature. “Whatever else intentionality is, it is a biological phenomenon and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena” (1981, p. 305).
Dennett (1991) adopts some version of the systems reply, claiming that the complexity of the system matters. This involves adding “more of the same”: he wants to add more of the same elements that Searle identifies as part of the system, that is, syntactic rules and databases, and, in contrast to Searle, to reduce the person’s intentionality to the syntactical processes in the system, specifically to the formal syntax of the brain. This, according to Dennett, would enrich the system sufficiently to produce the semantics. “We see clearly enough that if there were understanding in such a giant system, it would not be Searle’s [as the occupant of the Chinese room] understanding since he is just a cog in the machinery, oblivious to the context of what he is doing” (1991, p. 438). What Searle would want to call the minded semantics, Dennett attributes to the “mindless routine for transforming symbol strings into other symbol strings according to some mechanical or syntactical recipe” (438). The brain processes that Searle thinks so important, Dennett suggests, are “composed of interactions between a host of subsystems none of which understand a thing by themselves” (439).
The difference between Searle’s conclusion and Dennett’s systems approach is clear. For Dennett, the right quantitative complexity (“more of the same”) of syntactical operations can account for semantics – and these operations can be instantiated in a biological system or a sufficiently complex artificial system. For Searle, syntax of whatever quantity and complexity cannot provide a sufficient condition for semantics, and the answer has to be in the biology. “But in addition to the level of the neurophysiology, and the level of intentionality, we don’t need to suppose there is another level; a level of digital computational processes” (1984, p. 54). “There are brute, blind neurophysiological processes and there is consciousness, but there is nothing else” (1992, p. 228). Of course, one should note, there is plenty of neurobiology in the CR – the individual in the CR does have a brain. One might wonder, then, why the individual doesn’t develop the semantics, since he has what Searle deems necessary to do so.[Note 1]
Searle arrives at his solution not by demonstrating how neurobiology can generate semantics, but by a process of elimination.
1. The system is composed of physics, syntax, and semantics.
2. Semantics is not reducible to syntax (as demonstrated by the CR argument).
3. Semantics cannot explain itself.
4. So semantics must be generated in the physics – the neurophysiology.
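Schematically, and in my own notation rather than Searle’s, the elimination runs as follows, where $S$ names whatever generates semantics:

\[
\begin{array}{ll}
\text{(P1)} & S = \text{physics} \;\lor\; S = \text{syntax} \;\lor\; S = \text{semantics}\\
\text{(P2)} & S \neq \text{syntax} \quad \text{(the CR argument)}\\
\text{(P3)} & S \neq \text{semantics} \quad \text{(semantics cannot explain itself)}\\
\hline
\therefore & S = \text{physics, i.e., the neurophysiology}
\end{array}
\]

The inference is valid only if (P1) exhausts the candidates – and that is precisely the assumption questioned in what follows.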
The CR argument accomplishes what Searle intends it to accomplish, that is, it shows that intentionality cannot be reduced to the workings of a syntactic program. But it leads him, I suggest, to an oversimplified conception of the cognitive system because in constructing the CR, he accepts the definition of the system generally offered by a cognitive science strongly inspired by strong AI.
An Expanded System
Searle’s oversimplification of the system is tied to the fact that in describing the CR he locks himself in (“suppose that I’m locked in a room...” [1981, p. 284]). The Chinese room imposes a certain isolation on its occupant. The walls of the room are, to borrow a term from Rawls and a completely different context, a “veil of ignorance” drawn between the occupant and the exterior world. “Suppose that unknown to you [the occupant] the symbols passed into the room are called ‘questions’ by the people outside the room...” (Searle 1984, p. 32, emphasis added). I want to suggest that when Searle himself occupies the CR the veil of ignorance extends even to knowledge of his own theories! Are there not resources within Searle’s own philosophy to work out a more adequate systems approach? Searle’s CR argument is seemingly made in complete isolation from his theories of speech acts and intentionality, and in regard to the latter, specifically the concept of the “Background” of intentionality (Searle, 1983, 1992).
The Background contains “certain fundamental ways of doing things and certain sorts of know-how about the way things work...” (1983, p. 20). The Background, Searle insists, is presupposed by intentionality, and the implications of this fact are far reaching. “Without the Background there could be no perception, action, memory, i.e., there could be no such Intentional states.... [T]he Background provides necessary but not sufficient conditions for understanding, believing, desiring, intending, etc., and in that sense it is enabling and not determining” (1983, 151–152, 158).
Living one’s life in the Chinese Room, which is a small and non-Chinese world, constrains, limits, or more precisely excludes the relevant Chinese Background. Specifically, the occupant’s capacities for action and interaction, including linguistic activity, are limited. Indeed, there is a complete lack of the social interactions and shared experiences normally required for acquiring one’s first language, or becoming fluent in a second language. If one went into the CR without first having a language, one would never get a language. Even if one does have language, as the occupant has English, there is no translation mechanism in Searle’s CR between English-intentionality and Chinese-intentionality, and certainly no social interaction in Chinese culture – no Chinese intersubjectivity.
Searle in the CR is locked in an artificially impoverished environment that excludes social relations that would help to make sense out of the Chinese language. This experimental design helps to make a narrow point: syntax is not sufficient for semantics. But when Searle goes on to address the problem of semantics, he still seems to be locked inside the CR since he considers only those elements that he had put into the room to begin with: it can’t be the syntax, so it must be the neurophysiology. Searle’s account of semantics as an emergent feature of human neurobiology ignores his own more complex account of intentionality and Background. If, according to Searle, intentionality “is that property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world” (1983, p. 1, emphasis added), then the Chinese room locks out Chinese intentionality. Fodor is right to remark that “Searle gives no clue as to why he thinks the biochemistry is important for intentionality and prima facie, the idea that what counts is how the organism is connected to the world seems far more plausible” (1991, p. 521).
The door is now open to a more adequate conception of the system in Searle’s response to the systems reply. The occupant internalizes the syntactic rules and the Chinese characters and then unlocks the door: “We can even get rid of the room and suppose he works outdoors” (1981, p. 289). If the “outdoors” consists of the Chinese outdoors – action and interaction in a Chinese culture, the Chinese Background – could Searle continue to claim that “he understands nothing of the Chinese, and a fortiori neither does the system...” (289)? Rather, the person’s responses would soon become genuine, contextualized Chinese speech acts, as they do when someone learns Chinese by the immersion method.
A more adequate systems approach keeps in mind the artificiality and oversimplification of the CR. The complete system involves a complexity that includes but goes beyond the internal complexities of brain physiology and syntax. It includes the external complexities of the physical and social environment, cultural traditions, and the intersubjective interaction that can only be realized in embodied practices, contextualized speech acts, and developing narratives in the world.
The Internalist Objection to the Expanded System
Searle would most likely reply[Note 2] that all of these extra-syntactical elements that make up the Background enter into the system by way of neurophysiology. Thus, “when we describe a man as having an unconscious belief, we are describing an occurrent neurophysiology. … The occurrent ontology of those parts of the Network that are unconscious is that of a neuro-physiological capacity, but the Background consists entirely in such capacities” (1992, p. 188). Searle seemingly shuts the door to any escape from the CR, just when we found a key that would seem to unlock a solution. He reverts to the isolation of the CR, and specifically to a very close cousin in the world of thought experiments, the brain in the vat. At the same time that he has much to say about the Background, he also says:
Even if I am a brain in a vat–that is, even if all of my perceptions and actions in the world are hallucinations, and the conditions of satisfaction of all my externally referring Intentional states are, in fact, unsatisfied–nonetheless, I do have the Intentional content that I have, and thus I necessarily have exactly the same Background that I would have if I were not a brain in a vat and had that particular Intentional content. That I have a certain set of Intentional states and that I have a Background do not logically require that I be in fact in certain relations to the world around me … (1983, p. 154).
Searle’s internalist position keeps him locked up in the CR, locked into his conclusions, and notwithstanding his work on intentionality and the Background, immersed in a vat full of neurochemicals rather than in the world.
The brain is all we have for the purpose of representing the world to ourselves and everything we can use must be inside the brain.... Each of our beliefs must be possible for a being who is a brain in a vat because each of us is precisely a brain in a vat; the vat is a skull and the ‘messages’ coming in are coming in by way of impacts on the nervous system (1983, p. 230).
My own view (and in this I think I do depart from Wittgenstein) is that ultimately our explanations of these [Background] capacities will be biological. That is to say, the existence of Intentional states is explained by the fact that we are creatures with a certain sort of neurophysiological structure, and certain sorts of biological capacities (1991, p. 293; see 1992, p. 188).
Yet Searle does go on to admit that “I could not, as a matter of empirical fact, have the Background that I do have without a specific biological history and a specific set of social relations to other people and physical relations to natural objects and artifacts” (Ibid.).
Turning the Key
Let’s say yes to the wonderful complexity of the brain. But brain complexity doesn’t come in a vat – neither ontogenetically nor phylogenetically. It comes from the brain being in a body which is in an environment which is social as well as physical. The communication that gives rise to semantics is not a communication on paper through slots, or bits of information conducted by neurons, but a communication through embodied practices – gestures, facial expressions, movements, actions and interactions, speech-acts, narratives, building cultures, building backgrounds, and so forth.
If we liberate Searle from the confines of the CR, and the CR argument, if we open the door to the “outdoors,” the Chinese outdoors, then Searle will not be able to say that “he understands nothing of the Chinese, and a fortiori neither does the system...” (1981, 289). Liberated from the Chinese room, put into a Chinese context, Searle would navigate his way into a cultural and linguistic world, a world of Chinese traditions and social meaning, and equipped with his own English and with the syntax and characters relevant to Chinese, he would be able to see the actions that would follow from the delivery of his syntactically constructed Chinese answers. In effect, his delivery of answers would then constitute genuine, contextualized speech acts, and in short order he would come to understand something in Chinese.[Note 3]
According to a larger version of this argument (see Gallagher 2004), the cognitive sciences run the risk of creating abstract and oversimplified paradigms unless they recognize the complications introduced by what Howard Gardner calls the “murky concepts” of affect, context, culture, and history (1985, p. 42). These are hermeneutical factors that transcend physiological or syntactical performance and yet operate as necessary conditions for human cognition. The term ‘murky’ signals an objection. Once we open the door to murky hermeneutical factors, the objection might run, don’t we run the risk of making cognitive science less scientific? But when did science ever make progress by shutting its eyes, locking the door, and ignoring unavoidable facts? Indeed, cognitive science would make itself less scientific by denying the effects of such hermeneutical factors, and this is precisely what it does when it opts for oversimplified, reductionistic theories.
I am not suggesting that the neurosciences give up their natural-science status and become more hermeneutical. I’m not even sure what that would mean. I am suggesting, however, that the cognitive sciences do define a unique and complex area of research that requires something more than the natural science procedures that involve explanation and prediction in causal terms at the lowest (most reduced) level of analysis.[Note 4] Certain conditions of cognition – the hermeneutical factors of culture, language, and social interaction – cannot be completely reduced to either computational operations or neurophysiological processes.
References
Boden M. (1990) Escaping from the Chinese Room. In: M. A. Boden (ed.) The Philosophy of Artificial Intelligence, New York: Oxford University Press.
Cole D. (1984) Thought and thought experiments. Philosophical Studies 45: 431–444.
Copeland J. (2002) The Chinese Room from a logical point of view. In: J. Preston and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, New York: Oxford University Press.
Crane T. (2003) The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation, London: Routledge.
Dennett D. C. (1991) Consciousness Explained, Boston: Little, Brown and Company.
Fodor J. A. (1991) Searle on what only brains can do. In: D. M. Rosenthal (ed.) The Nature of Mind, Oxford: Oxford University Press.
Gallagher S. (2004) Hermeneutics and the cognitive sciences. Journal of Consciousness Studies 11(10–11): 162–174.
Gardner H. (1985) The Mind’s New Science: A History of the Cognitive Revolution, New York: Basic Books.
Harnad S. (1989) Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1: 5–25.
Harnad S. (2002) Minds, machines, and Searle 2: What’s right and wrong about the Chinese Room argument. In: J. Preston and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, New York: Oxford University Press.
Haugeland J. (2002) Syntax, semantics, physics. In: J. Preston and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, New York: Oxford University Press.
Marcel A. J. (1988) Electrophysiology and meaning in cognitive science and dynamic psychology: Comments on ‘Unconscious conflict: A convergent psychodynamic and electrophysiological approach’. In: M. J. Horowitz (ed.) Psychodynamics and Cognition, Chicago: Chicago University Press.
Maudlin T. (1989) Computation and consciousness. Journal of Philosophy 86: 407–432.
Rey G. (1986) What’s really going on in Searle’s “Chinese Room.” Philosophical Studies 50: 169–185.
Searle J. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417–457.
Searle J. (1981) Minds, brains, and programs. In: J. Haugeland (ed.) Mind Design, Montgomery VT: Bradford Books.
Searle J. (1983) Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.
Searle J. (1984) Minds, Brains and Science, Cambridge MA: Harvard University Press.
Searle J. (1991) Response: The background of Intentionality and action. In: E. Lepore and R. Van Gulick (eds.) John Searle and his Critics: 289–299. Oxford: Basil Blackwell.
Searle J. (1992) The Rediscovery of the Mind. Cambridge MA: MIT Press.
Wieger L. (1965) Chinese Characters: Their Origin, Etymology, History, Classification and Signification. A Thorough Study from Chinese Documents (Trans.) L. Davrout. New York: Dover/Paragon.
Endnotes
1
Dennett notes that “the differences in a brain whose native language is Chinese rather than English would account for huge differences in the competence of that brain, instantly recognized in behavior, and significant in many experimental contexts” (1991, 209–210).
2
And has replied in this way at a conference where I presented an earlier version of this paper, Backgrounding: From the Body of Knowledge to the Knowing Body. Interuniversity Centre Dubrovnik, Croatia (5–7 October 2007).
3
Tim Crane (2003) argues that “...if Searle had not just memorized the rules and the data, but also started acting in the world of Chinese people, then it is plausible that he would before too long come to realize what these symbols mean” (125). Crane appears to end with a version of the Robot Reply: “Searle’s argument itself begs the question by (in effect) just denying the central thesis of AI – that thinking is formal symbol manipulation. But Searle’s assumption, nonetheless, seems to me correct … the proper response to Searle’s argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But if you let the outside world have some impact on the room, meaning or ‘semantics’ might begin to get a foothold. But of course, this concedes that thinking cannot be simply symbol manipulation” (127).
4
On this and related issues, see Marcel (1988).