Imagine that you are locked up in a room with no windows, save for two slits cut out in one of the walls. One slit is marked “INPUT”, and the other is marked “OUTPUT”. There are countless books all around you and an instruction sheet on the wall. It tells you that you are about to receive some notes through the INPUT slit, and that once you do, you are to flip through the books to locate the correct response to what is written on each note. For example, suppose an INPUT note bears the string of symbols “^%$@&&”. Your job is to flip through the books to locate this specific string and its indicated response, which might be “&#$!++”, and then write that response on a separate slip of paper which you pass through the OUTPUT slit.
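The room's procedure amounts to a simple table lookup. Here is a minimal sketch in Python, assuming for illustration that the rule books pair each input string with exactly one response (the `RULE_BOOK` contents below are invented, apart from the example pairing in the text):

```python
# The rule books, modelled as a lookup table. The person in the room
# never interprets these strings; they only match and copy them.
RULE_BOOK = {
    "^%$@&&": "&#$!++",  # the example pairing from the text
}

def person_in_the_room(input_note: str) -> str:
    """Find the input string in the rule books and copy out its
    indicated response, with no understanding of either string."""
    # An unrecognised note simply gets no reply.
    return RULE_BOOK.get(input_note, "")

print(person_in_the_room("^%$@&&"))  # -> &#$!++
```

The point of the sketch is how little it contains: matching and copying, with nothing resembling comprehension anywhere in the procedure.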
This is the Chinese Room Argument, a thought experiment* posited by John Searle in 1980, except that the participant deals with Chinese characters instead of keyboard symbols.1 Searle’s person-in-the-room answers questions written in Chinese, posed by real Chinese speakers outside the room. Even though this person-in-the-room doesn’t understand a word of Chinese, he would be able to deceive the people outside into thinking that he does, by delivering accurate responses derived from the books and manuals around him.
In the above picture, the note going into the INPUT slit has the Chinese characters meaning “Where are you from?” written on it. The response, which comes out through the OUTPUT slit, has the Chinese characters “I am from China.” written on it.
Searle’s thought experiment serves as a springboard to answering the questions: Can machines think? Can they understand the meaning of words? Evidently, he thought not, for computers operate just like the person-in-the-room. They are programmed to recognise a sequence of commands and to respond with whatever their encoded programmes and algorithms instruct. Surely they don’t use words in the same way that a normal human would!
But tackling those questions means that we need to delve into what language is all about. What makes a language a language? What do we mean when we make certain utterances? What is meaning in the first place? Gottlob Frege thought that meaning has two components: sense and reference.2 Hilary Putnam argued against a psychological theory of meaning, holding that “meanings just ain’t in the head” and are largely determined by one’s external environment.3
(from left to right: Gottlob Frege, Hilary Putnam, Ludwig Wittgenstein)
And then there’s Ludwig Wittgenstein, whose theory of language is encapsulated in three words: Meaning is Use. That is, we know the meanings of words insofar as we know how to use them.
“For a large class of cases – though not for all – in which we employ the word “meaning”, it can be defined thus: the meaning of a word is its use in the language.” 4
Much of his theory is concerned with how we come to know the meanings of words. We learn them by being taught how to use them, through playing what he calls language games, and not by memorising dictionary definitions. Take the word “five”, for example. How does one teach a child the meaning of “five”? Surely not by telling her that it is an abstract numerical term. Rather, we engage in language games, using things like wooden blocks and fingers, to illustrate what the word “five” represents – five fingers, five blocks, five apples. We teach it to her by demonstrating its use.
Is Wittgenstein correct? If it is true that Meaning is Use, then it seems that computers do understand the meaning of words after all. They are engaging in the use of language by recognising how certain letters, symbols, or characters are used in response to others. Even though these machines demonstrate an understanding based on mere programming and computation, it is an understanding of meaning nonetheless. Such rule-governed activity is not so different from how humans come to know language and meaning.
If it is true that computers can master human language, then what exactly do they need in order to learn and understand meaning in language?
*A thought experiment is carried out using the faculty of imagination – imagining how things would be if they occurred in a certain way. Although it is not a formal means of scientific investigation, engaging in thought experiments enables one to consider the repercussions that might follow from certain hypothetical situations. As you have seen, the Chinese Room Argument is one such example. If you are interested, you may visit this page for other interesting examples of thought experiments.