Do Machines Think Like We Do?
The debate on artificial intelligence (AI) centers on the following question: Would an appropriately programmed computer with the right inputs and outputs merely simulate a mind, or actually have a mind? The first alternative is uncontroversial. But the second, called strong AI, would have enormous repercussions if it were true. Certainly it could upset the arguments concerning immaterial minds that I am using in this book. So let us look more closely into strong AI.
The pronouncements of the first wave of AI researchers reflected strong AI. For example, in 1957 AI founder Herbert Simon declared that:
there are now in the world machines that think, that learn and create…[We can explain] how a system composed of matter can have the properties of mind.[1]
John Haugeland, an influential voice in cognitive science, wrote in 1985 that “we are, at root, computers ourselves.”[2] Specifically, strong AI represents the view that suitably programmed computers can understand natural language and have other mental capabilities similar to those of the humans whose abilities they mimic in playing chess or using language.
In the last decades of the twentieth century, the question of whether machines can think was hotly debated between those who rejected the idea and the visionaries of strong AI. In January 1990, the popular monthly Scientific American took the debate to a general scientific audience. One of the main attacks on strong AI comes from the Berkeley philosopher John Searle.[3] He introduced what is now widely known as the “Chinese Room Argument,” first presented in 1980.[4]
Searle’s Chinese Room argument can be summarized as follows:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.[5]
The question Searle poses is: does the machine understand Chinese? Or is it merely simulating the ability to understand Chinese? If you can carry on an intelligent conversation with an unknown partner, does this imply that your statements are understood? Strong AI claims that the ability to converse demonstrates understanding; Searle, in contrast, argues that we cannot describe what the machine is doing as “thinking.” Since it does not think, it does not have a “mind” in anything like the normal sense of the word. Therefore, he concludes, strong AI is mistaken.
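To see how little the room requires, here is a minimal sketch in Python (my illustration, not Searle's; the tiny rule table and the function name room are invented for the example) of a program that produces replies by looking up the incoming symbols in a rule book. Nothing in it represents the meaning of any symbol, yet its outputs could look like fluent answers.

# A toy "Chinese Room": replies are produced by matching the input symbols
# against a rule book; nothing here represents what any symbol means.
# The rule table and the question strings are invented purely for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "The weather is fine today."
}

def room(question: str) -> str:
    # Match the shapes of the incoming symbols against the rule book and
    # hand back whatever string the book pairs with them. No parsing,
    # no translation, no understanding takes place.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I do not understand."

print(room("你好吗？"))  # prints a fluent-looking reply with zero comprehension

Scaling the table up, or replacing the lookup with arbitrarily sophisticated rules, changes the quantity of symbol manipulation but not its character; that is exactly the point Searle presses against strong AI.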
What Searle denotes by “understanding” is what philosophers call intentionality. Intentionality is the property of being about something, of having content. In the nineteenth century, psychologist Franz Brentano reintroduced this term from scholastic philosophy and held that intentionality was the “mark of the mental.”[6] The term was also widely used by the philosopher Edmund Husserl. Then Gottlob Frege made it a standard practice in analytic philosophy to investigate the intentional structure of human thought by inquiring into the logical structure of the language used by speakers to express it or to ascribe it to others.[7] In other words, beliefs are intentional states: they have propositional content, as when one believes that a given proposition is true.
Needless to say, Searle’s argument has been attacked from many sides. I won’t go into those discussions here. However, Searle’s thought experiment appeals to our strong intuition that what the theoretical computer in the “Chinese Room” accomplishes does not amount to understanding Chinese. The consensus now appears to be that Searle is quite right on this point: no matter how you program a computer, it will not be a mind and will not understand natural language. This has not constituted a final defeat for strong AI, which appears to live on in the pursuit of the “singularity,” defined as the point where nonbiological intelligence will match and then exceed the range and subtlety of human intelligence. But I have not seen a defeat of the Chinese Room argument.
[1] Quoted in Russell, Stuart J., and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd ed. (Upper Saddle River, NJ: Prentice Hall, 2003), 21, 17.
[2] Haugeland, John, Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press, 1985), 2.
[3] See his Intentionality: An Essay in the Philosophy of Mind (Cambridge: Cambridge University Press, 1983) and his The Rediscovery of the Mind (Cambridge, MA: MIT Press, 1992).
[4] Searle, John, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 417–457.
[5] Searle, John, “The Chinese Room,” in R.A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences (Cambridge, MA: MIT Press, 1999).
[6] Brentano’s Irreducibility Thesis claims that intentionality is an irreducible feature of mental phenomena, and since no physical phenomena could exhibit it, mental phenomena could not be a species of physical phenomena.
[7] Frege described the following puzzle (1892): How can one rationally hold two distinct singular beliefs that are both about one and the same object?