The Text
The issue at stake is how we think about – and then how we act toward – members of our own kind of being, human beings. Are we immortals, created and redeemed to be with God Himself forever and ever, or are we mere machines?
In my philosophy courses, especially in my ethics and bioethics courses, I insist that everything hinges on our understanding of what kind of being we know ourselves and our fellow human beings to be. For example, do we think of ourselves in the rich, ennobling terms of Psalm 8 and Hebrews 2 as members of God’s species by virtue of the Incarnation – or do we operate on the assumption that we are nothing more than transient biological machines?
Pause here. Read Psalm 8 and Hebrews 2. Read also Philippians and John’s Gospel for extra credit(!). See my study of Luther’s 1536 Disputation Concerning Man as a PDF at www.LutheranPhilosopher.com or as a video at our Concordia Bible Institute, http://www.concordiabible.org/.
(a) The text for our Master Metaphor is Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457.
There are many later versions of the Chinese Room, but I recommend starting with the 1980 paper and lingering there for a long while. A draft of this original version of Searle’s Chinese Room is available at www.cogprints.org/7150/1/10.1.1.83.5248.pdf. It is engaging writing – quite accessible and entertaining. Perhaps you’ll discover, as I have, that much of the writing about Searle’s Chinese Room misses or seriously misrepresents Searle’s actual text!
The Background
First, consider Thomas Hobbes’ 17th-century philosophical view of the human being as a complicated clockwork.
Hobbes’ 1651 Leviathan (revised Latin edition, 1668):
THE INTRODUCTION
Nature (the art whereby God hath made and governes the world) is by the art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. For seeing life is but a motion of Limbs, the begining whereof is in some principall part within; why may we not say, that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have an artificiall life? For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? Art goes yet further, imitating that Rationall and most excellent worke of Nature, Man. For by Art is created that great LEVIATHAN called a COMMON-WEALTH, or STATE, (in latine CIVITAS) which is but an Artificiall Man; though of greater stature and strength than the Naturall, for whose protection and defence it was intended; and in which, the Soveraignty is an Artificiall Soul, as giving life and motion to the whole body; The Magistrates, and other Officers of Judicature and Execution, artificiall Joynts; […]
For a philosophical evaluation of the Turing Test (1950) see the SEP at
http://plato.stanford.edu/entries/turing-test/
For an introduction to a 21st-century pop culture Turing Test see the interview with the director of the movie Ex Machina at
http://www.bloomberg.com/news/videos/2015-05-15/-ex-machina-charlie-rose-05-15-
Note: This is a good spot to talk about reasoning analogically, since Hobbes, Turing, and those who follow their example usually tell us that human beings are nothing more than complex clockwork machines or computers. We ought to pause and ask whether this dominant modern assumption about human beings is valid reasoning or simply a popular metaphor. For my introduction to reasoning analogically, please see my Logic 101 - Types of Reasoning, for Biblical Christians at www.issuesetc.org.
Analysis
First, there is the matter of the assumption that human beings are machines. This, too, calls for more careful attention to reasoning analogically (just how tight is the analogy between machine and human being, really?).
Be sure to grapple with William Dembski’s 1999 First Things article, Are We Spiritual Machines? at http://www.firstthings.com/article/1999/10/are-we-spiritual-machines.
Here is my lightly annotated short version of Dembski:
Are We Spiritual Machines?
William A. Dembski
Copyright (c) 1999 First Things 96 (October 1999): 25-31.
For two hundred years materialist philosophers have argued that man is some sort of machine. The claim began with French materialists of the Enlightenment such as Pierre Cabanis, Julien La Mettrie, and Baron d’Holbach (La Mettrie even wrote a book titled Man the Machine). Likewise contemporary materialists like Marvin Minsky, Daniel Dennett, and Patricia Churchland claim that the motions and modifications of matter are sufficient to account for all human experiences, even our interior and cognitive ones. Whereas the Enlightenment philosophes might have thought of humans in terms of gear mechanisms and fluid flows, contemporary materialists think of humans in terms of neurological systems and computational devices. The idiom has been updated, but the underlying impulse to reduce mind to matter remains unchanged.
Materialism remains unsatisfying, however; it seems inadequate to explain our deeper selves.
[...]
Not so for the tender–minded materialists of our age. Though firmly committed to materialism, they are just as firmly committed to not missing out on the benefits ascribed to religious experience. They believe spiritual materialism is now possible, from which it follows that we are spiritual machines. The juxtaposition of spirit and mechanism, which previously would have been regarded as an oxymoron, is now said to constitute a profound insight.
Consider Ray Kurzweil’s recent The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking, 1999). Kurzweil is a leader in artificial intelligence, specifically in the field of voice–recognition software. Ten years ago he published the more modestly titled The Age of Intelligent Machines, where he gave the standard strong artificial intelligence position about machine and human intelligence being functionally equivalent. In The Age of Spiritual Machines, however, Kurzweil’s aim is no longer to show that machines are merely capable of human capacities. Rather, his aim is to show that machines are capable of vastly outstripping human capacities and will do so within the next thirty years.
According to The Age of Spiritual Machines, machine intelligence is the next great step in the evolution of intelligence. That man is the most intelligent being at the moment is simply an accident of natural history.
[...]
[H]umans are not spiritual machines. Even so, it is interesting to ask what it would mean for a machine to be spiritual. My immediate aim, therefore, is not to refute the claim that humans are spiritual machines, but to show that any spirituality of machines could only be an impoverished spirituality. It’s rather like talking about "free prisoners." Whatever else freedom might mean here, it doesn’t mean freedom to leave the prison.
By a machine we normally mean an integrated system of parts that function together to accomplish some purpose. To avoid the troubled waters of teleology, let us bracket the question of purpose. In that case we can define a machine as any integrated system of parts whose motions and modifications entirely characterize the system. Implicit in this definition is that all the parts are physical. Consequently a machine is fully determined by the constitution, dynamics, and interrelationships of its physical parts.
This definition is very general. It incorporates artifacts as well as organisms. Because the nineteenth–century Romanticism that separates organisms from machines is still with us, many people shy away from calling organisms machines. But organisms are as much integrated systems of physical parts as are artifacts. Perhaps "integrated physical systems" would be more precise, but "machines" emphasizes the strict absence of extra–material factors from such systems, and it is that absence which is the point of controversy.
Because machines are integrated systems of parts, they are subject to what I call the replacement principle. This means that physically indistinguishable parts of a machine can be exchanged without altering the machine. At the subatomic level, particles in the same quantum state can be exchanged without altering the subatomic system. At the biochemical level, polynucleotides with the same length and sequence specificity can be exchanged without altering the biochemical system. At the organismal level, identical organs can be exchanged without altering the biological system. At the level of human contrivances, identical components can be exchanged without altering the contrivance.
The replacement principle is relevant here because it implies that machines have no substantive history. As Hilaire Belloc put it, "To comprehend the history of a thing is to unlock the mysteries of its present, and more, to disclose the profundities of its future." But a machine, properly speaking, has no history. What happened to it yesterday is irrelevant; it could easily have been different without altering the machine. If something is a machine, then according to the replacement principle it and a replica of it are identical. Forgeries of the present become masterpieces of the past if the forgeries are good enough. This may not be a problem for art dealers, but it does become a problem when the machines in question are ourselves.
For a machine, all that it is is what it is at this moment. We typically think of our pasts as either remembered or forgotten, and if forgotten then having the possibility of recovery. But machines do not, properly speaking, remember or forget; they only access or fail to access items in storage. What’s more, if they fail to access an item, it’s either because the retrieval mechanism failed or because the item was erased. Consequently, items that represent past occurrences but were later erased are, as far as the machine is concerned, just as though they never happened. Mutatis mutandis, items that represent counterfactual occurrences (i.e., things that never happened) but which are accessible can be, as far as the machine is concerned, just as though they did happen.
The causal history leading up to a machine is strictly an accidental feature of it.
Consequently, any dispositions we ascribe to a machine (e.g., goodness, morality, virtue, and, yes, even spirituality) properly pertain only to its current state and possible future ones, but not to its past.
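Dembski’s “replacement principle” and his claim that a machine has no substantive history can be restated in computational terms. Here is a minimal sketch of my own (it is not from Dembski’s article, and the part names and values are merely placeholders): if a thing is fully characterized by the current constitution of its parts, then an original and a replica with entirely different causal histories are indistinguishable.

```python
from dataclasses import dataclass

# A toy illustration of the replacement principle: a "machine" described
# entirely by the present configuration of its physical parts.
@dataclass(frozen=True)
class Machine:
    parts: tuple      # identifiers of its components (placeholder values)
    settings: tuple   # its current parameter values (placeholder values)

# Two machines with very different histories...
original = Machine(parts=("spring", "wheel"), settings=(1, 2))  # imagine: built long ago
replica = Machine(parts=("spring", "wheel"), settings=(1, 2))   # imagine: assembled yesterday

# ...are identical as far as a state-only description can tell:
print(original == replica)  # True -- the causal history leaves no trace in the description
```

This is just what Dembski means by forgeries becoming masterpieces: nothing in the machine’s present state records its past.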
Second, there is Searle’s understanding of intentionality. Reviewers and critics – particularly artificial intelligence theorists and engineers who dislike the deflationary outcome of Searle’s Chinese Room Argument (CRA) – tend to ignore or misrepresent intentionality. Notwithstanding their neglect, read the Abstract of Searle’s article for yourself: the CRA is all about intentionality.
For example, Searle writes,
Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese.
[…]
Precisely that feature of AI that seemed so appealing -- the distinction between the program and the realization -- proves fatal to the claim that simulation could be duplication. The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. But the equation, "mind is to brain as program is to hardware" breaks down at several points, among them the following three:
First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place -- only something that has the same causal powers as brains can have intentionality -- and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese.
Second, the program is purely formal, but the intentional states are not in that way formal. They are defined in terms of their content, not their form. The belief that it is raining, for example, is not defined as a certain formal shape, but as a certain mental content with conditions of satisfaction, a direction of fit (see Searle 1979), and the like. Indeed the belief as such hasn't even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems.
Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.
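To make Searle’s phrase “only a syntax but no semantics” concrete, here is a minimal sketch of my own (it is not from Searle’s paper; the Chinese strings and the rule pairings are merely illustrative placeholders). Whether the steps are executed by a computer or worked through by hand with a paper rulebook, the procedure pairs input shapes with output shapes and nowhere contains, or confers, any understanding of Chinese.

```python
# A toy "Chinese Room" rulebook: purely formal symbol manipulation.
# The rules pair input strings with output strings by their shape alone;
# nothing in the program represents the meaning of any symbol.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # paired by shape only; the program never "knows" this is a greeting
    "今天下雨吗？": "今天没有下雨。",  # likewise: squiggle in, squoggle out
}

def chinese_room(symbols_in: str) -> str:
    """Return whatever string the rulebook pairs with the input shapes."""
    # Lookup by exact character shapes -- the only "understanding" available.
    return RULEBOOK.get(symbols_in, "对不起，我不明白。")  # default reply, also just a shape

print(chinese_room("你好吗？"))  # looks fluent to an outside observer
```

Enlarging the rulebook or making the matching cleverer changes nothing in kind: the procedure still pairs shapes with shapes, which is precisely why programming the man in the room with it adds nothing to his ability to understand Chinese.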
For a contrasting assessment, compare Stevan Harnad’s essay, "What's Wrong and Right About Searle's Chinese Room Argument":
The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place!): Normally, if someone claims that an entity -- any entity -- is in a mental state (has a mind), there is no way I can confirm or disconfirm it. This is the "other minds" problem.
-- at http://cogprints.org/1622/
Intentionality
Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not. (Searle 1980)
Intentionality: That a feeling, emotion, or mood is about something; its objectivity. A mood such as Angst is about the world as a whole, the undefined world in which an individual is situated. This situatedness is immediate and is not reducible either to cognition or to volition. The “location” of intentionality is best understood as a spatio-temporal field of consciousness and intersubjective experience.