The Chinese Room Thought Experiment: Comprehension and Consciousness
Conscious machines versus zombie computers
This is the last of three posts discussing philosopher John Searle's Chinese Room thought experiment. Searle first published the experiment in a 1980 paper, though he later simplified it to the form described in a 2009 article.
Searle proposed placing himself in a room in which he's passed papers with questions written in Chinese, and his task is to answer those questions in Chinese. He doesn't understand Chinese himself, only English. However, he has a list of Chinese characters and English instructions for correlating those characters with the questions in a way that allows him to write down the answers without understanding either the questions or the answers.
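To make the mechanics concrete, here's a minimal sketch of that setup in Python. The rule book and sample exchanges are hypothetical stand-ins of my own (no real language reduces to a lookup table), but they illustrate the core of Searle's point: the program maps symbol shapes to symbol shapes without any representation of what the symbols mean.

```python
# A toy "Chinese Room": answers come from matching incoming symbol
# strings against a rule book. The rules below are hypothetical
# stand-ins for Searle's English instructions; nothing here models
# the meaning of the Chinese characters.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天是星期三。",  # "What day is it?" -> "It's Wednesday."
}

def chinese_room(question: str) -> str:
    # The "operator" never interprets the symbols; it only checks
    # whether the incoming shapes match an entry in the rule book.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

The operator (here, the function) produces fluent-looking answers while standing in exactly the position Searle describes: manipulating symbols it does not understand.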
In my first post of this series, I focused on Searle's conjecture that programming a computer could produce a system that was competent at certain human-like tasks but did not actually possess understanding of those tasks. While such a system's performance might seem the work of something equivalent to a human mind, he believed it would still lack what he called intentionality: the capacity of the mind to represent objects and affairs in the world, to have internal states that are about or directed toward beliefs, desires, and perceptions of objects, events, or conditions in the world.
In the second post I focused on Searle’s conclusion that not only would such a system lack intentionality, but that any programmed computer system, no matter how powerful, would by its nature also lack intentionality. It could never understand or be aware of what it was doing. In this post I’ll discuss one of his main foundations for this conclusion.
The Biological Foundation of Mind
Searle has developed and refined his theory over the years, but the basics have remained the same. As discussed in the last two posts, underlying his theory is his belief in the primacy of biology to anything that could be said to possess cognition.
While he does consider humans to be machines of a sort, he believes they are a particular kind of machine whose biology is necessary for intentionality, i.e. understanding and awareness. Any duplication of the human mind would thus have to occur on a machine that physically duplicates this unknown mechanism buried within the biology of the brain.
In a 2014 review of two AI-related books in The New York Review of Books, Searle stated:
We do not now know enough about the operation of the brain to know how much of the specific biochemistry is essential for duplicating the causal powers of the original. Perhaps we can make artificial brains using completely different physical substances as we did with the heart.
The difficulty with carrying out the project is that we do not know how human brains create consciousness and human cognitive processes.
So although he acknowledges that we don’t understand much about how the brain works, he is quite confident that whatever is going on is not something that can be simulated — or synthesized — on a computer. As he explained in his original 1980 paper:
Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.
In other words, a mind has to exist on something that replicates the biological nature of the brain. It can’t be recreated on a computer system.
As I mentioned in the first post, such statements comprise an Ipse Dixit fallacy, as Searle provides no proof, logical or empirical, to back them up. They also amount to a Circular Argument fallacy, in that he’s basically stating that you can’t program a computer to create synthesized cognition because there’s more to cognition than what you can program a computer to synthesize.
The Magic in the Machine
Let’s put aside these initial concerns for now and consider the further implications of his theory. The mysterious property of biology that Searle hypothesizes to be responsible for our ability to understand is also the property he believes is responsible for our consciousness. This is why he frequently uses the word intentionality to describe the characteristic of the brain that a computer is unable to replicate. The word intentionality implies not only understanding but also perception and awareness — the fundamental characteristics of consciousness.
So to Searle, the claim that you can't create understanding by programming a computer also implies that you can't create consciousness that way. In fact, many consider the Chinese Room as much an argument against synthetic consciousness as against synthetic cognition.
This isn’t to say, however, that Searle believes that consciousness and understanding are caused by something outside or beyond the brain. Instead, he feels that there’s some physical quality inherent in the brain that computers lack, and it is this physical quality that is responsible for both understanding and the consciousness inextricably bound to it.
Searle has, in fact, stressed that he believes it may be possible to duplicate human cognition in an artificial substrate. He just believes that this substrate will have to replicate whatever underlying character of the brain gives rise to intentionality, and that whatever that is, it cannot be simulated on a computer. From his 1980 paper and 2009 article, respectively:
My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains.
Because we know that all of our cognitive processes are caused by brain processes, it follows trivially that any system which was able to cause cognitive processes would have to have relevant causal powers at least equal to the threshold causal powers of the human brain. It might use some other medium besides neurons, but it would have to be able to duplicate and not just simulate the causal powers of the brain.
One might assume that you could just simulate this physical quality of the brain on a computer as well, but Searle doesn’t think so. To him, the nature of programming and the nature of simulation make this impossible. He termed this belief biological naturalism and described it in detail in a paper of the same name that later appeared in the 2007 book The Blackwell Companion to Consciousness.
The Supernatural Mind
Like Searle, I think that consciousness is integral to understanding and to the intelligence required for Artificial General Intelligence (AGI). The last part of my functional definition of intelligence is the quality that allows an entity to acquire awareness of its own cognition and of itself as an independent and unique entity distinct from other entities and from its environment.
The idea that intentionality or consciousness can only occur in biological organisms, or at least in a substrate that is in some currently unknown way equivalent to a biological brain, is not unique to Searle. Physicist Roger Penrose is one of the more widely known proponents of the idea that there is some unique aspect to the brain that can’t be replicated on a computer. He first described his conjectures on this in the 1989 book The Emperor’s New Mind.
In that book, Penrose proposed that the mind was not algorithmic in nature and suggested that Gödel's incompleteness theorems were proof that there’s something in the human brain that can’t be replicated on a computer. He used this hypothesis along with the fact that we don’t really have any significant understanding of the mechanisms underlying consciousness to conclude that the human brain uses some sort of quantum mechanical mechanism for consciousness and consequently cognition. He later teamed up with anesthesiologist Stuart Hameroff to suggest that quantum mechanical interactions in the microtubules of the human brain are responsible for consciousness.
Many, including myself, find their arguments unconvincing. Gödel's incompleteness theorems are concerned with mathematical logic and the philosophy of mathematics, and applying them to the human mind is a category error that greatly exceeds the bounds of their applicability. There is also no direct scientific evidence to support the idea that quantum effects are responsible for consciousness or intelligence, nor is there any coherent theory as to how such effects would causally result in either.
There has long been an unfortunate propensity for some to resort to pseudoscientific explanations for things we currently can't explain any other way. There are few areas of inquiry where this applies more than the study of consciousness. Nobody knows what causes us to experience consciousness. No one knows how widespread the experience is among other living (or non-living) entities. Consciousness seems very strange to us.
Similarly, quantum mechanics seems very strange to us. Perhaps there is a connection, and someday we’ll discover some quantum mechanistic effect that is responsible for our consciousness. For the last several decades, though, quantum mechanics has been the closest thing we have to “magic” in the realm of science, the go-to explanation for anything that seems inexplicable to us. But grasping at magical explanations to fill the gaps in our knowledge is really just equivalent to admitting that we don’t really have any explanation at all.
The Domains of Consciousness and Understanding
Searle is a philosopher, though, and not a scientist like Penrose. He makes no attempt to prove his hypothesis, nor does he feel obliged to provide evidence to back it up.
Philosophy in general has a long and rich history of exploring the human experience and providing insights into ourselves and our society. It's less useful, however, when it comes to discovering the actual mechanisms of the physical universe. More specifically, it frequently runs afoul of the twin fallacies of Appeal to Ignorance and Argument From Incredulity.
The first manifests in the unfortunate tendency of some to assume that because something is currently technically impossible, it will always be technically impossible. The second can lead some to assume that because something is currently a mystery to science, it must involve some phenomenon beyond the reach and realm of science (or at least of any science currently conceived). These are both well-known fallacies, yet people fall prey to them over and over again.
Searle’s Chinese Room thought experiment has proven to be useful for showing that it’s possible to have a computer system that mimics aspects of intelligence without having any real intelligence at all. However, I think Searle’s ideas fall flat when attempting to discount the possibility of creating AGI on a computer.
There simply doesn’t seem to be any evidence or logic in Searle’s philosophical conjectures that preclude replicating on a computer whatever is going on in the human brain. To Searle, the very idea of simulation implies that the thing being simulated is not the real thing. But as I discussed in my last post, Searle’s confounding of computer simulation with computer synthesis is an Equivocation fallacy. The goal of AGI is not simulating intelligence on a computer — it’s synthesizing cognition on a computer.
When it comes to consciousness, Searle is not alone in hypothesizing some mysterious phlogiston-like constituent of the brain. But hypothesizing about phlogiston did not help scientists discover chemistry and the nature of combustion. Discovery of the mechanisms underlying the natural world requires theories that can predict results and experiments to provide those results.
Searle asserts that there's something missing even in an exact simulation of the brain. There are two possibilities here.
The first is that something is going on in the brain beyond what is knowable by science, something that cannot be analyzed or synthesized or reproduced. In other words, something supernatural. If that’s the case, there is not much that science can say about it. This seems unlikely given all the other things that previously seemed beyond what is knowable but are now known.
The second possibility is that something is happening in the brain that is not computable. Penrose hypothesized some kind of currently unknown quantum mechanical effect to explain the mechanism of the mind. But even if some quantum mechanical effect were integral to consciousness and cognition, there doesn’t seem to be any underlying theory as to why this quantum effect couldn’t then be simulated on a quantum computer.
And a quantum computer isn't even necessary. The field of computability theory is concerned with what can and can't be computed. Some problems aren't computable: typically those involving infinities, self-referential paradoxes, or questions that are formally undecidable, such as the halting problem. But anything that is computable can be computed on what's called a Turing machine, a simple and idealized model of a computer.
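To make the model concrete, here is a minimal Turing machine simulator sketched in Python. The example rule table is the classic two-state "busy beaver" machine, which writes four 1s on an initially blank tape and then halts; a finite rule table, a tape, and a read/write head are all the machinery the model needs.

```python
from collections import defaultdict

# A minimal Turing machine simulator: a rule table, a tape, and a head.
# The program below is the classic 2-state "busy beaver", which writes
# four 1s on an initially blank tape and then halts.

RULES = {
    # (state, symbol read): (symbol to write, head move, next state)
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, state="A", max_steps=1000):
    tape = defaultdict(int)  # blank cells read as 0
    head = 0
    for step in range(max_steps):
        if state == "HALT":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)], step

cells, steps = run(RULES)
print(cells, "after", steps, "steps")  # [1, 1, 1, 1] after 6 steps
```

Everything a modern computer does reduces, in principle, to steps of this kind.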
All modern computers, including quantum computers, are computationally equivalent to Turing machines: nothing any of them can compute lies beyond what a Turing machine can compute. This means that any quantum mechanical effect in a quantum computer can be simulated on a regular computer. While some types of problems are intractable on a regular computer, that's only because they would take too long to compute. Computing them on a regular computer isn't impossible, just impractical. The advantage of a quantum computer would be to tackle certain of those problems in a reasonable amount of time.
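To illustrate why, consider that a quantum computer's state is just a vector of complex amplitudes and its gates are matrix multiplications, both of which a classical machine can carry out exactly. Here's a rough sketch in Python (using NumPy) that simulates a two-qubit circuit producing an entangled Bell state, with no quantum hardware involved:

```python
import numpy as np

# Classically simulating a 2-qubit quantum circuit by tracking the full
# state vector. The vector doubles in size with each added qubit, which
# is why this becomes intractable at scale -- but it is ordinary
# computation, not something beyond computation.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on the first qubit
state = CNOT @ state                           # entangle the two qubits

probs = np.abs(state) ** 2
print(probs)  # [0.5 0.  0.  0.5] -- the Bell state (|00> + |11>)/sqrt(2)
```

The slowdown relative to real quantum hardware is exponential, but that is a question of speed, not of what can be computed at all.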
So if Searle is correct in his conjecture of biological naturalism and we assume he doesn’t think brains are supernatural, then they are instead doing something beyond the realm of computation. Unfortunately, he provides nothing to suggest how the brain could accomplish what it does in a non-computational, non-algorithmic way.
And so we reach the limits of what can usefully be concluded from Searle’s Chinese Room thought experiment. We are ultimately left somewhat adrift, given no solid mooring in the perplexities that power our minds, and with only the questionable assurance that understanding and consciousness lie on some distant shore that will inexplicably and forever exceed our computational reach.