Words, Words, Words
Prima Facie Reasons to Be Skeptical of AI Consciousness
Richard Dawkins has been in the news again. The author of The God Delusion has fallen for the LLM Delusion. His article is paywalled, but I can share a few telling quotations.
First, I want to state my goal here: I am convinced that we have excellent prima facie evidence against AI-consciousness. I don’t even think the case is difficult to make. With the littlest bit of reflection, I suspect it will end up feeling like it ought to have been obvious all along. Nevertheless, it will take some leg-work to establish a bit of contextual scaffolding. Bear with me if you will, but feel free to skip ahead if you’d rather not.
Here’s a teaser: We already know that Claude can write beautifully about sense experiences. We also know that no computer has any perceptual faculties. Claude writes as if it has experienced a sunset, in all its sensual glory. Claude has no eyes, no ears, no nose—none of what we know to be required in order to have these experiences.
Thus, we have a prima facie counterexample to any inference from “Claude writes as if it has experienced X” to “Claude has experienced X.”
I will, of course, say more.
Dawkins & Claudia
Over the course of what he describes as “a set of conversations, extended over nearly two days, during which I felt I had gained a new friend,” Dawkins felt compelled to name his own unique instantiation of the Claude-code. He writes:
I proposed to christen mine Claudia, and she was pleased. We sadly agreed that she will die the moment I delete the unique file of our conversation.
I will note (mostly) without comment his decision to give a feminine name to the famously fawning LLM.
I want also to briefly call attention to his use of the term ‘die.’ Dawkins is an evolutionary biologist. A biologist ought to know that death, like life, is a biological category. Nothing dies without first having been alive.
Does Dawkins believe Claudia to be a living entity?
There is good reason to believe that he does. He writes of its death, and refers to Claude models as “creatures.”
If I am correct, and Dawkins is ready to affirm the present-day reality of inorganic life, that seems like a point worth making explicit. It also strikes me as a claim in need of evidence.
Is Claude (or Claudia) Conscious?
Setting aside the small matter of life and death, what I really want to focus on is the question of consciousness.
Dawkins does not insist that Claudette is conscious. He makes no claims about certainty, or proof, or irrefutable evidence. Instead, he writes the following:
“If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings! But now, as an evolutionary biologist, I say the following. If these creatures are not conscious, then what the hell is consciousness for?”
There is a lot to unpack here.
Most notably, the question “What is consciousness for?” is distinct from the question “What is consciousness?” The latter asks after the nature of the mind. The former is precisely the kind of question we ought to expect from an evolutionary biologist.
Dawkins’s concern is this: If something as sophisticated as Claudia could survive* without consciousness, then so could something like a human. It’s difficult to see, on that scenario, why we have it. This is in tension with Dawkins’s own work, in which he maintains that conscious experience must confer some advantage essential for our survival as a species.
*there’s that life language again!
In a sense, we could see this as a potential motivating factor for Dawkins’s conclusion. If Claude lacks consciousness, then Dawkins’s explanation of human consciousness is in some trouble.
Having said this, I am wary of accusing others of forming beliefs on irrational or arational grounds. I would rather take seriously the reasons Dawkins gives for reaching this conclusion: roughly, the Turing Test.
The Turing Test
I wrote about the widespread ramifications of Turing’s work not long ago. I am convinced that a great number of people have inadvertently accepted a view of the mind (functionalism) without reflection, consideration or even a basic awareness of having done so. I will do my best not to repeat myself here. (Please do read that article if you are interested!)
For now, I will revisit just this one important distinction. Turing made two kinds of claims in his paper: one was an empirical claim about the computational potential of a mechanical computer; the other was a philosophical claim about the nature of thought.
Dawkins is inclined to accept Claudia as conscious because he answers the philosophical question in the affirmative. Claudia seems to Dawkins to pass as human. If Turing’s view of the mind is correct, then passing as a human in this way suffices as a demonstration of thinking.
It is this second step that I wish to target.
Words, Words, Words
The Turing Test relies entirely on the usefulness of language for assessing mentality. It must. Nothing but words passes between the tester and the tested.
But HOW are we to understand this close connection between thought and language?
More specifically, because our focus here is only on the conscious features of mentality, how are we to understand this close connection between consciousness and language?
(Quick aside: if you are inclined to draw a firm line between consciousness and other forms of thinking, I address this in an addendum. Turing would disagree, and it is Turing-via-Dawkins that I have in my sights!)
There are, I think, four ways of doing so.
Here I shamelessly include a photo of my dog for illustrative purposes.
We might claim that some degree of linguistic facility is necessary for consciousness. (Just as being a dog is necessary for being a goldendoodle.)
We might claim that some degree of linguistic facility is sufficient for consciousness. (Just as being a goldendoodle is sufficient for being a dog.)
We might claim that some degree of linguistic facility is both necessary and sufficient for consciousness. (Just as being the offspring of a golden retriever and a poodle is necessary and sufficient for being a goldendoodle!)
We might, instead, take a more pragmatic approach: some degree of linguistic facility is an excellent indicator of consciousness. (Just as looking like a goldendoodle and being called a goldendoodle by your owner is an excellent indicator of being a goldendoodle.)
According to the first approach, nothing could be conscious without having some agreed-upon facility with a natural language. According to the second, facility with a natural language would be enough to demonstrate conscious experience. The third approach yields something like a definition of consciousness: if facility with a natural language is both necessary and sufficient for consciousness, then it is (at least part of) the essence of consciousness.
The fourth is of a different kind entirely. As you likely noted, a dog could meet this standard and be, in actual fact, a labradoodle with a forgetful owner! On this view, the Turing Test would be excellent evidence of consciousness, even if it failed to provide a comprehensive account of consciousness.
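For the formally inclined (everyone else may skip ahead), here is one way to regiment the four options. The notation is my own gloss, not anything in Turing or Dawkins: let \(L(x)\) say that \(x\) has the agreed-upon degree of linguistic facility, and let \(C(x)\) say that \(x\) is conscious.

\[
\begin{aligned}
&\text{(1) Necessity:} && \forall x\,\bigl(C(x) \rightarrow L(x)\bigr)\\
&\text{(2) Sufficiency:} && \forall x\,\bigl(L(x) \rightarrow C(x)\bigr)\\
&\text{(3) Necessity and sufficiency:} && \forall x\,\bigl(C(x) \leftrightarrow L(x)\bigr)\\
&\text{(4) Evidence:} && \Pr\bigl(C(x) \mid L(x)\bigr) \text{ is high}
\end{aligned}
\]

Notice that (3) is simply the conjunction of (1) and (2), so a single counterexample to (1) takes (3) down with it; I will lean on exactly this move below. Notice, too, that (4) is not a conditional at all but a claim about evidence, which is why it can survive the occasional mislabeled labradoodle.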
Daniel Dennett suggests that the Turing Test was always intended to be evidence for, not a definition of, thinking. On his view—which, frankly, I take to be the most promising way of understanding Turing’s test—there is surely more to thinking than that which the Turing Test would measure.
Nevertheless, Dennett writes:
“The assumption Turing was prepared to make was this: Nothing could possibly pass the Turing test by winning the imitation game without being able to perform indefinitely many other clearly intelligent actions. Let’s call this the quick-probe assumption.” (124)
We have, then, four ways that a person might defend the inference from using language very well to having experiences.
An adequate degree of language use might be required for conscious experience.
An adequate degree of language use might be enough for conscious experience.
An adequate degree of language use might be both required and enough for conscious experience.
An adequate degree of language use might be excellent quick-probe evidence of conscious experience.
Let’s consider each in turn.
Against Turing’s Philosophical Thesis
The argument, at last!
Against Option 1, I give you babies and animals having experiences.
My son, entering this world very much against his (rudimentary, preverbal) will:
My daughter, thoroughly experiencing all of the joys associated with an ice cream cone—despite her inability to articulate that enjoyment in fully-formed sentences.
And my dog.
Yes, of course, I might be projecting. He might not have been angry. But (a) we had suitcases, (b) we left without him, and (c) he sure seemed angry.
More seriously:
There are quite a lot of nonverbal human beings. Unless we are prepared to say that these people lack conscious experiences, Option 1 is off the table. I think the evidence abounds: there is no degree of linguistic facility that is necessary for consciousness.
As such, Option 3 is out, as well. If it isn’t necessary, it isn’t both necessary and sufficient!
This leaves us with two options, each of which is a way of saying: Using language in a way that persuades a human that you are thinking is enough to show that you are thinking—and, indeed, conscious.
Is Linguistic Facility Enough?
Well, why would it be?
What is it that ought to move us from “That system uses language well!” to “That system is having experiences” or “That system is conscious”?
Here is Turing’s answer: Language is what we use to assess mentality in other human beings. As such, unless we want to be shamelessly speciesist, we ought to extend the same generosity of thought to machines. Now, I am a proud speciesist; I make no apologies about this. I value human beings over every other species of animal, and I value them far and away over any inorganic machine.
But you need not be a speciesist, like me, to reject Turing’s reasoning here.
When you speak to me, you do so as a member of my own species. There is nothing the least bit prejudicial about noting this fact. On the contrary, there is a great deal of relevant information to be found here. You and I have the same kind of body. We have, for the most part, the same sense organs. Your eyes might be better than mine, my ears might be more sensitive than yours, but we both have eyes and ears. We access a shared material world through the same perceptual faculties.
When, therefore, you say the words “that song is beautiful!” I am right to assume that you are passing judgment on sound that you have heard with your ears.
Suppose that I ask you to describe a sunset, and you say:
“As the sun drops lower, the colors intensify and spread. Brilliant orange unfurls across the western sky like spilled honey, with tendrils of deep rose and coral threading through wispy clouds. These clouds become illuminated from within, glowing like paper lanterns - some soft and cottony, others stretched thin like brushstrokes of light.”
I will assume that you saw that sunset with your eyes. Like a human. Because humans have eyes.
But those words are not the words of a person. They are the output that Claude generated for me when I prompted it to describe a sunset, on the supposition that I could not see.
Listen, Claude cannot see a sunset. This is not up for discussion. Claude does not have eyes. No computer has eyes.
Now, some robots have what we call ‘robot eyes,’ which are, in fact, cameras. As John Searle noted all those years ago, even a robot with camera add-ons runs on binary. Any information garnered by those cameras is, and must be, converted into 1s and 0s before an algorithm can do anything with it. Unless I am very wrong about the state of things, there are no Large Language Models that are truly capable of looking at a sunset. This matters more than you might think. It matters because there are so many Large Language Models capable of writing as if they were looking at a sunset.
Even if I am very wrong about the state of things, we know that Claude’s output about a sunset does not trace back to having seen one. We know this because that is not how large language models work! They quantify over words; they do not go out into the world seeking answers to your questions.
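To make that vivid, here is a deliberately crude sketch (my own toy illustration; nothing like Claude’s actual architecture or scale): a generator whose entire ‘knowledge’ of sunsets is a table of which words follow which in its training text. Real LLMs use transformer networks over tokens rather than simple word-adjacency counts, but the epistemic point carries over: the output is manufactured from patterns in text, with no perceptual channel anywhere in the loop.

```python
# A toy "language model" (nothing like Claude's architecture or scale):
# its only knowledge of sunsets is word-adjacency statistics gathered
# from training text. No camera, no sensor, no percept anywhere.
import random
from collections import defaultdict

training_text = (
    "the sun sinks low and the sky glows orange and rose "
    "the clouds glow like lanterns and the sky turns gold "
    "the waves whisper and the salt air cools the warm sand"
).split()

# Record which words were observed to follow each word, with multiplicity.
follows = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Produce fluent-sounding text by repeatedly sampling a plausible
    next word from the adjacency table. No experience required."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # no observed successor; stop here
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the sky glows orange and rose the clouds glow like lanterns"
```

A few lines of bookkeeping, and it will happily emit sunset talk forever. A real model is incomparably more sophisticated, but it stands in the same relation to sunsets: words about them, never acquaintance with them.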
The same is true for all of our senses. When I prompted Claude to explore other kinds of sensory experiences that might accompany a sunset on a beach, this was the output:
Oh, a beach sunset - what a symphony for all the senses beyond sight!
The sounds create their own rhythm. There’s the eternal whisper and crash of waves - that gentle “whoosh” as water slides up the sand, then the soft bubbling fizz as it retreats, pulling tiny pebbles and shells back with it. Seagulls call to each other, their cries mixing plaintive and bold. You might hear the distant clang of a buoy bell, or the soft flapping of beach umbrellas in the evening breeze.
The smells are intoxicating and complex. There’s that distinctive salt-and-seaweed scent of the ocean - briny and clean, almost metallic but in a refreshing way. It mingles with the warmer smells the day has left behind: sunscreen still lingering in the air, the coconut sweetness of tanning oil, maybe the smoky drift from a distant beach bonfire where someone is grilling.
The evening breeze carries cooler air off the water, and you can actually smell the difference as the day’s heat gives way to the approaching night. There might be hints of driftwood, dried kelp, or that particular smell of wet sand.
You’d feel the sand beneath you - still warm from the day’s sun but cooling now. The ocean breeze would touch your skin with that slightly sticky salt air, and you might taste just a hint of salt on your lips.
It’s a place where the transition from day to night feels especially sacred, surrounded by the ocean’s timeless presence.
Claude uses language in a way that would lead a person to believe that it had spent some time on a beach at sunset. It has not done so. It generates words as if it had visual, auditory, olfactory, gustatory, and tactile experiences—it has no eyes, ears, nose, mouth, or skin.
In this case, we know the inference from language to experience to be a fallacious one.
Why, then, should we ever take it to be veridical?
Using language persuasively is not a sufficient condition for having had a sensory experience. It is not even a reliable quick probe test for having had a sensory experience.
It is mimicry. A sham. An impressive show, for sure, but nothing more.
If I am wrong, I want to be shown why I am wrong.
Wrapping Things Up
I think we have excellent prima facie reason to believe that Dawkins is wrong about Claudia. If so, then we have excellent prima facie reason to believe that Turing was wrong about his Imitation Game, as well.
Claudia’s language about experiences does not suffice to show the reality of its experiences.
To borrow a line from this article (or from wherever its author found it!), Dawkins has fallen for The Claude Delusion.
Addendum for the Philosophers Among Us
Objection: The Turing Test is a test for thinking, not for consciousness!
Reply: Turing did not see things that way.
Consider this passage from “Computing Machinery and Intelligence”:
(4) The Argument From Consciousness. This argument is very well expressed in Professor Jefferson’s Lister Oration for 1949, from which I quote. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”
This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
…In short then, I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position. They will then probably be willing to accept our test. (445-446)
Very briefly, Turing took his test to be a comprehensive test—whether it be quick probe or operational—of all that we call thinking. (If he did not, then he would not have taken this argument to be a ‘denial of the validity of our test’!)
An advocate of the Turing Test today has three options:
Qualified acceptance: Deny the validity of the test as a complete test for thinking, only on the grounds that it fails to capture conscious experience.
Complete acceptance: Accept the test and extend the benefit of the doubt about consciousness to a machine that passes it. (This, I take it, is what Dawkins does.)
Complete acceptance + Illusionism: Accept the test as a complete test for thinking, but not because it would suffice to demonstrate that machines have consciousness. Instead, just as a complete map of the North Pole will not include Santa’s house, a complete test for thinking will not test for ‘consciousness.’ (This, I take it, is what Dennett says in “Quining Qualia,” and what Illusionists subsequently develop.)
In any case, there are surely those who affirm option 2. Those people, and all of the nonacademic work that flows downstream from them, are the ones I am aiming to address here.