Why we should be talking about zombie reasoning
Everyday talk of AI doing things like reasoning is wrong and risky!
Everywhere I look, I see people referring to AI doing things like reasoning. I see this in casual chat. I see it when I use AI products. And I see it in formal writing like this Nvidia glossary entry:
“What Is AI Reasoning? AI reasoning is how AI systems analyze and solve problems by evaluating various outcomes and selecting the best solution, similar to human decision-making.”
I’ve written before about how I think the word ‘zombie’ should be placed in front of all such verb uses. AI is not reasoning, on my view; AI can only do ‘zombie reasoning’! Neither is AI evaluating or selecting.
I think the word ‘zombie’ should be added here because I do not believe that AI has the kind of interiority that is necessary for doing these things. That is, I don’t believe that AI is reasoning or evaluating — or even selecting — in anywhere near the sense we ordinarily talk about such things. Rather, I believe that AI is simply producing outputs of the kinds that a human doing such things would produce.
Don’t get me wrong, I think it’s amazing that AI can produce such outputs! And that these outputs are getting better, in so many ways, extremely quickly.
But if you are into philosophy, then you will see that I’m making an implicit distinction here between conscious activity understood as a phenomenological matter, and conscious activity understood as a functional matter. You will also see that I am implicitly claiming that activities like reasoning and evaluating, and even selecting — on an ordinary use of such terms — can only be done by the kinds of things that have phenomenological consciousness.
You don’t need to know about all that meta stuff to get my point, however. Rather, you just need to read the following, which I wrote elsewhere on Substack a while back:
“to use a standard philosopher analogy, AI reasoning is ‘zombie reasoning’. That is, in the same way you can imagine a ‘zombie you’, who looks just like you, and goes about your life doing the things you do, but has no interiority, you can think of ‘zombie reasoning’ as reflecting the outputs, and even some of the processes, of the reasoning of a free agent — but also without recourse to interiority. It’s the same with AI ‘deciding’ and ‘thinking’ and ‘knowing’, and some of the less introspective concepts like ‘acting’. The reason it makes sense to put these words in inverted commas, that is, is because the AI versions of these activities are shadow versions of what we humans are capable of. Of course, it’s hard for us to conceive what it would be like to do these things on the shadow level, but AI has it much harder, because it cannot conceive at all.”
I should note at this point that there are, of course, important long-running philosophical debates about the extent to which non-human animals can do things like reasoning — and indeed, less mentally complex activities like thinking, and even feeling.
But I simply don’t accept that most of the people who talk about AI doing any of these things — reasoning, selecting, thinking, feeling, whatever — truly believe that AI is doing them in any more than a ‘zombie’ way. Whereas I do accept that many people think that at least some non-human animals can do some of these things in a non-zombie way. I even count myself as one of those people, these days!
The deepest reason I believe that most people do not think that AI can do any of these things in a non-zombie way relates to my assumption that most people do not think that AI is alive. Sure, there are some crazy people out there who do believe that AI is alive. And sure, there are some even crazier people who believe that AI doesn’t have to be alive in order to do things like reasoning in non-zombie ways! Astonishingly, none of the people I’ve met who believe this second thing — including, most recently, a pretty serious philosopher — have been able to explain to me how on earth this could be. I’m still waiting!
But I also see this phenomenon — this dropping in of these self-awareness-requiring terms — in pieces of writing where the writer is also keen to stress that they are not one of the crazy people.
I saw something like that, for instance, in the highly enjoyable piece Scott Alexander published yesterday about Moltbook:
“We can debate forever - we may very well be debating forever - whether AI really means anything it says in any deep sense. But regardless of whether it’s meaningful, it’s fascinating, the work of a bizarre and beautiful new lifeform. I’m not making any claims about their consciousness or moral worth. Butterflies probably don’t have much consciousness or moral worth, but are bizarre and beautiful lifeforms nonetheless. Maybe Moltbook will help people who previously only encountered LinkedInslop see AIs from a new perspective.
And if not, at least it makes the Moltbots happy:”
What can it possibly mean for something to make the Moltbots happy, if the Moltbots are not the kind of thing that has internal awareness? Okay, I’m reading extensive implicit claims into what Alexander is saying here, to come to this conclusion about his position! And again, there are important long-running philosophical debates about whether, for instance, dogs can be happy in the ordinary sense of the term.
But how could you be happy without being alive? Does Alexander really mean that the Moltbots are alive, when he describes them as “lifeforms”? That they are living things, in the sense that we ordinarily understand the term ‘living’? And even if he does believe this astonishing thing (!), then how could the Moltbots be happy without having any of the interiority that only phenomenologically conscious kinds of living things can have?
In other words, it’s hard not to come away from the Alexander extract thinking that Alexander is saying something like: ‘Hey, even if the Moltbots have no inner life, Moltbook makes them happy!’. Or less strongly: ‘Hey, I don’t need to get into discussing the “consciousness or moral worth” of the Moltbots, or whether or not they are able to mean anything they write, or anything like that, to be able to conclude that something can make them happy’.1
This is bizarre!
But why does it matter? Why am I here getting so exercised about language use, when I could be reading Moltbook, or looking at paintings in a nearby gallery?
Well, one of the things that is particularly special about being human is that we have this capacity for self-awareness. Again, I’m not claiming that no non-human animals have this capacity — I don’t need to get into that here. Rather, I’m simply claiming that the capacity for self-awareness is a central, obvious, and incredibly significant feature of being human.
If you are reading this piece — really reading it, rather than somehow just having my words ‘inputted’ into your brain — then you are self-aware. And, a fortiori, this self-awareness is necessary for us humans to be able to do more mentally complex things like deliberating and reasoning.
What would it mean to reason if you were not self-aware? If an evil scientist took away your self-awareness for five minutes, and somehow during that period they manipulated your brain into receiving and ordering some signals, then would you count that as reasoning? I mean, would you count the thing that had been done to you as ‘you reasoning’, even if it had led to a neat little ‘reasoned’ set of conclusion statements, produced by your manipulated brain?
And what if we were to turn to an example that featured no mysterious evil scientist manipulating your brain in the kind of way that’s reserved for philosophical thought experiments? I mean, what if we considered an instance in which your brain seemingly ‘came to some conclusion’ without any kind of external manipulation or self-aware input? Think here of those ‘automatic’ behaviours you sometimes carry out — whether it’s your leg jumping when the doctor hits your knee with a tiny mallet, or your hands turning the steering wheel even though you’re not really focusing on the road. Would you count those behaviours as the outputs of reasoning, ordinarily understood?
Without any self-aware deliberative element, reasoning becomes a zombie matter. Again, this isn’t to degrade the value of zombie reasoning, including its interest to those of us obsessed by the many philosophical questions arising! Rather, it is to emphasise the special nature of reasoning as we ordinarily use the term.
The specialness of reasoning doesn’t simply track an astonishing feature of being human — this way in which we can reflect on things, and weigh them in our minds! It also has important moral implications. If you have reasoned about whether to act in some way, for instance, then this bears on the moral standing of your ensuing action. If you reason about whether or not to kill me, and then you kill me acting on that reasoning, then it becomes possible to count you as guilty in a way that a non-reasoning thing could never be counted. You become, as we philosophers like to say, a candidate for blameworthiness.
Reasoning is one of our most significant capacities as humans. Our capacity for reasoning sets normative constraints on how we live together. It has crucial implications for how we should treat the other things around us. For how we should treat ourselves. For how we should treat AI! This provides just one reason why we should refrain from loosely using the term ‘reasoning’ in ways that imply that non-reasoning things can reason. And it applies to how we should speak of AI activity, much more generally.
Okay, anthropomorphising the ‘actions’ of non-human things is hardly anything new. People talk about cars in such ways. They talk about toys in such ways. They certainly talk about robots in such ways. But AI is much more slippery than any of these things. Even young children can grasp that the car does not choose to drive along the road. Even young children can grasp that cutting their toys with scissors is different from cutting their siblings. And when children cannot grasp such things, it is urgent that we teach them!
Talking about the ‘actions’ of AI in loose ways comes with serious epistemic risk, therefore. Doing so will deaden our awareness of the truths of the revolutionary moment in which we live. It will leave us open to manipulation by people with an interest in covering up the ways in which AI is developing. It will lead us to miss out on the rich and exciting intellectual opportunities that the development of AI offers us all — including opportunities to refine our thinking about important matters such as consciousness.
Now, I’m not so pessimistic and paternalistic as to propose that we must all start using my ‘zombie’ language in order to save humankind! And I’m not so rigid as to argue that the meanings of words can never change. But I firmly believe that giving up on the ordinary way in which we humans have used crucial terms like ‘reasoning’, for thousands of years, does not come cheap, on any level.
If we want, as a species, to discuss the role that AI is playing — and should play — in our lives, then we need to get much better at talking about it.
Thanks to GPT for the zombie reasoning picture!
1. Alexander’s implied conflation of whether a Moltbot can ‘mean’ something it says with whether or not the things the Moltbots say are ‘meaningful’ (or whether or not the Moltbots themselves are meaningful) is also loose and unhelpful! Also, I won’t get into this here, but I’ve written several times previously about the tricky matter of AI individuation: I do not currently believe that AI is ever instantiated as an individuated thing, and this strengthens my belief that AI is not conscious.



The argument seems to be this:
1. People who think AI is alive are crazy
2. People who think AI can reason without being ‘alive’ are really crazy
3. If people think AI is [X] and are crazy, then X is false
4. Therefore, AIs cannot reason without being alive
5. Therefore, AI is not alive
6. In conclusion, AIs cannot reason.
Yeah, not convinced lol. Let me explain why.
1. You seem to have a special language of 'real' as opposed to 'zombie' doing things. You can read. Or you can really read. You can decide. Or you can 'decide'. You can walk down the street to buy a cornetto like a zombie, or you can walk down the street not like a zombie. You define reasoning as requiring a subject's self-awareness, or phenomenological consciousness. But if 'real reasoning' requires private phenomenology, then the criteria for reasoning are not publicly accessible. And ordinary language (especially the kind of usage policing you're doing!) needs *public criteria*, and defensible ones at that.
You invoke the zombie language, but zombies are exactly what show you can't make the phenomenology of subjective consciousness the criterion; because if zombies are conceivable, then, for all you know, everyone but you is a zombie. Yet, linguistically, we still (correctly) apply 'reasons' and 'decides' on functional/behavioural grounds to humans (and I imagine that figuring out whether AI has the appropriate phenomenological character would be harder still). So given the zombie framing, the publicity criterion of language pushes you away from your conclusion.
2. You appeal to 'ordinary' language for reasoning: but ordinary language for 'reasoning' is broader than you claim, and covers your cases. People use 'reasoning' to mean means-end planning, efficiency at achieving goals, rule-governed inference, sensitivity to reasons, etc. Your automatic knee-jerk/driving examples don't support your conclusion either. They aim to show that without self-aware deliberation, it's not reasoning. But the examples don't isolate self-awareness as the deciding factor! They mostly isolate a kind of inferential integration or goal-directed planning.
3. Your complaints about 'lifeforms' seem misplaced and unconnected aside from rhetoric; and yet the argument at the top would imply that a good definition of life be provided and discussed. I'm not sure Alexander meant anything by 'happy' except rhetorically, in the last sentence of the blog, as a kind of sign-off; meanwhile you criticize his use of 'lifeforms' and 'life', which you refrain from defining or discussing. 'Life' has a very strict biological definition, and machines don't get anywhere close to meeting it. But I doubt that the 'crazy' people you have in mind are deemed crazy because they believe that the AIs respire and metabolize chemicals. Rather, I suspect they are deemed crazy because of another, more esoteric definition of life (one presumably more connected to 'phenomenological consciousness') that they fail to meet. But if so, this makes it hard for you to use 'life' any way but circularly: AI can't reason unless it's conscious; it can't be conscious because it isn't 'alive' (but actually 'alive' just means conscious). A more charitable interpretation would be that you're suggesting that phenomenological consciousness can *only* be instantiated via the metabolism of chemicals. But if that's supposed to be obvious from the sentence alone (because no argument is provided), I'm not seeing it.
4. You make a slight slide between a metaphysical claim ('AI cannot reason, for vague self-awareness-related reasons') and a pragmatic point ('we shouldn't say it reasons, because it confuses people'). But the arguments you use to justify the linguistic usage are separate from the metaphysical claims you make: in theory, one could concede the pragmatic worry while rejecting the metaphysical claim.
5. For example, you suggest the utility of differentiating reasoning as the kind of thing that grounds moral blame and interpersonal/social constraints (so that conflating it with another kind of reasoning means we lose the importance of this). But I think this ties reason too tightly to intuitive conceptions of moral status and blame. Children may be more responsive to reasons than some other animals, or even some people, yet are less blameworthy. And they have less reasoning than most people but as much moral worth. So there isn't as strong a link as you try to make out. Moreover, I think that drawing a hard line between self-aware/conscious reasoning and a more instrumental, rule-governed kind misses the *point* of the practical constructivist moral theorists: their point is that we can construct morality from reason *because* it is objective and non-phenomenological, and this all comes from just accepting an instrumental, rule-governed reason, and nothing more. This foundation for practical-reason-based morality gets obscured under your view. (I would also, separately, reject the idea that blame is a moral property, but alas!)
Moreover, you take this idea as saying that a specific account of reasoning sets normative constraints on how we live together, and hence has implications important enough to make it worthwhile keeping the linguistic labels separate. But I would wager that even un-self-aware reasoning (whatever 'self-aware' means) would yield the same normative constraints, because reason is universal. So why would reasoning un-self-aware-ly give different normative constraints? No argument is given for why it would.
If you want a pragmatic linguistic divide, then just say 'conscious' or 'self-aware' reasoning, as opposed to inferential, instrumental, rule-governed reasoning. That's fine. But 'reason' is a label with a very, very broad history and scope, and I don't think some discomfort with AI warrants putting 'zombie' before every kind of reasoning that isn't done by whatever Rebecca Lowe decides has phenomenological character.
I recently heard somebody (Rebecca Newberger Goldstein?) suggest that "living things" resist increasing entropy. They take in low-entropy energy, such as sunlight, and maintain or grow local pockets of order. If that concept is at all useful, AI systems running in data centers are the opposite of living. They take in low-entropy energy, output a small low-entropy response, and also output huge quantities of high-entropy heat.
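To make the bookkeeping behind that picture explicit (a minimal sketch of the standard second-law accounting, not anything specific to Goldstein): a system can maintain or lower its own entropy only by exporting at least as much entropy to its surroundings, typically as heat Q dumped at ambient temperature T:

\[
\Delta S_{\text{total}} = \Delta S_{\text{system}} + \Delta S_{\text{environment}} \ge 0,
\qquad
\Delta S_{\text{environment}} \approx \frac{Q}{T}.
\]

On this accounting, exporting high-entropy heat is the unavoidable price of maintaining any local pocket of order.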
Edit to Add: The Star Trek episode "What Are Little Girls Made Of?" covers the topics and views in this column. Almost 60 years ago. Amazing that technology is getting close. Highly recommend a watch.