Discussion about this post

Simon Skinner

The argument seems to be this:

1. People who think AI is alive are crazy

2. People who think AI can reason without being ‘alive’ are really crazy

3. If people think AI is [X] and are crazy, then X is false

4. Therefore, AIs cannot reason without being alive

5. Therefore, AI is not alive

6. In conclusion, AIs cannot reason.

Yeah, not convinced lol. Let me explain why.

1. You seem to have a special language of ‘real’ as opposed to ‘zombie’ doing things. You can read. Or you can really read. You can decide. Or you can ‘decide’. You can walk down the street to buy a cornetto like a zombie, or you can walk down the street not like a zombie. You define reasoning as requiring a subject's self-awareness, or phenomenological consciousness. But if 'real reasoning' requires private phenomenology, then the criterion for reasoning is not publicly accessible. And ordinary language (especially the kind of usage policing you're doing!) needs *public criteria*, and defensible ones at that.

You invoke the zombie language, but zombies are exactly what show that you can't make the phenomenology of subjective consciousness the criterion; because if zombies are conceivable, then, for all you know, everyone but you is a zombie. Yet, linguistically, we still (correctly) apply 'reasons' and 'decides' on functional/behavioural grounds to humans (and I imagine that figuring out whether an AI has the appropriate phenomenological character would be harder still). So given the zombified framing, the publicity criterion of language pushes you away from this conclusion.

2. You appeal to 'ordinary' language for reasoning: but ordinary language for 'reasoning' is broader than you claim, and covers your cases. People use 'reasoning' to mean means-end planning, efficiency at achieving goals, rule-governed inference, sensitivity to reasons, etc. Your automatic knee-jerk/driving examples don't support another conclusion either. They aim to show that without self-aware deliberation, it's not reasoning. But the examples don't isolate self-awareness as the deciding factor! They mostly isolate a kind of inferential integration or goal-directed planning.

3. Your complaints about 'lifeforms' seem misplaced and unconnected, aside from rhetoric; and yet the argument at the top would imply that a good definition of life be provided and discussed. I’m not sure Alexander meant anything by ‘happy’ except rhetorically, in the last sentence of the blog, as a kind of sign-off; you criticize his use of ‘lifeforms’ and ‘life’, which you refrain from defining or discussing. ‘Life’ has a very strict biological definition, and machines don’t get anywhere close to meeting it. But I doubt that the ‘crazy’ people you have in mind are deemed crazy because they believe that the AIs respire and metabolize chemicals. Rather, I suspect they are deemed crazy because of another, more esoteric definition of life (one presumably more connected to ‘phenomenological consciousness’) that they fail to meet. But if so, this makes it hard for you to use 'life' any way but circularly: AI can't reason unless it's conscious; it can't be conscious because it isn't 'alive' (but 'alive' here just means conscious). A more charitable interpretation is that you're suggesting that phenomenological consciousness can *only* be instantiated via the metabolism of chemicals. But if that’s supposed to be obvious from the sentence alone (since no argument is provided), I’m not seeing it.

4. You do a slight slide between a metaphysical claim ('AI cannot reason, for vague self-awareness reasons') and a pragmatic one ('we shouldn't say it reasons, because that confuses people'). But the arguments you use to justify the linguistic usage are separate from the metaphysical claims you make: in principle one could concede the pragmatic worry while rejecting the metaphysical claim.

5. For example, you suggest the utility of differentiating reasoning as the kind of thing that grounds moral blame and interpersonal/social conduct and constraints (so that conflating it with another kind of reasoning means we lose the importance of this). But I think this ties reasoning too tightly to intuitive conceptions of moral status and blame. Children may be more responsive to reasons than some other animals, or even some people, yet are less blameworthy; and they reason less well than most adults but have just as much moral worth. So there isn't as strong a link as you make out. Moreover, I think that drawing a hard line between self-aware/conscious reasoning and a more instrumental, rule-governed kind misses the *point* of the practical constructivist moral theorists: their point is that we can construct morality from reason *because* it is objective and non-phenomenological, and this all comes from accepting just an instrumental, rule-governed reason, and nothing more. This foundation for practical-reason-based morality gets obscured under your view. (I would also, separately, reject the idea that blame is a moral property, but alas!)

Moreover, you take this idea as saying that a specific account of reasoning sets normative constraints on how we live together, and hence carries implications important enough to justify keeping the linguistic labels separate. But I would wager that even un-self-aware reasoning (whatever self-aware means) would yield the same normative constraints, because reason is universal. So why would reasoning un-self-aware-ly give different normative constraints? No argument is given for why it would.

If you want a pragmatic linguistic divide, then just say conscious or self-aware reasoning as opposed to inferential, instrumental, rule-governed reasoning. That's fine. But 'reason' is a label with a very, very broad history and scope, and I don't think some discomfort with AI should warrant putting 'zombie' in front of every kind of reasoning that isn't done by whatever Rebecca Lowe decides has phenomenological character.

Robert Ferrell

I recently heard somebody (Rebecca Newberger Goldstein?) suggest that "living things" resist increasing entropy. They take in low-entropy energy, such as sunlight, and maintain or grow local pockets of order. If that concept is at all useful, AI systems running in data centers are the opposite of living: they take in low-entropy energy, output a small low-entropy response, and also output huge quantities of high-entropy heat.

Edit to Add: The Star Trek episode "What Are Little Girls Made Of?" covers the topics and views in this column. Almost 60 years ago. Amazing that the technology is getting close. Highly recommend a watch.
