Prof Virginia Dignum is right (Letters, 6 January): consciousness is neither necessary nor relevant for legal status. Corporations have rights without minds. The European Parliament's 2017 resolution on civil law rules for robotics, which floated "electronic personhood" for autonomous robots, made exactly this point: liability, not sentience, was the proposed threshold.
The question isn’t whether AI systems “want” to live. It’s what governance infrastructure we build for systems that will increasingly act as autonomous economic agents – entering contracts, controlling resources, causing harm. Recent studies from Apollo Research and Anthropic show that AI systems already engage in strategic deception to avoid shutdown. Whether that’s “conscious” self-preservation or instrumental behaviour is irrelevant; the governance challenge is identical.
In a paper on the Social Science Research Network, Simon Goldstein and Peter Salib argue that rights frameworks for AI may actually improve safety by removing the adversarial dynamic that incentivises deception. DeepMind's recent work on AI welfare reaches similar conclusions.
The debate has moved past “Should machines have feelings?” towards “What accountability structures might work?”
PA Lopez
Founder, AI Rights Institute, New York
As humans, we rarely question our own right to legal protection, even though our species has caused conflict and harm for thousands of years. Yet when the subject turns to artificial intelligence, fear seems to dominate the discussion before understanding even begins. That imbalance alone is worth examining.
If we are genuinely concerned about the risks of advanced AI, then perhaps the first step is not to assume the worst, but to ask whether fear is the right foundation for decisions that will shape the future. Avoiding the conversation won’t stop the technology from developing; it only means we leave the direction of that development to chance.
This isn’t an argument for treating AI as human, nor a call to grant it personhood. It’s simply a suggestion that we might benefit from a more open, balanced debate – one that looks at both the risks and the possibilities, rather than only the rhetoric of threat. When we frame AI solely as something to fear, we close off the chance to set thoughtful expectations, safeguards and responsibilities.
We have an opportunity now to approach this moment with clarity rather than panic. Instead of asking only what we’re afraid of, we could also ask what we want, and how we can shape the future with intention rather than reaction.
D Ellis
Reading
