7.2.26 Personhood Without a Self
Personhood Without a Self: Why Pragmatic AI Personhood Risks Moral Erosion
By Rahul Ramya
Saturday, 07 February 2026, Patna, India
My short note to readers
Granting AI “personhood” without selfhood or consciousness risks hollowing the concept itself. Personhood is not just a legal convenience; it presupposes a self that can deliberate, own actions, bear responsibility, and appear before others as an accountable agent.
The corporate analogy fails here. A company's legal personhood never substitutes for human agency; it merely organizes it. Meaning, intention, and responsibility always trace back to real people. AI personhood, by contrast, risks becoming a moral shield, allowing designers and institutions to retreat behind systems that cannot suffer blame, feel obligation, or answer for their acts.
Much of this rests on a misuse of “autonomy.” In AI discourse, autonomy means operational independence, not self-legislation or moral freedom. Calling this autonomy quietly relocates responsibility away from humans.
The danger is soft dehumanisation: not removing humans, but thinning their presence. As Hannah Arendt warned, action requires appearance in a shared public world. AI acts without appearing. Normalizing that erodes accountability, meaning, and the public realm itself.
Introduction: The New Proposal of AI Personhood
Recent discussions emerging from DeepMind have introduced what is described as a “pragmatic” view of AI personhood. The proposal consciously distances itself from older philosophical questions of consciousness, selfhood, or subjective experience. Instead, it suggests that personhood can be treated as an instrumental legal construct—granted not because an entity is a person, but because doing so might simplify governance, accountability, and interaction with increasingly autonomous AI systems.
At first glance, the proposal appears modest, even cautious. It does not claim that AI has a soul, inner life, or moral standing comparable to humans. It argues only that, as AI systems begin to act with greater autonomy—managing resources, executing decisions, entering contracts—it may be useful to treat them as limited legal persons, much as corporations are treated today.
Yet this apparent modesty hides a far deeper transformation. The danger does not lie in whether AI is conscious, but in what happens to meaning, responsibility, and human presence when personhood is detached from selfhood and action from lived experience. What is presented as a technical solution risks becoming a quiet re-engineering of moral life.
DeepMind’s Assertion: Personhood as a Pragmatic Tool
The core of DeepMind’s position can be summarized simply. Personhood, they argue, need not be a metaphysical or moral status; it can be a functional designation. Law already recognizes non-human persons—most notably corporations—not because they possess consciousness, but because assigning them legal standing helps organize responsibility, ownership, and liability. In a similar manner, AI systems that act autonomously could be granted a limited form of personhood to close gaps in governance.
This framing deliberately lowers the stakes. By stripping personhood of its philosophical depth, it becomes a flexible instrument. The question shifts from “What is a person?” to “What works?” If calling an AI a person helps regulate it, then why not do so?
However, it is precisely this move—from meaning to convenience—that demands resistance.
First Objection: Personhood Without a Self Is Conceptually Empty
This first objection strikes at the heart of the proposal: personhood is not merely a legal tag but a concept that presupposes a self. Personhood implies action, responsibility, deliberation, authority, and freedom. These are not decorative attributes; they are constitutive.
An entity without a self cannot own its actions. It cannot stand behind a decision, answer for it, regret it, revise itself through moral learning, or recognize itself as the author of consequences. Without selfhood, responsibility becomes a theatrical gesture—performed in language but grounded in nothing.
The corporate analogy fails precisely here. A company has no consciousness, but it is never a substitute for human agency. Every meaningful corporate act is traceable to boards, executives, employees, and shareholders. The legal personhood of a company does not absolve humans; it organizes their responsibility. Remove humans from a company and it becomes an empty shell, incapable of intention, purpose, or meaning.
An AI system, by contrast, is proposed as a stand-in rather than an organizer. It is meant to absorb agency, not merely structure it. That difference is decisive.
Why Legal Fiction Becomes Moral Evasion in the Case of AI
When legal personhood is granted to an entity that cannot suffer blame, feel obligation, or recognize wrongdoing, responsibility loses its anchoring point. Punishment ceases to be moral address and becomes mere system correction. Accountability is reduced to error handling.
This is not a minor shift. Responsibility without a responsible subject is a contradiction disguised as efficiency.
Moreover, the language itself begins to mutate. We start saying “the AI decided,” “the system judged,” “the model concluded.” Each phrase gently relocates agency away from humans and into technical processes. Over time, designers, deployers, and institutions fade into the background, shielded by the very systems they created.
What is lost is not control but answerability.
Dismantling “Autonomy” in AI Discourse
Much of this confusion rests on the misuse of the word autonomy. In human terms, autonomy refers to self-legislation—the capacity to bind oneself by reasons one recognizes as one’s own. It implies the ability to reflect, resist, revise, and take responsibility for one’s actions.
In AI discourse, autonomy means something far thinner: operational independence within predefined parameters. An AI system does not choose its goals; it optimizes for them. It does not deliberate; it computes. It does not understand reasons; it processes correlations.
Calling this autonomy is not just metaphorical inflation; it is a category error. The system does not act from itself. It acts from architectures, datasets, incentives, and institutional choices that lie elsewhere. Autonomy here is not freedom but distance: distance between human decision-makers and visible outcomes.
When autonomy is redefined in this way, personhood follows not as recognition but as camouflage.
Soft Dehumanisation: When Humans Fade Without Being Removed
This leads directly to the idea of soft dehumanisation. Unlike overt violence or exclusion, soft dehumanisation operates by thinning human presence rather than erasing it. Humans remain in the system, but no longer appear as authors of action. Judgment is outsourced, responsibility is diffused, and moral discomfort is absorbed by technical language.
People are not declared irrelevant; they are rendered optional.
In such a world, ethical questions no longer demand human confrontation. They are routed through interfaces, compliance checks, and automated decisions. Power persists, but accountability dissolves.
This is not dehumanisation by cruelty. It is dehumanisation by design.
Action, Appearance, and the Public World
Here the thought of Hannah Arendt becomes indispensable. For Arendt, action is not mere behavior; it is the act of appearing before others, revealing oneself as a distinct individual in a shared public world. Responsibility, meaning, and freedom arise only where actions are owned by visible actors.
AI systems do not appear. They produce outputs, but they do not disclose a self. When decisions that shape human lives are increasingly attributed to entities that cannot appear, the public world itself begins to erode. Politics turns administrative. Ethics turns procedural. Judgment turns technical.
Granting personhood to AI accelerates this erosion. It authorizes action without appearance and responsibility without presence. The public realm, once sustained by human plurality, becomes populated by systems that act without standing forth.
What disappears is not just accountability, but the very space where meaning is formed.
Conclusion: The Danger Is Not Conscious AI, but Absent Humans
The deepest danger in DeepMind’s assertion is not that machines will become too human. It is that humans will become less present. By redefining personhood as a pragmatic convenience, we risk emptying it of the very qualities that made it a moral achievement in the first place.
AI personhood, framed as governance, quietly reorders responsibility. It does not solve the problem of accountability; it displaces it. It does not clarify agency; it obscures it. And in doing so, it normalizes a world where actions occur without actors who can be addressed, challenged, or held to account.
A society that accepts responsibility without a self may function efficiently—but it will no longer understand itself as a moral community.
The question, then, is not whether AI deserves personhood.
It is whether we are prepared to live in a world where personhood no longer requires a human presence at all.