From Freedom to Fitment

How Surveillance Science Colonises Human Thought in the Age of AI-First Governance

Rahul Ramya

28 December 2025


I. Capability as the Condition of Freedom

Capability is not productivity.
It is not efficiency.
It is not adaptability.

Capability is the condition of freedom.

It is the lived ability of a human being to understand their own situation, deliberate internally, and act without constantly aligning themselves to an external score, signal, or metric. It is the capacity to appear before institutions as a reasoning subject, not as a data profile.

A society can be technologically advanced and still destroy capability.
A system can be comfortable and still enslave.

Capability is destroyed not only through hunger, poverty, or visible coercion. It is destroyed when thinking itself is reorganised—when people learn to adjust before they think, comply before they judge, and optimise before they choose.

Surveillance science must be understood in this register. It is not merely technological. It is civilisational.


II. The First Suppressed Choice: Understanding Does Not Necessarily Become Prediction

A common misunderstanding dominates discussions on AI and behavioural science: that once human behaviour is understood, prediction naturally follows.

This is false.

Between understanding and prediction stands the observer.

Researchers associated with social physics—most visibly Alex Pentland—did not merely observe patterns. They made choices: whether understanding should be converted into prediction; whose behaviour was worth predicting; which futures were to be prioritised; whose interests prediction would serve.

Understanding can remain descriptive. Anthropologists, historians, and sociologists have long understood human behaviour without claiming the right to forecast or intervene.

Prediction, by contrast, is always directional. It points toward a future someone wants.

The moment the question shifts from “Why do people behave this way?” to “What will they do next?”, a political decision has already been taken.

This is the first fracture of capability.


III. The Second Suppressed Choice: Prediction Does Not Necessarily Demand Intervention

A second illusion follows immediately: that prediction inevitably requires intervention.

Again, this is false.

Between prediction and intervention stands power.

Researchers, corporations, bureaucracies, and governments decide whether behaviour should be modified; what counts as acceptable conduct; which deviations are risky; whether consent matters.

Technology does not intervene.
People intervene using technology.

Credit models may predict default, but banks decide whether prediction justifies denial. Attendance systems detect absence, but administrations decide whether absence becomes punishment. Behavioural analytics flag “low engagement,” but managers decide whether that becomes discipline.

The chain is not technical.
It is epistemic power disguised as automation.


IV. Power, Not Technology: How Old Hierarchies Hide Inside New Systems

The causal chain from understanding to prediction to intervention is not technological. It is political.

Decisions are made by political elites, bureaucratic authorities, corporate management, and knowledge-owning classes. Technology supplies legitimacy and cover.

What once operated openly through race, caste, class, or colonial rule now speaks the language of data, efficiency, objectivity, and optimisation.

In India, Aadhaar-linked welfare systems have repeatedly excluded beneficiaries due to biometric failures. The system does not decide exclusion; authorities choose to treat authentication failure as ineligibility rather than as a trigger for human verification. Technology becomes the alibi for hierarchy.

In the United States, predictive policing programs—earlier in Chicago, and more recently through platforms such as Palantir’s Gotham used by police departments in cities like New York—have formalised existing racial and neighbourhood biases while insulating decision-makers from scrutiny. The hierarchy persists; only the vocabulary changes.


V. From Thinking Person to Performing Person

At this stage, a profound anthropological shift occurs.

The thinking person becomes a performing person.

Technology is not merely used to observe behaviour. It is used to train people to act without agency, react without empathy, and move without freedom.

In Amazon warehouses—and increasingly in Walmart’s parallel logistics systems—algorithmic management tracks “time off task” down to seconds. Workers learn to move, pause, and even think in ways legible to machines. What is produced is not intelligence but compliance.

In welfare systems, beneficiaries learn to follow procedures blindly and accept exclusion as technical error. Life becomes a continuous audition before systems.

This is not empowerment.
It is behavioural conditioning.


VI. When Freedom Collapses into Survival

Here freedom quietly gives way to survival.

People stop asking, “Is this just?”
They begin asking, “Is this safe for me?”

In the gig economy, workers accept constant tracking to avoid deactivation. In bureaucracies, officials prioritise compliance over judgment. In education, students chase rankings rather than understanding.

Defenders argue that such systems improve efficiency, prevent fraud, and reduce bias. These claims deserve acknowledgment.

But fraud prevention becomes blanket suspicion. Efficiency is defined as throughput, not human well-being. Bias is not removed; it is encoded and rendered unchallengeable.

Humans are trained to serve systems they cannot influence, while the right to intervene in those systems is reserved for a microscopic elite.

Freedom is not abolished.
It is postponed.


VII. Pentland Through Zuboff’s Lens: Surveillance as Authority Over the Future

The significance of Pentland’s work cannot be understood without the framework developed by Shoshana Zuboff.

Zuboff showed that surveillance capitalism does not merely collect data; it claims authority over the future.

Pentland’s workplace analytics follow this logic: employees are observed continuously; patterns are extracted; futures are predicted; environments are redesigned to steer behaviour.

The celebrated Bank of America case—where synchronised coffee breaks raised productivity—demonstrated that management could redesign social life itself on the basis of data. Prediction became authority. Authority became design. Design replaced consent.

This aligns with Foucault’s account of biopolitics: power that governs not by forbidding, but by normalising, measuring, and optimising life.


VIII. From Capability Destruction to Cognitive Colonisation

As this process deepens, capability destruction becomes cognitive colonisation.

People begin to distrust their own judgment, rely on scores and prompts, and pre-emptively adjust behaviour. External logic replaces internal reasoning. Fitment replaces freedom. Adjustment replaces choice.

Freedom reduced to fitment is not freedom.

Fitment is slavery—structural, not metaphorical.

This is what Hannah Arendt described as thoughtlessness: the abdication of judgment under administrative systems that reward compliance over reflection.


IX. Knowledge Asymmetry: Why Enslavement Becomes Stable

This regime rests on a profound asymmetry of knowledge.

A small elite controls datasets, models, and interpretive authority. The majority receives outcomes, scores, nudges, and verdicts.

A loan applicant sees rejection, not the model. A welfare recipient sees exclusion, not the logic. A worker sees appraisal, not the metric.

In the Global South, this asymmetry is harsher. Appeals mechanisms are weak, transparency thin, and legal literacy uneven. When AI-driven decisions exclude someone from welfare or credit, the harm is existential. Survival itself becomes conditional on unreadable systems.

People are not denied information.
They are denied epistemic participation.

Meritocracy as Epistemological Sorting


Meritocracy plays a crucial—often misunderstood—role in stabilising this asymmetry of knowledge.

In contemporary neoliberal political economy, meritocracy is not designed to test merit under equal conditions. It is designed to preserve epistemological hierarchy while appearing fair.

Algorithms are central to this process.

They do not merely rank outcomes; they embed prior hierarchies—of class, caste, race, geography, language, and access—into scoring systems that appear neutral to public gaze. Once embedded, these hierarchies become invisible. Decisions look technical, not political. Inequality looks earned, not inherited.

Meritocracy thus becomes a system of sorting, not of justice.

Those already advantaged:

have better data footprints

possess institutional familiarity

understand system expectations

can afford optimisation

Those disadvantaged are told they failed on merit—without ever being given a common starting point.

The crucial test of meritocracy—same starting conditions for all—is deliberately excluded. If such a test were applied in real time, the fiction would collapse. Structural inequality would become undeniable.

Algorithms therefore do not correct hierarchy.

They freeze it, sanitise it, and scale it.

Meritocracy, in this form, does not reward excellence.

It rewards fitment to systems designed by elites.

This is why meritocracy functions as an epistemological shield:

it protects elites from moral scrutiny

it converts privilege into performance

it delegitimises dissent as incompetence

In this sense, meritocracy is not opposed to surveillance.

It is one of its most effective justifications.



X. AI-First Governance: When Policy Becomes Population Management

AI-first governance institutionalises this asymmetry.

Dashboards replace files. Metrics replace discretion. Officials obey systems rather than interpret law.

In welfare, hunger becomes a biometric mismatch. In politics, voters are segmented and nudged. Persuasion gives way to behavioural steering.

Policy ceases to be moral reasoning.
It becomes population management.

Delegation to systems shields authority from responsibility. When harm occurs, it is blamed on “the system,” never on the choice to deploy it.


XI. Fitment as Political Strategy

Politics benefits directly from fitment.

A fitted population reacts rather than reflects, complies rather than contests, adapts rather than resists. Thoughtlessness becomes productive.

This intensifies Immanuel Kant’s warning against treating humans as mere means, and hollows out Amartya Sen’s conception of capability. A population deprived of judgment becomes easier to manage; power becomes durable.

Caste, Race, and Inheritance: How Algorithmic Systems Reproduce Civilisational Hierarchies

The discussion so far reveals that surveillance, prediction, and fitment do not operate in a social vacuum. They attach themselves to pre-existing hierarchies—and in doing so, give them a new technological lease of life. Among these hierarchies, caste, race, and inheritance occupy a central place.

In societies structured by caste, such as India, inequality has never been merely economic. It is epistemic and moral. Caste historically determined who could learn, who could speak, who could be believed, and whose knowledge counted. Algorithmic systems do not dismantle this order; they translate it into data form. Educational scores, credit histories, employment gaps, language proficiency, and residential location all act as proxies for caste location, even when caste is never explicitly named. What appears as neutral “risk assessment” often reproduces inherited disadvantage with mathematical precision.

Race plays a parallel role in other contexts. In the United States and parts of Europe, predictive systems in policing, credit, housing, and employment absorb racialised histories into their training data. Neighbourhood becomes a proxy for race; policing history becomes a proxy for suspicion; income volatility becomes a proxy for moral failure. Algorithms do not invent racial hierarchy, but they launder it through computation, making it harder to contest. When discrimination arrives as a score rather than a slur, resistance is weakened.
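The proxy mechanism described above can be made concrete with a toy sketch. Everything in it is hypothetical: the groups, incomes, neighbourhoods, and approval threshold are invented for illustration. The point is only this: a score that never sees the group label can still reproduce the group hierarchy, because a correlated variable (here, neighbourhood) carries the label in disguise.

```python
import random

random.seed(0)

def make_person(group):
    # Hypothetical synthetic data: group B carries inherited economic
    # disadvantage and is concentrated in neighbourhood 1.
    income = random.gauss(70, 10) if group == "A" else random.gauss(40, 10)
    nbhd = 0 if group == "A" else 1
    if random.random() < 0.1:        # 10% live outside their group's typical area
        nbhd = 1 - nbhd
    return {"group": group, "income": income, "nbhd": nbhd}

people = [make_person(g) for g in ["A"] * 1000 + ["B"] * 1000]

def score(p):
    # A "neutral" credit score: it never sees the group label,
    # only income and neighbourhood.
    return p["income"] - 15 * p["nbhd"]

def approval_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(score(p) > 50 for p in members) / len(members)

print("approval rate, group A:", round(approval_rate("A"), 2))
print("approval rate, group B:", round(approval_rate("B"), 2))
```

Running this yields starkly unequal approval rates between the two groups even though the scoring function is formally blind to group membership. Removing the label while keeping the proxy is precisely what makes the discrimination look technical rather than political.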

Inheritance completes this triad. Algorithmic meritocracy assumes that individuals compete on a neutral field, yet inheritance silently structures the field itself. Access to quality schooling, stable housing, digital literacy, professional networks, and even “clean” data trails is inherited long before any algorithm begins its evaluation. Those born into advantage generate data that reads as reliability and competence; those born into precarity generate data that reads as risk. Inheritance thus becomes destiny—rendered legitimate by code.

This is why algorithmic systems systematically avoid the most dangerous question: who started where?

A real-time correction for caste, race, and inherited disadvantage would expose the moral bankruptcy of meritocratic sorting. It would reveal that outcomes reflect structure, not effort. Instead, these systems freeze history at the moment of evaluation and treat inequality as individual failure.

Thinkers such as B. R. Ambedkar warned that political equality without social and epistemic equality would only reproduce domination in new forms. Algorithmic governance realises this warning with unprecedented efficiency. It preserves formal equality—one algorithm for all—while deepening substantive inequality through inherited asymmetries.

Caste, race, and inheritance thus do not merely survive AI-first governance. They thrive within it. Surveillance makes them legible, prediction makes them actionable, and meritocracy makes them morally acceptable.

What emerges is not a post-prejudice society, but a post-accountability hierarchy—one in which ancient forms of domination persist, shielded by the language of technology and the authority of data.


XII. Conclusion: Elite Choice, Permanent Hierarchy, and the Architecture of Enslavement

Capability destruction is not an accident of AI-first governance.
It is not technological determinism.

Technology does not demand surveillance. Algorithms do not insist on domination.

What enables surveillance is elite choice.

Political, bureaucratic, corporate, and knowledge elites deploy technology to preserve hierarchy. Elites remain elites through monopoly over knowledge and interpretation. Common people may earn more or live comfortably, but they are denied epistemic power.

Surveillance performs two functions at once. It makes elites permanent elites. And it produces layered inequalities among common people—rewarding those who adapt and comply, punishing those who fall behind or resist.

Neoliberal political economy sharpens this sorting. Efficiency rises. Mobility freezes. Freedom shrinks.

A society organised around prediction rather than judgment trains people to mistake obedience for safety, fitment for freedom, and survival for dignity.

It does not need chains.
It does not need terror.

It only needs systems so pervasive that people forget how it feels to think without permission.

That is not progress.
That is enslavement with dashboards.


Endnotes

  1. Aadhaar biometric exclusions and welfare denial: UIDAI annual reports; Supreme Court of India proceedings on KYC and exclusion errors (2024–2025).

  2. Predictive policing bias: Electronic Frontier Foundation and EPIC analyses of Chicago Strategic Subject Lists; New York City use of Palantir Gotham and ongoing EU AI Act (2025) scrutiny of predictive policing.

  3. Algorithmic labour management: Investigative reporting on Amazon warehouses (US/EU) and Walmart logistics surveillance, 2024–2025.

  4. Surveillance capitalism framework: Shoshana Zuboff, The Age of Surveillance Capitalism.

  5. Biopolitics and governance: Michel Foucault, Society Must Be Defended.

  6. Thoughtlessness and administration: Hannah Arendt, Eichmann in Jerusalem.

  7. Capability and freedom: Amartya Sen, Development as Freedom.

