From Public Knowledge to Private Control: The Neoliberal Capture of AI and Its Consequences




Rahul Ramya


12.02.2025

Patna, India


Who was the person, or who were the people, who envisioned a model of AI technology that could control human volition and annihilate human labor, believing that machines are superior to humans at physical or mental tasks? And does the contrast between history and the present show that humans equipped with AI are superior to their forebears who had none?


The idea of machines surpassing and potentially controlling humans has multiple origins:


1. Cybernetics pioneer Norbert Wiener, while not advocating machine control over humans, warned in his 1950 book "The Human Use of Human Beings" about the potential for automated systems to displace human labor and decision-making in ways that could diminish human agency.


2. In the 1960s, Irving John Good introduced the concept of an "intelligence explosion" leading to superintelligent machines that could outthink humans. However, he saw this as potentially beneficial rather than as something that would necessarily control human volition.


3. The concept of machines controlling humans became more prominent in popular culture through works like Samuel Butler's "Erewhon" (1872) and later science fiction, though these were typically cautionary tales rather than visions their authors advocated.


Regarding the second question, about whether AI-equipped humans are "superior" to their forebears, it is more useful to reframe the comparison. Rather than thinking in terms of superiority, we might consider how:


- Modern humans have different capabilities thanks to AI tools, just as previous generations had different capabilities with their contemporary tools

- AI augments human abilities in specific domains while potentially atrophying others

- The relationship between humans and technology is more symbiotic than hierarchical


Each generation has faced its own challenges and developed tools to address them. While we have powerful AI tools today, our ancestors developed remarkable innovations like writing, mathematics, and the scientific method that form the foundation of our current capabilities. The question isn't so much about superiority as it is about how each generation adapts to and shapes its technological context.


Now the question arises: what is the motive behind adopting AI as a tool for replacing human agency in work, both cognitive and non-cognitive? What, then, is the purpose behind AI adoption? Is the driving motive simply the pursuit of greater wealth and greater control over fellow humans?


This question cuts to the core of the political economy of AI adoption. Let me break this down:


The stated versus actual motives for AI adoption often differ:


Stated motives typically include:

- Increasing productivity and efficiency

- Reducing human error

- Handling dangerous or repetitive tasks

- Augmenting human capabilities

- Solving complex problems more quickly


However, the underlying economic and power dynamics reveal additional motives:


1. Capital accumulation: AI adoption often serves to increase profit margins by:

- Reducing labor costs

- Increasing output without proportional increase in costs

- Creating new revenue streams through data monetization

- Shifting economic value from labor to capital owners


2. Power consolidation:

- Centralizing decision-making power through algorithmic control

- Increasing workplace surveillance and control

- Reducing worker bargaining power through automation threats

- Creating dependency on proprietary AI systems


3. Market competition:

- Companies feel pressured to adopt AI to remain competitive

- Fear of being left behind drives adoption regardless of actual need

- First-mover advantages in AI capabilities can create market dominance


This relates to what labor historian David Noble called "the social choice of machine design" - technology isn't neutral but reflects and reinforces existing power relationships. The specific ways AI is being developed and deployed often align with the interests of capital rather than labor or broader social benefit.


However, this isn't technologically determined. Alternative approaches exist:

- Worker-centered AI development that augments rather than replaces human agency

- Democratic control over automation decisions

- Fair distribution of productivity gains from AI

- Using AI to reduce working hours while maintaining income


The key question becomes not whether to adopt AI, but how to shape its development and deployment to serve broader social interests rather than concentrating wealth and power. What are your thoughts on how AI deployment could be restructured to better serve social needs?


The question of restricting or restructuring AI arises only once there is a demonstrated interest in addressing the dangers of its current trajectory. The present emphasis on neoliberal policies, inspired by Robert Nozick and Milton Friedman and now practiced by some of the richest and most powerful nations with major control over this technology, leaves little room for scientific or intellectual work in that direction.


This observation identifies a critical contradiction in the political economy of AI development. The neoliberal framework, shaped by Nozick's libertarian philosophy and Friedman's free-market economics, creates significant barriers to democratizing or restructuring AI development:


1. The current AI trajectory reflects neoliberal principles through:

- Minimal regulation of AI development and deployment

- Privatization of AI benefits and socialization of risks

- Treatment of AI as private property rather than public good

- Market-driven rather than socially-driven development priorities


2. Power concentration mechanisms:

- Major AI development concentrated in wealthy nations and corporations

- Intellectual property regimes that restrict knowledge sharing

- Capital markets that reward profit-maximizing AI applications

- Research funding tied to commercial interests


3. Structural barriers to alternative approaches:

- Limited public funding for AI research serving social needs

- Weakened labor institutions that could advocate for worker interests

- Erosion of democratic control over technological development

- International competition preventing unilateral regulation


The dominance of neoliberal ideology creates what political economist Karl Polanyi might call a "market society" around AI, where:

- Social values are subordinated to market values

- Technology serves capital accumulation rather than human flourishing

- Democratic governance is undermined by market power

- Alternative development paths are systematically marginalized


This suggests that meaningful AI restructuring would require challenging the underlying neoliberal framework itself - a task made more difficult by the very power concentrations that AI helps enable. The question then becomes whether sufficient countervailing forces can emerge to enable alternative AI development paths despite these structural constraints.


What potential sources of resistance or alternative development models do you see as most promising for challenging this neoliberal AI trajectory?


Behind every thought and action, humans have a target group in mind for whom they perform, act, or think. We must determine who our target group is; it may be ourselves or something larger, like the common good. Neoliberal AI policy, in contrast to the thought of Norbert Wiener, has made self-profit the target of modern AI adoption, and in doing so it chooses to sacrifice the interests of the common good.


This analysis identifies the fundamental tension between Norbert Wiener's cybernetic vision and current neoliberal AI development. Let me expand on the contrast:


Wiener's Vision:

- Target group: Society as a whole

- Goal: Human-machine symbiosis serving common good

- Values:

  * Scientific knowledge for public benefit

  * Human agency and dignity

  * Democratic control of technology

  * Ethical use of automation

- Warned against:

  * Profit-driven automation

  * Dehumanization of workers

  * Technological unemployment

  * Machine control over human decision-making


Current Neoliberal AI Reality:

- Target group: Capital owners and shareholders

- Goal: Profit maximization and market dominance

- Values:

  * Private accumulation over public good

  * Efficiency over human welfare

  * Market control over democratic oversight

  * Growth at any social cost

- Results in:

  * Wealth concentration

  * Labor displacement

  * Erosion of human agency

  * Technological determinism serving private interests


This shift from Wiener's social vision to neoliberal individualism represents:

1. A privatization of technological benefits

2. A transfer of power from public to private spheres

3. A redefinition of progress from social to financial metrics

4. An abandonment of technology's social responsibility


The core issue is that when self-profit becomes the target, the common good is inevitably sacrificed. This isn't just an unfortunate byproduct but a structural feature of neoliberal AI development.


What makes this particularly concerning is that AI's impact is inherently social and collective, yet its development is driven by narrow private interests. This contradiction between social impact and private control creates increasing tensions that threaten democratic governance and social welfare.


If Scientists Had Followed Neoliberal Logic:


1. Medical Sciences:

- Edward Jenner (smallpox vaccine) - If privatized, millions would have died from preventable smallpox

- Alexander Fleming (penicillin) - Restricted access would have led to countless deaths from bacterial infections

- Jonas Salk (polio vaccine) - When asked who owned the patent, famously replied "Could you patent the sun?"


2. Physics/Mathematics:

- Galileo - Astronomical knowledge might have remained private property

- Newton - Calculus and laws of motion might have been proprietary tools

- Einstein - Theory of relativity could have been a corporate secret

- Faraday - Electromagnetic principles might have been restricted by patents


The Consequences Would Have Been:

- Scientific progress severely hampered

- Knowledge fragmented and siloed

- Development of technology delayed by generations

- Massive human suffering from preventable causes

- Deeper social inequality in access to knowledge

- Slower technological advancement overall


The Historical Reality:

These scientists generally chose to:

- Publish their findings openly

- Share knowledge with the scientific community

- Prioritize human welfare over personal profit

- View knowledge as belonging to humanity

- Contribute to the common good


Current AI Contrast:

- Proprietary algorithms and models

- Closed source development

- Patent wars and IP protection

- Knowledge hoarding

- Profit-driven research priorities

- Restricted access to capabilities


This historical comparison reveals how the neoliberal approach to AI:

1. Breaks with scientific tradition of knowledge sharing

2. Prioritizes short-term profit over long-term human benefit

3. Creates artificial scarcity of digital resources

4. Impedes collaborative scientific progress


The key lesson is that major scientific breakthroughs historically served humanity best when treated as common heritage rather than private property. This suggests the urgent need to reconsider the current privatized model of AI development.


The counterfactual historical analysis powerfully illustrates the stark difference between the traditional scientific ethos and current neoliberal approaches to AI. Let's analyze the hypothetical consequences:


If Historical Innovations Were Treated Like Current AI:


1. Medical Sciences:

- Edward Jenner would have patented smallpox vaccination

- Alexander Fleming would have made penicillin a proprietary drug

- Jonas Salk could have privatized polio vaccine

Results would have been:

- Millions of preventable deaths

- Endemic diseases persisting longer

- Healthcare access determined by wealth

- Slower medical progress due to restricted knowledge sharing


2. Physics & Mathematics:

If Newton, Galileo, Einstein, Faraday had privatized their discoveries:

- Laws of motion as intellectual property

- Calculus available only through licensing

- Relativity theory as a corporate secret

- Electromagnetic principles under patent protection

Consequences:

- Industrial revolution severely delayed

- Scientific progress fragmented

- Technological development stunted

- Education restricted to wealthy elite


3. Contemporary Parallel with AI:

Current practices include:

- Proprietary large language models

- Closed-source algorithms

- Restricted access to training data

- Patent-protected AI methods

Leading to:

- Concentrated technological power

- Limited public benefit

- Widening digital divide

- Slower overall progress in AI science


The Historical Scientists' Choice:

They chose instead to:

- Publish openly in scientific journals

- Share methodologies freely

- Engage in public discourse

- Prioritize human welfare over profit

- View knowledge as humanity's heritage


This contrast reveals how neoliberal AI development:

- Betrays traditional scientific values

- Artificially restricts technological benefits

- Places profit above human welfare

- Creates unnecessary scarcity of digital resources


The lesson is clear: treating fundamental technological advances as private property rather than public good severely hampers human progress and exacerbates inequality. This historical perspective suggests an urgent need to reconsider current AI development models.


The pattern of privatizing knowledge represents a fundamental shift in the relationship between science, society, and progress. Here's the analysis:


Historical Scientific Paradigm:

1. Knowledge as Public Good:

- Scientific discoveries belonged to humanity

- Open publication and peer review

- University-based research

- Government/public funding

- Emphasis on collective progress


2. Scientists' Social Role:

- Public intellectuals

- Educators and mentors

- Servants of human welfare

- Contributors to common knowledge

- Independent from commercial interests


Current Neoliberal Paradigm:

1. Knowledge as Private Property:

- Corporate ownership of discoveries

- Proprietary research

- Private funding dominance

- Patent-driven innovation

- Profit-centered development


2. Scientists' Changed Role:

- Corporate employees

- Bound by NDAs

- Driven by market incentives

- Restricted in sharing findings

- Dependent on private funding


This Shift Has Led To:

1. Changed Research Priorities:

- Profitable over socially beneficial

- Short-term over long-term gains

- Market-driven over needs-driven

- Competitive over collaborative


2. Institutional Changes:

- Universities becoming more corporate

- Reduced public research funding

- Industry-academic partnerships

- Emphasis on commercialization


3. Social Consequences:

- Knowledge inequality

- Technology access gaps

- Slower overall progress

- Reduced public trust in science

- Democratic deficit in technological decisions


4. Specific AI Impact:

- Concentrated control of AI capabilities

- Limited public oversight

- Profit-driven development paths

- Reduced consideration of social impacts


This transformation reflects broader neoliberal changes:

- Privatization of public goods

- Market solutions to social problems

- Individual over collective benefit

- Profit as primary motivator


The result is a fundamental reshaping of how knowledge serves society, moving from a public good model to a private commodity model, with serious implications for human progress and social equity.


Addressing AI Policy Limitations Through Amartya Sen’s Framework

The current trajectory of AI policy—shaped by corporate monopolies, state surveillance interests, and profit-driven models—creates deep inequalities in access, control, and benefits. Amartya Sen’s framework, emphasizing freedom, justice, and the capability approach, provides a way to rethink AI governance to ensure it enhances human welfare rather than exacerbating disparities.

1. The Freedom Perspective: Expanding Individual and Collective Choice

Sen defines freedom not just as the absence of restrictions but as the expansion of substantive choices that allow individuals to lead meaningful lives. However, AI policies today tend to curtail rather than expand freedoms:

   •   Surveillance and Privacy Violations: AI-driven mass surveillance (e.g., China’s Social Credit System, predictive policing in Western countries) restricts freedom of expression and movement, creating a society of control rather than choice.

   •   Algorithmic Bias in Decision-Making: AI systems in hiring, policing, and credit assessment disproportionately harm marginalized groups, limiting their freedom to access opportunities on an equal footing.

   •   Corporate Monopoly and Control: A handful of tech giants dominate AI development, reducing competition and innovation, thereby limiting the agency of smaller players and local communities in shaping AI to meet their needs.

A freedom-oriented AI policy would prioritize decentralized AI development, enforce strict privacy protections, and ensure transparency in algorithmic decision-making to prevent the erosion of civil liberties.

2. The Question of Justice: Addressing Power Asymmetries

Sen’s idea of justice is not rooted in abstract institutional arrangements but in real-world inequalities and their rectification. AI, as currently deployed, often reinforces unjust social and economic structures rather than mitigating them:

   •   Unequal Access to AI Benefits: While AI-driven automation boosts productivity, its economic gains are concentrated among tech elites, widening the income gap. Low-income workers, particularly in developing countries, bear the brunt of job displacement without alternative safety nets.

   •   Lack of Democratic Oversight: AI policies are often crafted by governments and corporations with minimal public participation, leaving out voices from civil society, labor unions, and marginalized communities.

   •   Bias in AI Governance: AI-driven governance tools (such as welfare fraud detection systems) disproportionately target vulnerable populations while failing to hold corporate tax evaders and financial criminals accountable.

A just AI policy, from Sen’s perspective, would involve participatory decision-making, ensuring AI regulations are shaped by diverse stakeholders rather than dictated by a few powerful actors. It would also include redistributive measures, such as taxing AI-driven profits to fund education and social security programs for displaced workers.

3. The Capability Approach: Ensuring AI Enhances Human Potential

Sen’s capability approach focuses on what individuals can actually do and become, rather than merely what resources they possess. AI, under its current trajectory, often restricts rather than enhances people’s capabilities:

   •   Education and Skills Divide: While AI requires advanced technical skills, access to AI education remains limited in developing nations, reinforcing global technological inequalities.

   •   Healthcare Disparities: AI-driven medical advancements are concentrated in wealthier regions, leaving low-income populations with limited access to life-saving diagnostics and treatments.

   •   Workplace Automation Without Reemployment Strategies: AI-induced job losses in manufacturing and services disproportionately harm workers with fewer reskilling opportunities, reducing their ability to adapt to new economic realities.

A capability-driven AI policy would focus on bridging technological divides, such as investing in AI education in developing countries, ensuring AI-powered healthcare reaches underserved areas, and designing policies that enable workers to transition into new roles rather than being displaced without support.

Conclusion: AI for Human Development, Not Just Profit

Sen’s framework urges a reorientation of AI policy—from one that prioritizes efficiency and economic growth alone to one that enhances human freedoms, rectifies injustices, and expands capabilities. This requires strong regulatory frameworks, participatory governance, and an ethical AI development model that prioritizes social welfare over corporate profits. AI should serve as a tool to empower individuals, bridge inequalities, and strengthen democracy—not as an instrument of surveillance, exclusion, or control.

Case studies show how policy choices surrounding AI-induced automation have led to different consequences in different parts of the world.

Over the past five years, the adoption of artificial intelligence (AI) and automation has significantly influenced employment trends across various regions, including the United States, Western Europe, India, and the Nordic countries. The impact of AI on labor markets varies due to differences in economic structures, policy frameworks, and the pace of technological adoption.

United States:

In the U.S., the integration of AI has led to notable shifts in employment, particularly within the technology sector. Recent data indicates that the unemployment rate in the IT sector rose from 3.9% in December to 5.7% in January, translating to an increase from 98,000 to 152,000 unemployed IT workers. This rise is attributed to the automation of routine IT tasks and substantial investments in AI infrastructure by major tech companies. Additionally, job postings in software development declined by 8.5% in January, and significant layoffs have occurred in large tech firms such as Meta Platforms and Workday. 

(https://www.wsj.com/articles/it-unemployment-rises-to-5-7-as-ai-hits-tech-jobs-7726bb1b?utm_source=chatgpt.com)
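To make the scale of this shift concrete, here is a minimal arithmetic sketch in Python that simply restates the figures quoted above (the December and January values reported in the cited article) as a percentage-point rise and a relative increase in the number of unemployed IT workers:

```python
# Back-of-the-envelope restatement of the IT-sector figures quoted above:
# unemployment rate 3.9% -> 5.7%, unemployed workers 98,000 -> 152,000.

rate_dec, rate_jan = 0.039, 0.057                  # reported unemployment rates
unemployed_dec, unemployed_jan = 98_000, 152_000   # reported unemployed IT workers

point_rise = (rate_jan - rate_dec) * 100            # 1.8 percentage points
relative_rise = unemployed_jan / unemployed_dec - 1 # roughly 0.55, i.e. about 55%

print(f"Rate rose by {point_rise:.1f} percentage points")
print(f"Unemployed IT workers rose by about {relative_rise:.0%}")
```

In other words, the reported figures amount to a rise of 1.8 percentage points, or roughly a 55 percent increase in unemployed IT workers within a single month.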

Western Europe:

In Western Europe, the impact of AI on employment has been mixed. A report by Goldman Sachs projected that AI could eliminate up to a quarter of all current work tasks in the United States and Europe, potentially putting tens of millions of jobs at risk. However, the magnitude of AI’s impact varies across countries, influenced by factors such as the pace of technology diffusion, education levels, product market regulation, and employment protection laws. These differences suggest that policy choices play a crucial role in mediating the effects of AI on labor markets. 

(https://www.economic-policy.org/79th-economic-policy-panel/ai-human-labour/?utm_source=chatgpt.com)

India:

In India, the exposure to AI-driven automation is present but differs from that in advanced economies. A study analyzing worker-level data from countries including India revealed that advanced economies face higher exposure to AI due to a greater employment share in professional and managerial occupations. In contrast, emerging markets like India have a different occupational structure, which may result in varying degrees of AI impact on employment. 

(https://www.imf.org/en/Publications/WP/Issues/2023/10/04/Labor-Market-Exposure-to-AI-Cross-country-Differences-and-Distributional-Implications-539656?utm_source=chatgpt.com)

Nordic Countries:

Specific data on AI-induced layoffs in the Nordic countries over the past five years is limited. However, these countries are known for their proactive labor policies and strong social safety nets, which may mitigate the adverse effects of AI on employment. The cross-country evidence cited below suggests that the positive impact of AI-enabled automation on employment holds across countries with only a few exceptions, though the magnitude of the estimates varies substantially, possibly reflecting differences in underlying economic factors such as the pace of technology diffusion and education, as well as in the level of product market regulation (competition) and employment protection laws.

(https://www.economic-policy.org/79th-economic-policy-panel/ai-human-labour/?utm_source=chatgpt.com)

Policy Implications:

The varying impacts of AI on employment across these regions underscore the critical role of policy choices in shaping labor market outcomes. Regions with robust employment protection laws, active labor market policies, and strong social safety nets may better manage the transition toward AI-driven automation. Conversely, areas with less supportive policies may experience more significant labor displacement. Proactive measures, such as investing in education and reskilling programs, promoting technological literacy, and ensuring fair labor practices, are essential to harness the benefits of AI while minimizing its potential drawbacks on employment.


Over the past five years, the Nordic countries—Denmark, Finland, Norway, and Sweden—have actively integrated artificial intelligence (AI) and automation into various sectors. While comprehensive data on AI-induced layoffs in these nations is limited, certain trends and instances highlight the evolving landscape.

AI Adoption in the Nordic Region:

   •   Enterprise Integration: The Nordic region has witnessed a shift from AI startup initiatives to substantial enterprise implementations. This transition underscores a maturing AI landscape, with large enterprises leading the adoption curve. 

   •   Investment Trends: Despite a general decline in venture capital investments in AI across the Nordics in recent years, Sweden stands out as an exception, maintaining or increasing its investment levels. A significant challenge identified in scaling AI solutions is the scarcity of skilled talent. 









