
Evgeny Morozov: AI, neoliberalism’s hip friend

Artificial-intelligence enthusiasts promise that the technology will solve all our problems. Their enthusiasm comes from the same neoliberal prejudices that are at the root of many of those problems, argues Evgeny Morozov. In his view, we must not give up on the role of institutions that foster human intelligence.

Published on 18 October 2023

Alongside the war in Ukraine, the new wave of the migration crisis, and the ongoing disaster of climate change, artificial intelligence (AI) is one of the subjects that have dominated the world this year. Depending on who you listen to, it will either solve all of humanity’s problems – or it will destroy civilization as we know it. Here is our first paradox: sometimes, it’s the very same people who say both things. And, surprisingly, they mostly come from Silicon Valley… 

Let’s go through some of the main AI-related developments this year, just to get a glimpse of what’s going on. I’ve described many of them in a long essay that appeared in the New York Times earlier this year. 

Evgeny Morozov at the Internazionale festival in Ferrara, 2023. | Photo: Gian-Paolo Accardo

Let me give away the basic premise here: our current infatuation with the promise of AI is, in many ways, just an extension of our infatuation with the market and neoliberalism. There’s no way to understand why so many public institutions are falling for the sweet promises of AI pushers other than to situate this marketing push in the broader history of privatizing solutions to what are otherwise public and collective problems. So to solve our problems via AI today is tantamount to solving them through the market. Personally, I find this problematic – and I hope that you do too. But this connection – between today’s AI and neoliberalism – is not well understood. So let me explain it a bit better.

In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories warned.

This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems.


Neoliberalism is far from dead. Worse, it has found an ally in AGI-ism, which stands to reinforce and replicate its main biases


Meanwhile, the Biden administration has urged responsible A.I. innovation, stating that “in order to seize the opportunities” it offers, we “must first manage its risks.” In Congress, senators called for “first of their kind” listening sessions on the potential and risks of A.I., a crash course of sorts from industry executives, academics, civil rights activists and other stakeholders.

The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or AGI, that worries the experts.

AGI doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting – some say impossible – task. But the benefits appear truly tantalizing.

Take Roombas, the smart vacuum cleaners. No longer condemned to vacuuming the floors, they might evolve into all-purpose robots, happy to brew morning coffee or fold laundry. The charm here is that, with AGI, they’ll be able to do it without ever being programmed to do these things.

Sounds appealing. But should these AGI Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. After all, it’s us, the humans, who are the cause of all that dust – and, once again, since these Roombas are never explicitly programmed, they might reason that eliminating humans is one way to keep the household clean. So, think twice before getting yourself an AGI-powered Roomba.

Discussions of AGI are rife with such apocalyptic scenarios. Yet a nascent AGI lobby of academics, investors and entrepreneurs counter that, once made safe, AGI would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that AGI might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”

This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral. So the risks of AGI destroying the planet pale in comparison with its promises: to cure cancer, to eliminate illiteracy, or to give all of us a personal secretary.

All these tech visionaries are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence. Let me repeat that: for them, the path to boosting the intelligence of our civilization lies in fine-tuning data models and feeding them more and better data. It’s mostly a technical program.

But the broader ideology driving this effort — call it AGI-ism — is mistaken. The real risks of AGI are political and won’t be fixed by taming rebellious robots. The safest of AGIs would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, AGI-ism distracts from finding better ways to augment intelligence. I’ll talk about some of them towards the end of my talk. 

Unbeknown to its proponents, AGI-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative – certainly not to the market. For the technology companies that are building AGI are not humanitarians in the vein of the United Nations or Mother Teresa – they are profit-oriented capitalists keen on promoting the institution of the market as the main way of organizing our society.

This is why, rather than breaking capitalism, as Mr. Altman has hinted it could do, AGI — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.

Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamise and transform a stagnant and labor-friendly economy through markets and deregulation.


Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.

It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future. Unfortunately, in Europe such pronouncements are very rare, even if Emmanuel Macron has recently made headlines with his talk about ecological planning as a way of resolving the climate crisis.

Yet neoliberalism is far from dead. Worse, it has found an ally in AGI-ism, which stands to reinforce and replicate its main biases. Let me focus on three in this talk. First is the idea that private actors tend to outperform public ones (what I call the market bias). Second is the classical neoliberal idea that adapting to reality is more important than transforming it (we can call it the adaptation bias). Third is the idea that the need to maximise efficiency always trumps social concerns, particularly those related to social justice (we can call it the efficiency bias).

These biases turn the alluring promise behind AGI on its head: Instead of saving the world, the quest to build it will make things only worse. 

The bias of the market

Let’s tackle the very first bias – that of the market, namely the idea that we should turn to private providers of services and place them above public ones. Remember when Uber, with its cheap rates, was courting cities to serve as their public transportation systems?

It all began nicely, with Uber promising implausibly cheap rides, courtesy of a future with self-driving cars and minimal labor costs. Deep-pocketed investors loved this vision, even absorbing Uber’s multibillion-dollar losses.

But when reality descended, the self-driving cars were still a pipe dream. The investors demanded returns and Uber was forced to raise prices. Users who relied on it to replace public buses and trains were left on the sidewalk.

The neoliberal instinct behind Uber’s business model is that the private sector can do better than the public sector. This is the core of the market bias.

It’s not just cities and public transit. Look at the United States: hospitals, police departments and even the Pentagon increasingly rely on Silicon Valley to accomplish their missions.

With AGI, this reliance will only deepen, not least because AGI is unbounded in its scope and ambition. It promises to fulfill any task without ever being taught to do it. No administrative or government services would be immune to its promise of disruption.

Moreover, AGI doesn’t even have to exist to lure them in. This, at any rate, is the lesson of Theranos, a start-up – and former darling of America’s elites – that promised to “solve” health care through a revolutionary blood-testing technology. Its victims are real, even if its technology never was.

After so many Uber- and Theranos-like traumas, we already know what to expect of an AGI rollout. It will consist of two phases. First, the charm offensive phase, whereby users are offered heavily subsidized services all while being told that their low cost is the result of genius innovators inventing new ways to do old things. Then comes the ugly retrenchment phase, with the overdependent users and agencies shouldering the costs of making these services profitable.

As always, Silicon Valley mavens play down the market’s role. Like true populists, they assure us that this new kind of artificial intelligence is all about putting people first. In a recent essay titled “Why A.I. Will Save the World,” Marc Andreessen, a prominent tech investor, even proclaims that A.I. “is owned by people and controlled by people, like any other technology.”

Only a venture capitalist can traffic in such exquisite euphemisms. Most modern technologies are owned by corporations. And they — not the mythical “people” — will be the ones that will monetise saving the world.

And are they really saving it? The record, so far, is poor. Companies like Airbnb and TaskRabbit were welcomed as saviors for the beleaguered middle class, promising to turn our apartments and free time into ATM machines. Tesla’s electric cars were seen as a remedy to a warming planet. Soylent, the meal-replacement shake, embarked on a mission to “solve” global hunger, while Facebook vowed to “solve” connectivity issues in the Global South. None of these companies saved the world.

A decade ago, I called this solutionism, but “digital neoliberalism” would be just as fitting a name for this pseudo-humanitarianism of start-ups and venture capitalists. This worldview reframes social problems in light of for-profit technological solutions. As a result, concerns that belong in the public domain are reimagined as entrepreneurial opportunities in the marketplace.


AGI-ism, like neoliberalism, sees public institutions as unimaginative and not particularly productive. They are a cost on society – not its enabler


AGI-ism has rekindled this solutionist fervor. Last year, Mr. Altman, the co-founder of OpenAI, stated that “AGI is probably necessary for humanity to survive” because “our problems seem too big” for us to “solve without better tools.” He’s recently asserted that AGI will be a catalyst for human flourishing.

But companies need profits, and such benevolence, especially from unprofitable firms burning investors’ billions, is uncommon. OpenAI, having accepted billions from Microsoft, has contemplated raising another $100 billion to build AGI. Those massive investments will need to be earned back — against the service’s staggering invisible costs. (One estimate from February put the expense of operating ChatGPT at $700,000 per day.)

Thus, the ugly retrenchment phase, with aggressive price hikes to make an AGI service profitable, might arrive before “abundance” and “flourishing.” That is, the ugly reality – that these services are unprofitable and survive only because their investors are subsidizing them – will kick in before they manage to solve all the problems of the world. But how many public institutions would mistake fickle markets for affordable technologies and become dependent on OpenAI’s expensive offerings by then?

And if you dislike your town outsourcing public transportation to a fragile start-up, would you want it farming out welfare services, waste management and public safety to the possibly even more volatile AGI firms?

The bias of adapting

Let’s now tackle the second bias – one insisting that adapting to reality is far superior to trying to transform it. As a result of this bias, neoliberalism has a knack for mobilizing technology to make society’s miseries bearable. I recall an innovative tech venture from 2017 that promised to improve commuters’ use of a Chicago subway line. It offered rewards to discourage metro riders from traveling at peak times. Its creators leveraged technology to influence the demand side (the riders), seeing structural changes to the supply side (like raising public transport funding) as too difficult. Tech would help Chicagoans adapt to the city’s deteriorating infrastructure rather than fix it to meet the public’s needs.

This is the adaptation bias — the aspiration that, with a technological wand, we can become desensitized to our plight. It’s the product of neoliberalism’s relentless cheerleading for self-reliance and resilience.

The message is clear: gear up, enhance your human capital and chart your course like a start-up. And AGI-ism echoes this tune. Bill Gates has trumpeted that A.I. can “help people everywhere improve their lives.” This comes only a decade after we were encouraged to learn how to code. This, too, was supposed to help people everywhere improve their lives. It’s all about offloading the costs of solving big social and political problems onto the shoulders of individuals.

The solutionist feast is only getting started: Whether it’s fighting the next pandemic, the loneliness epidemic or inflation, A.I. is already pitched as an all-purpose hammer for many real and imaginary nails. However, the decade lost to the solutionist folly reveals the limits of such technological fixes.

To be sure, Silicon Valley’s many apps — to monitor our spending, calories and workout regimes — are occasionally helpful. But they mostly ignore the underlying causes of poverty or obesity. And without tackling the causes, we remain stuck in the realm of adaptation, not transformation. So, we get the illusion of doing something about solving these problems, but, in fact, we are only tinkering at the margins; we are making our tragedy more livable – but not much beyond that. 

There’s a difference between nudging us to follow our walking routines — a solution that favors individual adaptation — and understanding why our towns have no public spaces to walk on — a prerequisite for a politics-friendly solution that favors collective and institutional transformation.

But AGI-ism, like neoliberalism, sees public institutions as unimaginative and not particularly productive. They are a cost on society – not its enabler. They should just adapt to AGI, at least according to Mr. Altman, who recently said he was nervous about – and I quote – “the speed with which our institutions can adapt”, which is part of the reason, he added – and I quote again – “why we want to start deploying these systems really early, while they’re really weak, so that people have as much time as possible to do this.”

But should institutions only adapt? Can’t they develop their own transformative agendas for improving humanity’s intelligence? Or do we use institutions only to mitigate the risks of Silicon Valley’s own technologies? Aren’t some of the most cherished innovations in human history institutions themselves – the university, the library, the museum, the welfare state?

The bias of efficiency

Finally, let’s tackle the third bias. A common criticism of neoliberalism is that it has flattened our political life, rearranging it around efficiency. “The Problem of Social Cost”, a 1960 article that has become a classic of the neoliberal canon, preaches that a polluting factory and its victims should not bother bringing their disputes to court. Such fights are inefficient — who needs justice, anyway? — and stand in the way of market activity. Instead, the parties should privately bargain over compensation and get on with their business. It’s all like this for neoliberals: justice and politics stand in the way of doing business and earning profits. 

This fixation on efficiency is how we arrived at “solving” climate change by letting the worst offenders continue as before. The way to avoid the shackles of regulation is to devise a scheme — in this case, carbon trading — that lets polluters buy credits to match the extra carbon they emit.

This culture of efficiency, in which markets measure the worth of things and substitute for justice, inevitably corrodes civic virtues.

And the problems this creates are visible everywhere. Academics fret that, under neoliberalism, research and teaching have become commodities. Doctors lament that hospitals prioritise more profitable services such as elective surgery over emergency care. Journalists hate that the worth of their articles is measured in eyeballs.

Now imagine unleashing AGI on these esteemed institutions — the university, the hospital, the newspaper — with the noble mission of “fixing” them. Their implicit civic missions would remain invisible to AGI, for those missions are rarely quantified even in their annual reports — the sort of materials that go into training the models behind AGI.

After all, who likes to boast that his class on Renaissance history got only a handful of students? Or that her article on corruption in some faraway land got only a dozen page views? Inefficient and unprofitable, such outliers miraculously survive even in the current system. The rest of the institution quietly subsidizes them, prioritizing values other than profit-driven “efficiency.”

Will this still be the case in the AGI utopia? Or will fixing our institutions through AGI be like handing them over to ruthless consultants? They, too, offer data-bolstered “solutions” for maximizing efficiency. But these solutions often fail to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.

In fact, the remarkable performance of ChatGPT-like services is, by design, a refusal to grasp reality at a deeper level, beyond the data’s surface. So whereas earlier A.I. systems relied on explicit rules and required someone like Newton to theorise gravity — to ask how and why apples fall — newer systems like AGI simply learn to predict gravity’s effects by observing millions of apples fall to the ground.

However, if all that AGI sees are cash-strapped institutions fighting for survival, it may never infer their true ethos. Good luck discerning the meaning of the Hippocratic oath by observing hospitals that have been turned into profit centers.

To conclude, let me offer some general reflections on what we can do differently. 

Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”

The AGI lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.

But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.

It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem — hence the excitement about AGI.

A Manhattan Project for culture and education

However, if AGI-ism really is neoliberalism by other means, then we should be ready to see fewer — not more — intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. AGI’s grand project of amplifying intelligence may end up shrinking it.

Because of such solutionist bias, even seemingly innovative policy ideas around AGI fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety”. It is premised on the false idea that there is no alternative to AGI.

But wouldn’t our quest for augmenting intelligence be far more effective if governments funded a Manhattan Project for culture and education and the institutions that nurture them instead? Why not conceive of intelligence as something that happens as citizens interact – with each other but also with other institutions? Wouldn’t this more social and distributed conception of intelligence lead us to a very different set of policies, including those that would allow us to see that Silicon Valley often stands in the way of augmenting true intelligence? 

Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for AGI start-ups, reinforcing the falsehood that society doesn’t exist.

Depending on how (and if) the robot rebellion unfolds, AGI may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, AGI-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

This text is a transcript of the speech given by Evgeny Morozov during Internazionale’s festival in Ferrara in 2023.
