Generative AI is a weapon of mass disinformation

In 2024, Google's AI will be summarising the World Wide Web for you – whether you like it or not. Google has been our shared index of the web for the past 25 years. How will this shake-up change the way we use it? And what impact will it have on the balance of power between Silicon Valley and civil society?

Published on 12 June 2024 at 12:01

“Google will do the Googling for you.” Don't be fooled by the jocular tone: this is no invitation. It's a unilateral revision of our social contract with the American giant.

On 14 May, during the annual Google I/O keynote, the word "AI" was uttered 121 times in two hours. Apparently, Google has decided that the search paradigm has changed.

Its first move: in Google.com's default view, it is now Gemini, the company's large language model (LLM), that organises the results and generates titles and text extracts on the fly. In other words, a Google search has become (even more) like a social network feed, where the hierarchy of information is decided by algorithm.

You may not know it – Google has been careful not to display a big warning message – but this change has already taken place. Welcome to the first day of the rest of our digital lives. The Google generative-AI portal has gained a "search engine" option.


Within a few weeks or months, your Google search page will be filled with "AI Overviews", little summaries of information generated by Gemini. Google says that the software itself will decide when to intervene in your search. But how? Why? Using what criteria? Apparently, AI is the domain of magic.

In the wonderful world of the demo, we see users communicating in natural language with the search engine, asking it things like, "explain to me how temperature affects cooking", or "suggest a week's diet for a broke student". These exchanges are natural and frictionless; the answers are spot-on. On this first day of the rest of your digital life, you will spend more time chatting to your search bot than to other human beings.

But there is a problem: that future does not exist. It never has and never will exist.

Google's behaviour is typical of Silicon Valley's current modus operandi. It announces with great fanfare a hypothetical future, an elaborate concept that is guaranteed to transfix the press and public, so as to better conceal a very real change to our information ecosystem – and to dampen down any discussion of that change in the public arena. Google has been our shared index of the web for 25 years. So how will this shake-up change our habits? And what consequences will it have for the balance of power between Silicon Valley and civil society?


Let us start by peering through the fog of bullshit. For years now, Google (like the rest of Silicon Valley) has been selling us a fictional and deceptive version of AI. In 2018, the firm unveiled a phenomenal voice assistant called Duplex. It was capable of making phone calls for you. But the demo was probably fake, and the back-end turned out to be... a call-centre.

OpenAI, Microsoft and others are doing the same thing. AI is not so much an arms race as a conjuring contest: scepticism is the only healthy response. There is no reason to believe that Google is showing us the state of the art of its products, and every reason to think we are witnessing a form of anticipatory advertising. Why?

Because the demo promises the two things that generative AI software is incapable of delivering: reliability and completeness.

The error rate of today’s best software, GPT-4, is between 2.5% and 25%. The errors in question are typically plausible, and asserted with authority. They are therefore even more dangerous than conventional "fake news". The industry calls them hallucinations. This term, which is at once magical, cute and disarming, serves to conceal the real-life implications from regulators: generative AI is a weapon of mass disinformation. It is the worst possible tool to deploy on an online information portal. If an error rate of 2.5% seems low to you, remember that the Google search engine answers 8.5 billion queries – per day. At that volume, even a 2.5% failure rate adds up to more than 200 million pieces of fake news every single day.

Hallucinations are inevitable. They are a structural property of generative AI that cannot be fixed. The industry is well aware of this.

Basing search on LLMs is a disaster waiting to happen, concluded The Atlantic. In February 2023, Google was already trying to show us that its chatbot Bard might replace its search engine. The demo itself went wrong, and $100 billion in market capitalisation evaporated.

In late 2023 we learned that the Bing chatbot, which Microsoft (OpenAI's main investor) has integrated into its search engine, hallucinates election results one time out of three. In such cases it often cannot even get the year right.

After 11 months of testing, the AI-augmented Google Search Generative Experience has proved to be less reliable than Google's classic search engine. This AI of the future can offer you a recipe for cooking eggs in 15 steps, 60 minutes and five trips to the saucepan.

The AI Overviews service has only been online for a few days, but the reviews are already piling up on Google's help forums. Among other flubs, the service was advising users to drink their urine in order to pass kidney stones. One user claimed that every single answer it generated was incorrect in some way.

Faced with this heap of anecdotal evidence, Google thinks it can get away with a little sticker warning that "generative AI is experimental". In other words, it is telling us to check the information Google provides. Should we use... Google? Welcome to the future of online facts, where everything is presumed false until proven otherwise.

The reality is that we have had generative AI forced on us for two years now, and it has brought not a single benefit for society. With $330 billion spent on it in three years, this software is still a computer cancer, a sea of sludge that engulfs and degrades everything it touches. Wherever it is deployed – in scientific research, in the justice system or in the press – the quality of information is collapsing, taking consensus-based reality with it.


As if that were not enough, each AI-based search costs the company around 10 times more than a traditional search. So much so that Google is now considering charging us to use its automated disinformation service.

It almost feels like we are being taken for a ride. Because if my understanding is correct, then Google is cheerfully sabotaging its flagship product of 25 years' standing – the jewel in its crown, its most solid monopoly. A free product, used daily by two billion people and which earned it $175 billion in 2023. A product that has become a verb; a product on which Google has based its identity, influence and power; a product that has become synonymous with the 21st-century internet. Google is sabotaging this product to offer us a shoddy tool that we have to pay for but that nonetheless runs at a loss, a tool that will explode Google's energy footprint (Microsoft's increased by 30% in 2023); that will irreparably sully everything it touches; and that will be quickly abandoned by people in the real world for lack of any practical use.

Alphabet promises that by the end of 2024, AI-powered Google will be available to one billion people. Not happy about that? Change your browser – while there's still time. Google has decided to embrace capitalist eschatology. Behind this generative-AI offensive is a Schumpeterian destruction of digital reality. Google is going to break Google, deliberately.

The Google project has taken on an apocalyptic quality. As shown by Olivier Ertzscheid, who has been analysing the beast almost from its birth, Google has spent the last 25 years trying to control the semantic web that it helped to build. First it transformed words into commodities, whose value fluctuates according to demand in the great online advertising market. Then it imposed this linguistic capitalism on the web at large via SEO: search engine optimization.

The right word combinations brought you traffic, advertising, and money. Even before generative AI, Google had become a search engine that turned up sites optimised for it specifically. We were no longer the focus; the focus was the algorithm.

Since then, Google has tried ever harder to become indispensable to the web – to become its OS. In particular, the company is obsessed with the "zero-click" outcome, where you get answers without having to leave Google.

By shifting from the semantic web to what has been called the "synthetic web", and after raiding every public document the web has produced, Google now feels ready to dispense with the websites it has been indexing for 25 years. It intends to replace them with its own regurgitation machines, and to replace their users with machines for navigating the resulting rubbish dump. Bots will ask the questions; bots will answer them.

But who will produce the information that the machines ingest? And what will happen when Google's walled garden is hermetically sealed over our heads? Google is not asking itself that question. In any case, the company no longer needs us. Its contract with the human beings of the internet – traffic and links in exchange for advertising – is null and void. The next few months look set to bring yet more economic carnage for publishers.

Google holds 90.1% of the online search market. The company is a monopoly. It is not afraid of regulation, competition or commercial failure. It no longer even needs its service to be reliable. Google can mutilate itself without batting an eyelid.

The semantic web and the synthetic web are dissolving into Google's monopolistic web. Whether you like it or not, Google is going to do your googling. It is going to summarise the world for you as it sees fit. That's how a monopoly works.

It's time to stop using Google.

👉 Original article on Arrêt sur Images
