Analysis | AI and privacy

Can we still protect our data in the artificial intelligence era?

Europe wants to be a leader in tech revolutions like AI. Yet this ambition contrasts sharply with the EU’s desire to protect the right to privacy – because AI needs lots of data. A new European regulation promises to make the two objectives compatible, but does not resolve the conundrum.

Published on 17 February 2022 at 11:41

It's 2016. Donald Trump has won the United States presidency and the Brexit referendum has set the UK on course to leave the European Union. Both campaigns have employed Cambridge Analytica, a firm that harvested the data of millions of Facebook users to personalise electoral messaging and sway their voting intentions. Millions of people begin to ask themselves whether, in the digital era, they have lost something deeply valuable: their privacy.

Two years later, countless European email inboxes would fill up with messages from companies asking for permission to continue processing people's data, in a bid to comply with the new General Data Protection Regulation (GDPR). Despite its imperfections, this law has served as a point of reference for legislation in Brazil and Japan, and inaugurated the modern era of data protection.

Nevertheless, what was once seen as a triumph for privacy is now perceived as a roadblock in Europe’s quest to develop digital technologies, especially artificial intelligence (AI). Can European law protect citizens' privacy when faced with such an opaque technology?

Prioritise digital rights or innovation?

An AI system is an information technology (IT) tool that uses algorithms to generate correlations, predictions, decisions and recommendations. Its capacity to affect human decisions puts AI at the very heart of the data economy.
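
To make that definition concrete, below is a minimal, hypothetical sketch in Python (using the open-source scikit-learn library; the variables and figures are invented for illustration) of how such a system learns a correlation from past data and turns it into a prediction about a new case.

```python
# A hypothetical illustration of "generating predictions": a model learns
# a correlation from past data and applies it to a new case.
from sklearn.linear_model import LinearRegression

# Past observations: hours of ad exposure vs. money spent (invented data).
hours_of_ads = [[1], [2], [3], [4], [5]]
euros_spent = [10, 19, 31, 42, 48]

model = LinearRegression().fit(hours_of_ads, euros_spent)

# The learned correlation now drives a prediction about a new person.
print(model.predict([[6]]))  # roughly 60 euros
```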

AI’s more efficient decision-making also has geopolitical consequences. States are investing more and more in the technology, driven by the motto coined by Vladimir Putin in 2017: “Whoever dominates artificial intelligence dominates the world”. By 2019, the US was investing almost three times as much in AI as it had in 2015, and Japan over 12 times as much.

This sense of urgency has spilled over into other areas, including digital rights in Europe. European lawmakers have been legislating for privacy, fighting big tech monopolies and creating standards for secure storage of private data. These advances in digital rights, however, could threaten the economic prosperity of the continent. 


“Whoever dominates artificial intelligence dominates the world.” – Vladimir Putin


When GDPR began to apply in 2018, companies were already warning that complying with its strict data-protection conditions would be an obstacle to technological innovation. Among the most common arguments against GDPR are that it reduces competition, that compliance is too complicated, and that it limits Europe's potential to create “unicorns” – young startups valued at more than a billion dollars. Unicorn investment tends to flow to lightly regulated markets.

On the other hand, Brussels argues that its market of more than 500 million people – with guarantees of political stability and economic freedom – will keep attracting investors. The EU's Commissioner for Competition, Margrethe Vestager, added this year that the Commission would only intervene if the fundamental rights of European citizens were endangered.

Reconciling AI and privacy

Complying with GDPR presents an additional problem for the development of AI. AI systems need vast amounts of data to be trained, but European law limits businesses' capacity to obtain, share and use that data. Yet without such regulation, mass harvesting of data would compromise citizens' privacy. To strike a balance, GDPR has left a margin for AI development through sometimes vague wording in the legislation, according to the pro-privacy group European Digital Rights.

As expected, this precarious balance has delicate aspects. One of them is the principle of transparency, which gives citizens the right to access their data and to understand – in clear and concise terms – what is being done with it. Such transparency can be difficult to maintain, however, when those processing the data are in fact AI systems.

Businesses and AI developers have spent time ensuring so-called ‘explainability’ and ‘interpretability’, which is to say that a non-expert should be able to understand an AI system in layman's terms and recognise why it takes certain decisions and not others. It is not an easy task, since many of these systems work like “black boxes” – a commonly employed metaphor in the industry, implying that neither those who build the algorithm nor those who implement the decisions it recommends understand how it arrives at them.
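
One family of interpretability techniques probes a trained model from the outside. The sketch below is a minimal, hypothetical example (in Python with the open-source scikit-learn library; the feature names and data are invented) of “permutation importance”, which asks how much a black box relies on each input by scrambling inputs one at a time.

```python
# A minimal sketch of permutation importance, one common way to probe a
# "black box" from the outside. The feature names and data are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the kind of personal data a lender might process.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "age", "postcode_area", "browsing_hours"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Knowing which inputs matter, however, is not the same as knowing why the model combines them as it does – which is why the black-box metaphor persists.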


Transparency can be difficult to maintain when those processing the data are in fact AI systems


Another dilemma is the ‘right to be forgotten’. Celebrated as a GDPR victory for privacy, it obliges businesses to delete the data of anyone who requests it. In the case of AI systems, a business could, in theory, delete the personal data used to train an algorithm, but the “trace” that the data has left on the system would remain, making a total ‘forgetting’ impossible.
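
A toy sketch illustrates the problem (Python with scikit-learn; the data is synthetic and the scenario hypothetical, not any company's real pipeline): once a model has been trained, deleting the raw records does not change what the model learned from them.

```python
# A toy illustration of the "trace" problem: deleting the raw personal
# data does not erase what a trained model has already learned from it.
# Purely synthetic data; not any real company's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # hypothetical personal data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical outcome

model = LogisticRegression().fit(X, y)
coefficients_before = model.coef_.copy()

del X, y  # the "right to be forgotten": the raw records are deleted...

# ...but the fitted coefficients, distilled from those records, survive,
# and the model keeps making the same predictions as before.
assert np.array_equal(model.coef_, coefficients_before)
print(model.predict([[1.0, 0.2, -0.3]]))
```

Genuinely erasing that influence – researchers call it “machine unlearning” – generally means retraining the model from scratch without the deleted records.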

Is new European regulation the solution?

Although privacy and innovation may seem irreconcilable, all is not lost. In April 2021, the European Commission published a proposal to regulate artificial intelligence. Despite much criticism of its particulars, such as its refusal to prohibit facial-recognition systems outright, it is an innovative piece of legislation that obliges companies to open up their black boxes somewhat. As ever, a victory for data-protection activists has angered those who argue that transparency requirements restrain innovation and drive business elsewhere.


In parallel with this initiative, the European institutions agreed in October 2021 on the Data Governance Act. This law covers data re-use and creates public “data pools” and cooperatives, so that businesses can benefit from innovating in Europe. Businesses will be able to search for the data they need in these regulated spaces, rather than buying it from other companies or obtaining it through unethical channels such as online dumps. The law also permits “data donation” as a means of filling these pools – a break from the consensus that data is a commodity. It is a groundbreaking vision.

The world has yet to agree on AI regulation, but the EU could become a pioneer, with a law expected in 2022 or 2023 that would apply across its twenty-seven member states. It would set up a risk classification for AI systems: those used in healthcare, for example, would be classed as ‘high risk’, meaning tighter rules for those who develop and implement them. The European Data Protection Board and others claim that this new framework will allow for innovation. Its true effect will only become clear if it can solve the great dilemmas of transparency and the right to be forgotten.

👉 Original article at El Orden Mundial


This article has been produced within the Panelfit project, supported by the Horizon 2020 programme of the European Commission.



