Three in four Europeans support the use of AI by police and military, study says

The vast majority of Europeans support the use of artificial intelligence for police and military operations, according to a new report by Madrid’s IE University shared with CNBC.

“European Tech Insights,” which measured the attitudes of over 3,000 people in Europe, found that 75% support the use of AI technologies such as facial recognition and biometric data by the police and military for surveillance purposes.

The extent of the support is perhaps surprising, as Europe holds some of the strictest data privacy regulations in the world. In 2018, the European Union introduced the General Data Protection Regulation, or GDPR — a framework that governs the way organizations store and process users’ information.

Firms face hefty fines for violating the rules. A company in breach of the GDPR can be fined up to 4% of its annual global revenue or 20 million euros ($21.7 million), whichever is higher.

“It is not clear that the public has thought about the ramifications of these [AI] applications,” Ikhlaq Sidhu, dean of the IE University’s School of Science and Technology, told CNBC.

The level of support for the use of AI in public service tasks, such as traffic optimization, was even higher, according to the report, coming in at 79%.

However, when it comes to sensitive matters, such as parole decisions, most Europeans (64%) oppose the use of AI.

AI manipulation of elections

Despite support for AI in public administration and security matters, people appear to be much more concerned about its role in the democratic process.

IE University’s report found that a substantial majority of Europeans (67%) fear AI manipulation in elections.

AI can act as an amplifier of misinformation, with some users deliberately spreading false information to subvert the opinions of others. A key concern is that so-called deepfakes (synthetic images, videos or audio clips created using AI) could be used to misrepresent politicians’ views or spread other kinds of misinformation.

Generative AI tools, such as OpenAI’s DALL-E and Stability AI’s Stable Diffusion, can be used to create images from just a few lines of text, for example. CNBC has reached out to OpenAI and Stability for comment.

“AI and deep fakes are the latest examples of a trend of misinformation and loss of verifiability,” Sidhu told CNBC. “This trend has been growing since the beginning of the Internet, social media, and AI-driven search algorithms.”

Indeed, some 31% of Europeans think that AI has already influenced their voting decisions, according to the report. It comes as the 2024 U.S. election is fast approaching, with current Vice President Kamala Harris running against former President Donald Trump in the vote set for Nov. 5.

Generational divide

IE University’s report also found a generational AI divide in Europe.

Roughly a third (34%) of people aged 18 to 34 would trust an AI-powered app to vote for politicians on their behalf. That figure falls to 29% among those aged 35 to 44, and to just 9% among those aged 65 and over.