U.K. Watchdog Warns Against Emotion Analysis Technology


In a first-of-its-kind announcement, the Information Commissioner’s Office, the U.K.’s top data protection watchdog, has issued a strong warning to companies against using so-called “emotion analytics” technology, saying it is still “immature” and that the associated risks far outweigh any potential benefits.

“Developments in the biometrics and emotional AI market are immature. They may not work yet, or they may never work,” wrote ICO Deputy Commissioner Stephen Bonner. “While there are opportunities, the risks are greater at this time. At the ICO, we are concerned that incorrect analysis of data could lead to assumptions and judgments about an individual that are inaccurate and lead to discrimination.”

Emotion analysis, also known as emotion recognition or affect recognition, follows similar principles to better-known biometric techniques such as facial recognition, but is arguably even less reliable. These systems scan people’s facial expressions, voice tone, or other physical features and then try to infer mental states or predict how someone is feeling from those data points.

USC-Annenberg research professor Kate Crawford describes some of the pitfalls of this approach in her 2021 book Atlas of AI.

“The difficulty of automating the link between facial movements and basic emotional categories leads to the larger question of whether emotions can be adequately classified into a small number of discrete categories at all,” Crawford writes. “There is the persistent problem that facial expressions tell us little about our honest internal states, as anyone who has ever smiled without feeling truly happy can attest.”

Bonner goes on to say that “the only sustainable biometric applications” are those that are fully functional, accountable, and “scientifically backed.” Although the ICO has issued warnings about specific technologies in the past, including some that fall into the category of biometrics, Bonner told The Guardian that this week’s announcement is the first general warning about the ineffectiveness of an entire technology. In the article, Bonner called attempts to use biometric data to detect emotions “pseudoscientific.”

