AI now tracks worker emotions, not just productivity

WASHINGTON, UNITED STATES — A new wave of artificial intelligence (AI) is reshaping United States workplaces — and it is no longer just measuring what employees do. It is judging how they feel while doing it.
According to a report from The Atlantic, companies like MetLife are using voice analysis to score call-center agents, Burger King is piloting an AI chatbot named “Patty” inside employee headsets to evaluate friendliness, and platforms like HireVue are scanning facial expressions during job interviews.
The global emotion AI market is projected to triple to $9 billion by 2030, even as scientists and civil rights groups warn the underlying science is flawed and the bias is real. For U.S. business leaders, the technology marks a new and uncomfortable chapter in workplace surveillance.
How emotion AI is climbing the corporate ladder
The shift is no longer confined to call centers and trucking cabs. Slack integration Aware monitors messages for “sentiment and toxicity,” MorphCast’s Zoom extension tracks meeting attention and positivity in real time, and Framery has tested outfitting its office phone-booth chairs with biosensors measuring heart rate, breathing and nervousness.
Writer Cory Doctorow’s “Shitty Technology Adoption Curve” predicted exactly this trajectory — surveillance tools begin with precarious workers and climb upward.
“This is the new era of employee surveillance: invisible, AI-supercharged, always on,” the report said.
That sentence reframes the conversation for U.S. executives. Companies adopting emotion AI to enforce agreeability are not just measuring performance — they are demanding emotional performance, and importing the legal, ethical and cultural risks that come with it.
Why the science behind emotion AI is shaky at best
The deeper problem is that emotion AI is built on unstable ground. Many products rely on Paul Ekman’s “basic emotions” theory, which has been challenged for decades as overly simplistic.
Neuroscientist Lisa Feldman Barrett notes that in the U.S., people scowl when angry only about 35% of the time — meaning an AI that reads a scowl as anger, or its absence as calm, gets it wrong far more often than it gets it right.
A 2018 study also found emotion AI rated Black NBA players as angrier than white teammates, even when smiling.
“When it comes to emotion, variation is the norm,” Barrett said.
For U.S. outsourcing firms, that line points to a real opening. Companies caught between adopting emotion AI for efficiency and avoiding the legal, ethical and human costs of getting it wrong need partners who can deliver scalable, human-led service operations — customer support, HR functions and quality assurance — without turning their workforce into subjects of constant emotional scoring.
Outsourcing providers that build offerings around trained, empowered teams will capture contracts from American executives who recognize that performance built on surveillance is fragile.
The future of work belongs to companies that respect what their employees do — and to the partners helping them avoid the trap of watching how they feel.

Independent
