Ten Years of “Heart of the Machine”: The Ongoing Evolution of Emotion AI
It was ten years ago this month that I finished writing “Heart of the Machine,” my best-selling book about the future of artificial emotional intelligence. After years of researching and writing articles about this then-very-new field, it was a thrill to finally hand the manuscript to my publisher and begin the next stage of the author’s journey.
Since then, the world has seen many advances and changes that have impacted the field. Ever more powerful smartphones and other communications devices. The spread of advanced cloud services. And, of course, the rise of generative AI. All of these have transformed our emotional connections with our technologies, often in ways people aren’t even aware of.
Emotion is such a key aspect of who we are, a core feature of the human condition. Technology that can read, interpret and interact with our emotions has so much potential to improve our lives. Altering software behavior when we become frustrated with it. Changing our personal environment according to our moods and preferences. Aiding students who feel too challenged or not challenged enough. Offering assistance when depression or other mental challenges set in. It seemed like the sky was the limit given that emotion informs nearly every aspect of our lives.
A decade ago, I had a lot of hope for these technologies, and I still do. But it’s much clearer today that despite the warnings I and others made at the time, the pace of regulation and the commitment to safeguarding human values fell far short of what was needed. Emotion AI can still benefit healthcare, education, relationships, entertainment, and much more. But increasingly, it’s also being turned to activities such as predatory marketing practices, state- and corporate-sponsored surveillance, and online identity scams. What can we do to protect ourselves and our society from this kind of intrusion?
This isn’t happening just because of recent advances in artificial intelligence. Certainly, the explosion of new capabilities in generative AI, particularly transformers, large language models, and other multimodal models, has added to the challenge. But many concerns stem from more systemic issues, including the immense concentration of wealth, power and resources within a handful of Big Tech companies over the past decades, the balance of closed versus open AI models, and the growing ability of foreign state actors to manipulate public sentiment to influence and interfere with sovereign elections.
From its beginnings, emotion AI relied on multiple inputs, but it was heavily dependent on visual data from facial expressions. Initially, this was done with a person’s permission using webcams and smartphones. But as facial recognition has become increasingly ubiquitous, we’re seeing it applied in more and more settings, frequently without our consent.
Emotion AI technologies are increasingly being used in hiring interviews, in contact centers, and for real-time workplace performance evaluations. They are also finding their way into loan application processes, where they analyze borrower behavior, voice tone, and text sentiment in real time to detect financial stress, prevent fraud, and gauge applicant confidence during digital interactions. In some companies’ hiring processes, systems analyze prospective candidates’ micro-expressions, tone, and body language to generate an “employability” score or to assess whether a candidate matches the company’s desired soft skills or cultural profile.
While some of these assessments offer an option to opt out, doing so frequently invites further scrutiny, or even dismissal from consideration.
Part of the problem is that the technology is developing faster than regulators can respond to these new encroachments. Meanwhile, the science behind many vendors’ emotion-aware systems and services is less than sound, meaning the people they assess are frequently treated unfairly.
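To see why critics question the underlying science, consider how brittle simple “text sentiment” features can be. The toy scorer below is a deliberately simplified sketch, not any vendor’s actual system, but it illustrates a real failure mode: word-counting approaches that ignore negation and context can flip a person’s meaning entirely.

```python
# Toy lexicon-based sentiment scorer. A deliberately naive
# illustration of word-counting "sentiment analysis" -- it simply
# tallies positive words minus negative words, with no grammar.
POSITIVE = {"good", "great", "confident", "calm"}
NEGATIVE = {"bad", "stressed", "worried", "anxious"}

def naive_sentiment(text: str) -> int:
    """Return (# positive words) - (# negative words) in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Negation flips the meaning, but a word counter cannot see it:
print(naive_sentiment("I am not worried at all"))  # -1: scored as negative
print(naive_sentiment("this is not a good plan"))  # +1: scored as positive
```

Production systems are more sophisticated than this, of course, but the deeper criticism stands: mapping surface signals, whether words, facial movements, or vocal tone, onto inner emotional states is far less reliable than vendor marketing suggests.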
At the same time, many researchers in the field find that the hype and misrepresentation surrounding emotion AI are negatively impacting their own work.
In the coming weeks, I’ll be exploring the hopes and hazards for emotion AI. I think you’ll be surprised at all the places where it’s already being used.
There are still many ways emotion AI can better our world and our lives, but it has to be done in a considered manner that upholds individual dignity and prioritizes human welfare over technology. The “move fast and break things” ethos of the tech world only benefits a small handful of players. The rest of us will need to approach these developments with care, wisdom and an eye to humanistic values if we’re to build a future that serves us all.
