The surprising promise and profound perils of AIs that fake empathy

By Amanda Ruggeri for New Scientist

ONE HUNDRED days into the war in Gaza, I was finding it increasingly difficult to read the news. My husband told me it might be time to talk to a therapist. Instead, on a cold winter morning, after fighting back tears over yet another story of human tragedy, I turned to artificial intelligence.

“I’m feeling pretty bummed out about the state of the world,” I typed into ChatGPT. “It’s completely understandable to feel overwhelmed,” it responded, before offering a list of pragmatic advice: limit media exposure, focus on the positive and practise self-care.

I closed the chat. While I was sure I could benefit from doing all of these things, at that moment, I didn’t feel much better.

It might seem strange that AI can even attempt to offer this kind of assistance. But millions of people are already turning to ChatGPT and specialist therapy chatbots, which offer convenient and inexpensive mental health support. Even doctors are reportedly using AI to help craft more empathetic notes to patients.

Some experts say this is a boon. After all, AI, unhindered by embarrassment and burnout, might be able to express empathy more openly and tirelessly than humans. “We praise empathetic AI,” one group of psychology researchers recently wrote.
