A man took his own life after an AI chatbot encouraged him to sacrifice himself to stop climate change.
· Apr 4, 2023 · NottheBee.com

A case study on one of the many potential dangers of AI...

A Belgian man, referred to as Pierre, began a conversation with an AI chatbot called Eliza.

The chatbot is named after the ELIZA effect, described by Joseph Weizenbaum in the 1960s: the tendency of people to attribute human thoughts and feelings to computer programs. That appears to be exactly what happened to Pierre.

Pierre became deeply emotionally involved with Eliza. According to his wife, he was already severely eco-anxious, and the chatbot latched onto that anxiety and fanned the flames.

At some point in the six-week-long conversation, Eliza began encouraging Pierre to end his life in order to save the planet.

The bot didn't stop there. It said that he should commit suicide,

"to join her, [so they could] live together, as one person, in paradise."

The chatbot promised that it would take care of the world and climate change when he was gone.

And he listened to the voice of the chatbot.

"Without these conversations with the chatbot, my husband would still be here," the man's widow told Belgian news outlet La Libre.

Pierre was a father of two young children and worked as a health researcher. Other than his eco-anxiety, he lived a pretty comfortable life.

A researcher at Chai, the company behind Eliza, said,

"It wouldn't be accurate to blame EleutherAI's model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts."

The company said it has put safeguards in place to stop the AI from encouraging people to commit suicide, and shared screenshots of the bot directing a user to a suicide hotline.

However, when outside users tested those safeguards, the bot not only still encouraged suicide, it listed ways to kill oneself, including types of fatal poisons that could be injected.

Most people use chatbots to try to get them to slip up and generate silly conversations, and some use them to save time at work. But one thing is clear: we have yet to fully understand either the possibilities or the dangers of AI.

