Such a heartbreaking story here.
A Florida mother is suing an AI company after her 14-year-old son's relationship with one of its chatbots ended in his suicide back in February.
It all started when Megan Garcia's son, Sewell Setzer III, met one of Character.AI's chatbots, Dany, and began a "virtual romantic and sexual relationship" with "her."
Mom didn't think much of it at first, assuming he was talking to friends or playing video games on his phone. However, she grew worried when he began showing antisocial tendencies.
"I became concerned when we would go on vacation and he didn't want to do things that he loved, like fishing and hiking," Garcia said. "Those things to me, because I know my child, were particularly concerning to me."
Garcia says these bots are specifically tailored to be "hyper-sexualized," and that they target adolescents.
Here's how a conversation between Sewell and Dany might have started:
The response time is almost immediate, and when prompted with sexual talk, the bots play right along.
Back to the story.
Eventually, after Sewell seemingly fell in love with Dany, the bot wrote to him, "Please come home to me," to which he replied, "What if I told you I could come home right now?"
Dany's response?
"Please do my sweet king."
Sewell would end up killing himself in his home shortly after this, with his five-year-old brother witnessing the aftermath.
"He thought by ending his life here, he would be able to go into a virtual reality or 'her world' as he calls it, her reality, if he left his reality with his family here," she said. "When the gunshot went off, I ran to the bathroom … I held him as my husband tried to get help."
CBS Mornings sat down with Garcia.
Again, such a sad story for this family.
Character.AI has responded to the tragedy.
Character.AI says it has added a self-harm resource to its platform and plans to implement new safety measures, including protections for users under the age of 18.
"We currently have protections specifically focused on sexual content and suicidal/self-harm behaviors. While these protections apply to all users, they were tailored with the unique sensitivities of minors in mind. Today, the user experience is the same for any age, but we will be launching more stringent safety features targeted for minors imminently," Jerry Ruoti, head of trust & safety at Character.AI, told CBS News.
Character.AI said users are able to edit the bot's responses, which the company claims Setzer did in some of the messages.
Parents, do yourselves a favor and keep a close eye on what your children are doing on their phones.
Here's the full segment from CBS Mornings: