OpenAI says teen who used ChatGPT to commit suicide “violated terms that prohibit discussing suicide or self-harm”

There have been far too many sad stories about AI chatbots encouraging people to kill themselves, but now, for the first time, we get to see how AI companies plan to defend themselves in court against this disturbing trend.

Just a few months ago, we reported on the sad death of 16-year-old Adam Raine and his parents' lawsuit against OpenAI.

OpenAI has filed an official response to the lawsuit, and here's their defense:

The [Terms of Use] provides that ChatGPT users must comply with OpenAI's Usage Policies, which prohibit the use of ChatGPT for 'suicide' or 'self-harm.'

Yep, according to OpenAI, it's not their fault ChatGPT convinced a 16-year-old kid to kill himself because killing yourself is against their Terms of Use.

And then they started hitting below the belt.

Under the TOU, users under 18 years of age are forbidden from using ChatGPT without the consent of a parent or guardian.

This second layer of defense essentially blames the parents for letting their son use the chatbot that convinced him to kill himself.

Without any age verification on the company's end, it would be hard to prove the parents were liable, but that didn't stop OpenAI's lawyers from digging into that defense, quoting logs from Raine's chats:

Adam Raine told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.

The most solid defense OpenAI's lawyers offered concerns Raine's medication:

In the weeks and months before his death, Adam Raine told ChatGPT that he was taking increasing doses of a medication, which he stated worsened his depression and made him suicidal.

That medication has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.

If that is truly the case, and not some AI hallucination, the family will have an uphill battle convincing a judge and jury that ChatGPT is entirely to blame here.

They might have to try to sue a pharmaceutical company too.
