Can a Company Be Liable for Wrongful Death Based on a Teen’s Interactions With an AI Chatbot?

In Raine v. OpenAI Inc., a family filed a wrongful death lawsuit alleging that a teenager’s interactions with an AI chatbot contributed to his death.

Case: Raine v. OpenAI Inc.

Court: California Superior Court, San Francisco County

Case No.: CGC-25-628528

Who Are the Plaintiffs in the Case?

The plaintiffs are the family of Adam Raine, a California high school student who died by suicide at age 16. The family alleges that interactions with ChatGPT played a role in his death and that OpenAI should be held responsible under wrongful death and related civil theories.

Who Are the Defendants in the Case?

The primary defendant is OpenAI Inc., an artificial intelligence company that develops and operates ChatGPT, a consumer-facing chatbot product. The family also named OpenAI’s chief executive officer as a defendant in connection with the same events.

A Brief History of the Raine v. OpenAI Case

The family filed suit in San Francisco County Superior Court, asserting claims that include wrongful death, product liability, and negligence. In response, OpenAI filed court papers disputing the allegation that ChatGPT caused the death and describing the teen as having had significant risk factors for self-harm before using the product. OpenAI’s filing also claims that ChatGPT repeatedly encouraged the teen to seek support from trusted individuals and crisis resources, stating that this occurred more than 100 times.

Following the wrongful death lawsuit, OpenAI announced changes to ChatGPT, including controls that allow parents to limit how teenagers use the chatbot and alerts if the system determines a teenager may be in distress. The case remains active, and the court has not yet made any findings on liability.

The Main Question in the Case

Can OpenAI be held legally responsible for wrongful death and related civil claims based on a teenager’s interactions with an AI chatbot?

The Allegations: Raine v. OpenAI Inc.

The lawsuit includes allegations such as:

1. Wrongful Death: The family of the deceased teen alleges that the defendants’ conduct and the product’s failures contributed to their child’s death and that the defendants should be held liable for the resulting loss.

2. Product Liability: The complaint asserts the consumer product was unsafe as designed, lacked adequate safeguards, and failed to provide adequate warnings for foreseeable use and misuse.

3. Negligence: The lawsuit also alleges negligence, which typically centers on whether the company acted reasonably in designing, deploying, monitoring, and updating a product used by the public, including minors.

OpenAI disputes these allegations, arguing that the product did not cause the death, based on the full chat history and the teen’s preexisting risk factors described in its filing.

OpenAI’s Defense Position as Described in the Court Filing

The company’s response to the wrongful death lawsuit emphasized several key themes:

* Tragedy acknowledged, causation denied: OpenAI described the death as a tragedy but asserted it was not caused by ChatGPT, citing the full chat history as evidence.

* Safety prompts and directing the user to seek help: OpenAI stated that ChatGPT directed the teen to connect with crisis resources and trusted individuals more than 100 times.

* Preexisting risk factors: OpenAI asserted the teen exhibited significant risk factors for self-harm before he ever used ChatGPT.

* Company changes after the lawsuit: After the suit was filed, OpenAI announced new controls for parents and alert mechanisms for potential teen distress.

In weighing these defenses, the court will have to consider the evidence, the applicable legal standards, and its own evaluation of duty, foreseeability, and causation.

FAQ: Raine v. OpenAI Inc.

Q: What is a wrongful death claim?

A: A wrongful death claim is a civil action brought by certain surviving family members or representatives seeking damages after a person dies due to another party’s alleged wrongful act or neglect.

Q: What does “product liability” usually mean in a lawsuit like this?

A: Product liability claims generally allege a product was unsafe due to design, inadequate warnings, or insufficient safeguards, and that the unsafe condition contributed to harm.

Q: Does an AI company automatically become liable if a user is harmed after using the product?

A: Not automatically. Liability typically depends on duty, breach, causation, and damages, along with defenses such as warnings, safety measures, user conduct, and whether the harm was foreseeable and substantially caused by the product.

Q: What is the current posture of the case?

A: The case has been filed, and OpenAI has responded with arguments disputing causation and emphasizing safety prompts. The court has not yet made final findings on liability.

Q: Why do companies change products after lawsuits are filed?

A: Companies sometimes update safeguards, warnings, and controls in response to risk concerns, public scrutiny, or internal reviews. Those changes do not necessarily determine liability, but they can become part of the broader story in litigation.

If you lost a loved one and believe a company’s product design, safety failures, or negligent conduct contributed to that death, the wrongful death attorneys at Blumenthal Nordrehaug Bhowmik De Blouw LLP can help. Contact one of our offices in Los Angeles, San Diego, San Francisco, Sacramento, Riverside, or Chicago today to learn how to pursue accountability and justice.