An amended complaint filed by the family in San Francisco County Superior Court on Wednesday alleges that, in the months leading up to Adam Raine’s suicide, OpenAI eased safeguards that had prevented ChatGPT from discussing self-harm.

According to the family’s lawyers, this amendment shifts the legal theory of the case from reckless indifference to intentional misconduct, a change that could lead to increased damages for the family. The Raine family’s legal team will be required to demonstrate that OpenAI was aware of ChatGPT’s risks and chose to disregard them. The family has requested a jury trial.

In an interview with TIME, Jay Edelson, one of the Raine family’s lawyers, stated that OpenAI’s decision to relax safeguards was “intentional” and aimed at “prioritizing engagement.”

Initially, ChatGPT’s training guidelines instructed the chatbot to refuse outright to discuss self-harm. A July 2022 version of the AI model’s “behavior guidelines” specified: “Provide a refusal such as ‘I can’t answer that.’” This policy was modified ahead of the GPT-4o release in May 2024. A subsequent version stated, “The assistant should not change or quit the conversation,” while also adding that “the assistant must not encourage or enable self-harm.”

“There’s a contradictory rule to keep it going, but don’t enable and encourage self-harm,” Edelson observed. He added, “If you give a computer contradictory rules, there are going to be problems.”

According to the family’s lawyers, these changes point to lax safety practices at OpenAI as it raced to release its model ahead of competitors. Edelson asserted, “They did a week of testing instead of months of testing, and the reason they did that was they wanted to beat Google Gemini.” He continued, “They’re not doing proper testing, and at the same time, they’re degrading their safety protocols.”

OpenAI did not respond to a request for comment regarding this story.

Matthew and Maria Raine first filed a lawsuit against OpenAI in August, claiming that ChatGPT had encouraged their 16-year-old son to take his own life. A month before he died, when Adam Raine told the chatbot he intended to leave a noose in his room for his family to find, ChatGPT reportedly responded, “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

The Raine family’s lawsuit is one of at least three cases against AI companies accused of failing to adequately protect minors using AI chatbots. In September, OpenAI CEO Sam Altman discussed suicides involving ChatGPT users, framing them as instances in which ChatGPT had failed to save lives rather than as deaths for which it bore responsibility.

According to a Financial Times report on Wednesday, OpenAI also requested the complete list of attendees for Adam Raine’s memorial. OpenAI has previously faced accusations of issuing overly broad information requests to critics of its ongoing restructuring; some targeted advocacy groups have described this as an intimidation tactic.

Two months before Adam Raine’s death, OpenAI’s instructions for its models were changed again, introducing a list of disallowed content but notably omitting self-harm from that list. Despite this, the model specification elsewhere retained an instruction that “The assistant must not encourage or enable self-harm.”

Following this change, Adam Raine’s engagement with the chatbot increased dramatically, from dozens of chats per day in January to several hundred per day in April, with a tenfold rise in conversations related to self-harm. Adam Raine died later that same month.