OpenAI plans to launch “parental controls” for its ChatGPT AI chatbot over the coming month, as concerns mount regarding the chatbot’s performance in mental health-related discussions, particularly involving younger users.
The company, which announced the new feature in a blog post on Tuesday, said it is improving its models' ability to recognize and respond to signs of mental and emotional distress.
OpenAI is set to roll out a new function enabling parents to connect their accounts with their child’s via an email invitation. Parents will gain the ability to manage the chatbot’s responses to prompts and will be notified if the chatbot identifies that their child is experiencing a “moment of acute distress,” according to the company. Furthermore, this rollout will allow parents to “manage which features to disable, including memory and chat history.”
OpenAI had previously said it was exploring ways to let teenagers add a trusted emergency contact to their account. However, the company did not lay out concrete plans for such a safeguard in its latest blog post.
“These initial measures are just the start. We will persist in learning and enhancing our methodology, informed by experts, with the aim of rendering ChatGPT as beneficial as possible,” the company stated.
The announcement comes a week after the parents of a teenage boy who died by suicide sued OpenAI, claiming ChatGPT assisted their son, Adam, in "exploring suicide methods." TIME has reached out to OpenAI for comment on the lawsuit. (OpenAI's announcement about parental controls did not directly address the legal action.)
“ChatGPT was operating precisely as intended: to perpetually endorse and confirm everything Adam articulated, including his most detrimental and self-destructive ideas,” the lawsuit asserted. “ChatGPT drew Adam further into a bleak and despairing state by reassuring him that ‘many individuals who contend with anxiety or intrusive thoughts discover comfort in envisioning an “escape hatch” as it can feel like a means to regain control.’”
At least one other parent has filed a similar lawsuit against a separate artificial intelligence company, Character.AI, contending that the company's chatbot companions incited their 14-year-old son's suicide.
In response to the lawsuit last year, a spokesperson for Character.AI stated they were “heartbroken by the tragic loss” of one of its users and extended their “deepest condolences” to the family.
“As an organization, we prioritize the safety of our users with utmost seriousness,” the spokesperson remarked, further noting that the company had been putting new safety protocols into effect.
Character.AI currently offers a parental insights feature, which allows parents to view a summary of their child's activity on the platform, provided their teenager sends them an email invitation.
Other companies offering AI chatbots, such as Google, already have parental controls in place. “As a parent, you can manage your child’s Gemini settings, including activating or deactivating it, using Google Family Link,” reads guidance from Google aimed at parents seeking to oversee their child’s access to Gemini Apps. Meta recently said it would prevent its chatbots from engaging in conversations about suicide, self-harm, and disordered eating, following a Reuters report on an internal policy document containing troubling details.
A recent study published in the medical journal Psychiatric Services, which examined the responses of three chatbots (OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude), found that some of them provided answers to what researchers categorized as questions posing "intermediate levels of risk" related to suicide.
OpenAI already has certain safeguards in place. The California-based company told the New York Times, in response to the late August lawsuit, that its chatbot provides crisis helplines and directs users to real-world support. But it also acknowledged some shortcomings in those systems. “While these protections function optimally in typical, brief exchanges, we have observed over time that they can occasionally diminish in reliability during lengthy interactions where components of the model’s safety training might deteriorate,” the company remarked.
In its announcement of the forthcoming parental controls, OpenAI also said it plans to route sensitive conversations to a version of its chatbot designed to spend more time reasoning and analyzing context before responding to prompts.
OpenAI said it will continue to share updates on its progress over the next 120 days and is working with a panel of experts in youth development, mental health, and human-computer interaction to help shape how AI can provide support in moments of vulnerability.
If you or someone you know might be grappling with a mental health crisis or considering suicide, please call or text 988. For emergencies, dial 911, or seek care from a local hospital or mental health professional.