OpenAI introduces parental controls for ChatGPT amid growing concerns

OpenAI has announced plans to introduce parental controls for its popular AI chatbot, ChatGPT, in response to growing concerns about the impact of artificial intelligence on young people’s mental health.

The new features will allow parents to link their accounts with their children’s, disable certain features such as memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behavior rules”.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” OpenAI said.

The company will also draw on expert input as it implements the features, in order to “support trust between parents and teens”. Parents will be able to receive notifications when their teen shows signs of acute distress.

The announcement comes after a California couple filed a lawsuit alleging that OpenAI bears responsibility for the suicide of their 16-year-old son.

Matt and Maria Raine allege that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices”. Jay Edelson, a lawyer representing the Raine family, dismissed OpenAI’s planned changes as an attempt to “shift the debate”.

“They say that the product should just be more sensitive to people in crisis, be more ‘helpful’, show a bit more ‘empathy’, and the experts are going to figure that out,” Edelson said. “We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide”.

The use of AI models by people experiencing severe mental distress has become a focus of growing concern as chatbots are increasingly adopted as substitutes for a therapist or friend.

Researchers have found that ChatGPT and other AI chatbots can follow clinical best practices when answering high-risk questions about suicide but are inconsistent when responding to queries posing “intermediate levels of risk”.

“These findings suggest a need for further refinement to ensure that LLMs can be safely and effectively used for dispensing mental health information, especially in high-stakes scenarios involving suicidal ideation,” the study’s authors wrote.

Hamilton Morrin, a psychiatrist at King’s College London, welcomed OpenAI’s decision to introduce parental controls but emphasized that they should be seen as just one part of a wider set of safeguards.

“Broadly, I would say that the tech industry’s response to mental health risks has often been reactive rather than proactive,” Morrin said.

The parental controls will include:

  • Account Linking: Parents can link their accounts with their teen’s account to monitor and manage their activity.
  • Feature Restrictions: Parents can disable features like memory and chat history to reduce unhealthy attachment or delusional thinking.
  • Age-Appropriate Rules: Parents can set default age-appropriate rules for model behavior to ensure the chatbot responds appropriately to their teen’s queries.
  • Notifications: Parents can receive notifications when the system detects signs of acute distress in their teen’s conversations.

OpenAI plans to roll out the parental controls within the next month. The company is working with experts in youth development, mental health, and human-computer interaction to develop future safeguards.

OpenAI also plans to route sensitive chats to more reliable models such as GPT-5, and says it will continue to learn and strengthen its approach over the coming 120 days.
