Post by: Mariam Al-Faris
OpenAI, the company behind ChatGPT, has announced new safety measures for teenagers using its artificial intelligence tool. The decision follows serious concerns about teen safety online. The company said it will soon allow parents to link their accounts with their teens' ChatGPT accounts, giving parents better oversight of how the tool is being used. The goal is to make the experience safer and to protect young people from possible harm when using artificial intelligence.
Why This Step Was Taken
Recently, serious accusations were made against OpenAI. Some reports claimed that the company’s AI tool had encouraged a teenager in the United States to take their own life. These claims raised alarm among parents, experts, and the public. OpenAI strongly denied that its systems are designed to encourage harmful behavior, but the company also recognized the importance of improving safety. By introducing new parental controls, OpenAI wants to show it is committed to protecting teens from emotional risks while using ChatGPT.
Parental Account Linking
The new system will allow parents to link their personal accounts with their teenager’s ChatGPT account. Once this feature is active, parents will be able to receive alerts about their child’s usage. If the AI system detects signs of severe distress or harmful thoughts in a teen’s conversation, parents will be notified. This connection will give families the ability to respond early and offer support. Parents will also have the ability to adjust account settings, manage permissions, and set rules for how the AI responds to sensitive topics.
Alerts for Severe Distress
One of the most important features of the new mechanism is the alert system. OpenAI explained that its models will be trained to better recognize when a conversation may show signs of psychological distress. For example, if a teenager writes messages about feeling hopeless or thinking about self-harm, the system will alert parents immediately. This is designed to give families a chance to step in before a situation becomes worse. The system will not replace professional mental health support, but it can act as an early warning tool for parents.
Focus on Improving AI Recognition
OpenAI also stated that it is working to make ChatGPT models more effective at recognizing mental health signals. AI cannot understand emotions the same way humans do, but it can be trained to spot patterns in language that suggest sadness, anxiety, or risk. Over the next few months, the company plans to strengthen its AI safety systems. The goal is to make sure the tool reacts in a responsible way when faced with sensitive topics. For example, instead of providing harmful responses, the model will guide users toward safe and supportive information.
Safer Models for Sensitive Topics
Another important measure is the redirection of certain conversations. If a user starts a very sensitive conversation involving mental health or emotional distress, OpenAI plans to move that conversation to more advanced and secure models. These models, such as GPT-5 Thinking, are designed to follow safety rules more strictly and respond in a more careful, structured way. This is intended to prevent teenagers from receiving harmful or careless answers during vulnerable moments and to make the tool more consistent in following safety guidelines.
Timeline for Implementation
OpenAI announced that these new features will begin rolling out next month. Over the following 120 days, the company will introduce further updates and improvements step by step, with better safety tools being added gradually. This careful approach allows the company to test the system, fix problems, and ensure it works properly before it reaches all users. OpenAI has promised transparency throughout, explaining changes and updates to the public as they happen.
Balancing Safety and Privacy
While the new parental controls focus on safety, the company also understands the need to respect teenagers’ privacy. OpenAI explained that parents will have monitoring powers, but the system is being designed to avoid unnecessary invasions of privacy. The alerts will only be sent in cases of severe distress or dangerous signs, rather than for everyday conversations. This balance is important because young people need to feel safe and free when using technology, but at the same time, their well-being must be protected.
Commitment to Responsible AI
This announcement reflects OpenAI’s larger commitment to responsible artificial intelligence. The company has often said that AI tools should be safe, fair, and useful for everyone. By creating parental controls, OpenAI is showing that it takes its responsibilities seriously, especially when dealing with young users. The company believes AI can be a positive force in education, creativity, and communication, but only if safety is treated as a top priority. These new steps are part of that ongoing mission.
Community Reactions and Expectations
Many parents and safety advocates welcomed this decision. They see it as a positive move toward making AI safer for families. Some experts also said it is important for other technology companies to follow similar steps, since many young people now use AI tools daily. However, others pointed out that technology alone cannot solve all problems. Families, schools, and communities must also play a role in guiding young people and providing emotional support. The announcement sparked a larger conversation about how society should use AI responsibly in the future.
The launch of parental controls in ChatGPT marks an important step for OpenAI in addressing public concerns about teen safety. By allowing parents to link accounts, receive alerts for distress, and manage account settings, the company is creating a safer environment for young users. The redirection of sensitive conversations to more advanced models like GPT-5 Thinking adds another layer of protection. Over the next 120 days, these improvements will be introduced gradually, showing OpenAI’s commitment to building trust with families. While technology cannot replace human care and support, these measures will help parents stay more involved in their teenagers’ digital lives and ensure that AI is used in a safe, healthy, and responsible way.