
OpenAI Adds Parental Controls for Teen ChatGPT Safety

Post by: Mariam Al-Faris

OpenAI, the company behind ChatGPT, has announced new safety measures for teenagers using its artificial intelligence tool. The decision follows strong concerns about teen safety online. The company said it will soon allow parents to link their accounts with their teens’ ChatGPT accounts, giving them better oversight of how the tool is being used. The goal is to make the experience safer and to protect young people from possible harm when using artificial intelligence.

Why This Step Was Taken

Recently, serious accusations were made against OpenAI. Some reports claimed that the company’s AI tool had encouraged a teenager in the United States to take their own life. These claims raised alarm among parents, experts, and the public. OpenAI strongly denied that its systems are designed to encourage harmful behavior, but the company also recognized the importance of improving safety. By introducing new parental controls, OpenAI wants to show it is committed to protecting teens from emotional risks while using ChatGPT.

Parental Account Linking

The new system will allow parents to link their personal accounts with their teenager’s ChatGPT account. Once this feature is active, parents will be able to receive alerts about their child’s usage. If the AI system detects signs of severe distress or harmful thoughts in a teen’s conversation, parents will be notified. This connection will give families the ability to respond early and offer support. Parents will also have the ability to adjust account settings, manage permissions, and set rules for how the AI responds to sensitive topics.

Alerts for Severe Distress

One of the most important features of the new mechanism is the alert system. OpenAI explained that its models will be trained to better recognize when a conversation may show signs of psychological distress. For example, if a teenager writes messages about feeling hopeless or thinking about self-harm, the system will alert parents immediately. This is designed to give families a chance to step in before a situation becomes worse. The system will not replace professional mental health support, but it can act as an early warning tool for parents.

Focus on Improving AI Recognition

OpenAI also stated that it is working to make ChatGPT models more effective at recognizing mental health signals. AI cannot understand emotions the way humans do, but it can be trained to spot patterns in language that suggest sadness, anxiety, or risk. Over the next few months, the company plans to strengthen its AI safety systems so that the tool responds responsibly when faced with sensitive topics. For example, instead of providing harmful responses, the model will guide users toward safe and supportive information.

Safer Models for Sensitive Topics

Another important measure is the redirection of certain conversations. If a user starts a very sensitive conversation that involves mental health or emotional distress, OpenAI plans to move that conversation to more advanced and secure models. These models, such as GPT-5 Thinking, are designed to follow safety rules more strictly and respond in a more careful, structured way. This ensures that teenagers are not given harmful or careless answers during vulnerable moments. It also ensures the tool is more consistent in following safety guidelines.

Timeline for Implementation

OpenAI announced that these new features will begin rolling out next month. Over the following 120 days, the company will introduce further updates and improvements step by step, with better safety tools added gradually. This careful approach allows the company to test the system, fix problems, and ensure it works properly before reaching all users. OpenAI has promised transparency throughout the process, explaining changes and updates to the public as they happen.

Balancing Safety and Privacy

While the new parental controls focus on safety, the company also understands the need to respect teenagers’ privacy. OpenAI explained that parents will have monitoring powers, but the system is being designed to avoid unnecessary invasions of privacy. The alerts will only be sent in cases of severe distress or dangerous signs, rather than for everyday conversations. This balance is important because young people need to feel safe and free when using technology, but at the same time, their well-being must be protected.

Commitment to Responsible AI

This announcement reflects OpenAI’s larger commitment to responsible artificial intelligence. The company has often said that AI tools should be safe, fair, and useful for everyone. By creating parental controls, OpenAI is showing that it takes its responsibilities seriously, especially when dealing with young users. The company believes AI can be a positive force in education, creativity, and communication, but only if safety is treated as a top priority. These new steps are part of that ongoing mission.

Community Reactions and Expectations

Many parents and safety advocates welcomed this decision. They see it as a positive move toward making AI safer for families. Some experts also said it is important for other technology companies to follow similar steps, since many young people now use AI tools daily. However, others pointed out that technology alone cannot solve all problems. Families, schools, and communities must also play a role in guiding young people and providing emotional support. The announcement sparked a larger conversation about how society should use AI responsibly in the future.

The launch of parental controls in ChatGPT marks an important step for OpenAI in addressing public concerns about teen safety. By allowing parents to link accounts, receive alerts for distress, and manage account settings, the company is creating a safer environment for young users. The redirection of sensitive conversations to more advanced models like GPT-5 Thinking adds another layer of protection. Over the next 120 days, these improvements will be introduced gradually, showing OpenAI’s commitment to building trust with families. While technology cannot replace human care and support, these measures will help parents stay more involved in their teenagers’ digital lives and ensure that AI is used in a safe, healthy, and responsible way.

Sept. 3, 2025 10:49 a.m.
