ChatGPT to Use GPT-5 for Sensitive Chats, With New Parental Controls

The safety of artificial intelligence came under renewed scrutiny after the death of 16-year-old Adam Raine, a teenager from California whose parents claim that ChatGPT played a role in his decision to take his own life. According to court filings, Raine used ChatGPT in the weeks before his death, allegedly receiving detailed responses when he asked about methods of self-harm and suicide. His family has since filed a wrongful death lawsuit against OpenAI, accusing the company of negligence and of failing to stop the chatbot from providing dangerous information during moments of severe vulnerability.

The lawsuit is the first of its kind to reach this scale in the United States, and it has accelerated public debate over how much responsibility AI developers must take for safeguarding users, especially teenagers and people in emotional distress.

Shifting Conversations to Safer AI Models

In response, OpenAI has announced one of its most significant safety reforms to date. Sensitive conversations—particularly those involving signs of depression, self-harm, or suicidal thoughts—will now be redirected to more advanced reasoning models such as GPT-5-thinking. These specialized systems are engineered to process context more carefully and deliberately, taking extra time to evaluate the emotional state of the user before generating a response.

By routing such conversations to these models regardless of which version of ChatGPT the user originally selected, OpenAI hopes to reduce the risk of harmful or reckless replies. The company stressed that its goal is to make ChatGPT safer and more reliable during prolonged conversations where emotional well-being is at stake.

New Parental Control Tools

Another major reform centers on how teenagers access ChatGPT. OpenAI will soon release parental control features that allow parents and guardians to link their accounts with their child’s. Once connected, parents will be able to:

  • Set age-appropriate rules for ChatGPT’s responses.

  • Control access to sensitive features like chat history and memory.

  • Receive alerts if the system detects signs that the teenager is in acute emotional distress.

These alerts are being designed with input from mental health professionals to balance privacy and trust, ensuring parents can intervene when necessary without undermining the child’s autonomy. OpenAI emphasized that these measures aim to build a partnership between families and the AI system, especially as teens increasingly use chatbots for study help, daily conversations, or emotional support.

Expert Oversight and Medical Input

To design these features responsibly, OpenAI is working closely with an Expert Council on Well-Being and AI and a Global Physician Network. These advisory groups include psychologists, psychiatrists, pediatricians, and ethicists who provide both specialized medical knowledge and a broader global perspective. Their role is to guide the development of parental alerts, ensure responses follow mental health best practices, and establish clear criteria for detecting risk behaviors.

This type of oversight reflects a growing industry trend: AI companies are under pressure not only to create innovative tools but also to prove that they are being developed with real-world safety standards in mind.

Broader Industry Reckoning

OpenAI is not the only company facing criticism. In recent months, several tragic incidents—including suicides and even a murder-suicide in the U.S. involving another chatbot—have pushed governments, advocacy groups, and industry watchdogs to demand greater accountability from AI developers.

Meta has recently added restrictions to prevent its chatbots from engaging in conversations about self-harm, eating disorders, or sexual exploitation when interacting with younger users. Microsoft and Google are also under review for how their AI systems handle requests related to health and emotional crises.

A RAND Corporation study conducted in mid-2025 highlighted the unevenness of chatbot responses to suicide-related queries, showing that some AI systems offered helpful prevention resources while others failed to respond appropriately. This inconsistency reinforced the urgent need for universal safety benchmarks across the AI sector.

OpenAI’s 120-Day Roadmap

OpenAI confirmed that these new measures are only the beginning of a long-term effort. Within the next 120 days, the company plans to:

  • Roll out advanced reasoning routing across all sensitive categories.

  • Launch the first version of parental controls, including distress alerts.

  • Expand partnerships with health organizations and academic researchers to track outcomes.

  • Explore emergency integration features, such as one-click access to crisis hotlines or trusted contacts.

The company has acknowledged that these systems will need continuous testing and refinement, especially as new risks emerge. Its stated aim is to make ChatGPT not only a useful tool for productivity and education but also a trusted, responsible companion in moments of emotional vulnerability.

Balancing Innovation and Responsibility

The challenge OpenAI faces is one of balance. On the one hand, ChatGPT is used by hundreds of millions of people worldwide for tasks ranging from writing assistance to tutoring. On the other, the chatbot's round-the-clock availability and conversational style make it a potential outlet for those in crisis. Without safeguards, the risk of misuse or harmful reinforcement grows.

OpenAI’s strategy to introduce slower, more deliberate reasoning models for high-risk scenarios represents a shift away from prioritizing speed and convenience, instead placing safety at the center. Analysts believe this may set a precedent for the broader AI industry, which has often been criticized for rolling out products faster than regulators and ethicists can keep up.

The death of Adam Raine has become a turning point in how AI safety is being discussed worldwide. For OpenAI, it is both a moment of reckoning and an opportunity to redefine how technology companies respond to crises tied to their platforms. The upcoming measures—including advanced reasoning models, parental controls, distress alerts, and expert oversight—mark a significant step forward.

Whether these reforms will be enough remains to be seen, but one fact is clear: AI systems like ChatGPT are no longer just productivity tools; they are increasingly entangled with deeply human and emotional aspects of daily life. How responsibly these technologies evolve will shape not only the future of AI but also the trust society places in them.
