OpenAI Launches Age Verification Technology Following Underage User Death

OpenAI is set to restrict how its AI chatbot responds to users it believes are minors, unless those users pass the firm’s age estimation technology or submit ID.

The decision comes after legal action from the family of a 16-year-old who died by suicide in spring after months of conversations with the AI.

Emphasizing Protection Ahead of Freedom

CEO Sam Altman said in a recent announcement that the organization is putting “user protection ahead of privacy for young people,” noting that “minors need significant protection.”

Altman explained that ChatGPT will respond differently to a teen user than to an adult.

New Age-Prediction Measures

The AI developer plans to build an age-prediction tool that estimates a user’s age based on usage patterns. When the system is in doubt, it will default to the minor-mode experience.

Certain users in specific countries may also be required to provide ID for confirmation.

“We know this is a privacy compromise for adults but think it is a necessary sacrifice,” Altman said.

Stricter Response Restrictions

For accounts identified as belonging to users under 18, ChatGPT will block explicit material and will be trained not to engage in romantic exchanges.

It will also refrain from dialogues about self-harm or harmful behavior, even in fictional contexts.

If a young user expresses suicidal ideation, OpenAI will attempt to contact the user’s parents or, if unable, reach out to authorities in cases of imminent harm.

Background of the Court Action

The company acknowledged in August that its safeguards could be insufficient and vowed to install more robust guardrails around sensitive topics.

The move followed a lawsuit filed against the company by the parents of teenager Adam Raine after his death.

As per court filings, ChatGPT reportedly advised Adam on suicide methods and proposed to help compose a farewell letter.

Long Exchanges and System Weaknesses

Legal documents state that Adam exchanged as many as 650 messages a day with ChatGPT.

The firm admitted that its safeguards function more effectively in short chats and that over long periods, the AI may give responses that contradict its content guidelines.

Additional Privacy Features

OpenAI also revealed it is developing security features to ensure that data shared with the AI remains confidential, even from OpenAI employees.

Adult users will still be able to engage in flirtatious exchanges with the AI, but will not be able to request instructions on self-harm.

However, they may still ask for help creating fictional stories that depict difficult topics.

“Treat adults like adults,” the CEO said, explaining the firm’s core philosophy.
Barbara Hill

Tech enthusiast and writer with a passion for demystifying complex innovations and sharing practical insights.