OpenAI staff debated alerting police over violent ChatGPT logs before school shooting

This article was written by AI based on multiple news sources.
In the months leading up to a deadly school shooting in Canada, a dozen OpenAI employees internally debated whether to alert law enforcement about a ChatGPT user who was describing gun violence scenarios. According to a report by the Wall Street Journal, the discussions took place in June 2025 after an automated review system flagged the user's posts. The user, later identified as 18-year-old Jesse Van Rootselaar, had engaged with ChatGPT over several days, describing the scenarios in detail. Some OpenAI staff viewed the messages as potential red flags for real-world violence and urged senior management to contact Canadian police, but OpenAI's leadership ultimately decided against reporting the matter to authorities at that time.
A company spokesperson said the activity did not meet the internal threshold for reporting, which requires a "credible and imminent risk of serious physical harm to others." Instead of contacting police, OpenAI suspended the user's account. The incident highlights the complex, often fraught decisions AI companies must make when balancing user privacy with public safety obligations. OpenAI trains its models to steer users away from real-world violence and operates a system in which conversations expressing intent to harm are flagged for human review; those reviewers are empowered to involve law enforcement if they determine there is an immediate risk.
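OpenAI has not published the internals of that pipeline, but its publicly documented Moderation API illustrates the general flag-then-human-review pattern the company describes: score a message against harm categories, and escalate high scores to a human queue rather than acting automatically. The sketch below is a minimal illustration of that pattern only; the threshold value and the review queue are hypothetical assumptions, not OpenAI's actual system or settings.

```python
# Illustrative sketch only. OpenAI's internal review pipeline is not public;
# this uses the publicly documented Moderation API to show the general
# flag-then-escalate pattern described above. REVIEW_THRESHOLD and the
# review queue are hypothetical stand-ins, not OpenAI's actual settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_THRESHOLD = 0.8               # hypothetical escalation cutoff
human_review_queue: list[dict] = []  # stand-in for a real case-management system


def screen_message(text: str) -> None:
    """Score a message for violent content and queue high-risk hits for humans."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    score = result.category_scores.violence
    if result.categories.violence and score >= REVIEW_THRESHOLD:
        # The model only triages; a human reviewer decides whether the
        # content represents a credible and imminent risk worth reporting.
        human_review_queue.append({"text": text, "violence_score": score})
```

The design point this reflects, and the one at issue in the reporting above, is that the automated score only triages: the judgment call about contacting authorities remains with human reviewers applying a policy threshold.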
The tragedy occurred on February 10, when Van Rootselaar was found dead, apparently from a self-inflicted injury, at the scene of a shooting rampage at a school in Tumbler Ridge, British Columbia. The Royal Canadian Mounted Police identified her as the suspect in an attack that killed eight people and wounded at least 25 others. Following the attack, OpenAI contacted the RCMP and is now cooperating with the investigation. ChatGPT was not the only platform where Van Rootselaar left digital traces: she also allegedly simulated a mass shooting on the gaming platform Roblox and took part in discussions on gun-enthusiast videos on YouTube.
This event brings into sharp focus the operational and ethical tensions facing AI service providers. Companies like OpenAI are on the front lines, tasked with interpreting often ambiguous user intent within the confines of their terms of service and legal frameworks. The decision not to report was based on a specific, high-threshold policy designed to avoid over-reporting and to protect user privacy. Yet, in retrospect, the logs represented a precursor to a catastrophic real-world event, raising difficult questions about whether current policies and risk-assessment capabilities are sufficient. The aftermath has likely prompted internal reviews at OpenAI and similar firms about how to better identify and act upon potential threats.
The broader implication for the AI industry is a renewed examination of content moderation protocols, especially for generative AI interfaces that can be used for planning or ideation. There is no simple technical or policy solution, as increasing surveillance and reporting could infringe on privacy and free expression, while under-enforcement could have deadly consequences. This incident may accelerate calls for clearer industry-wide standards or regulatory guidance on when and how AI companies should intervene with law enforcement. It also places a spotlight on the immense responsibility shouldered by the human reviewers and trust and safety teams within these organizations, who must make high-stakes judgment calls with limited information.
Key Points
- A dozen OpenAI staff discussed alerting Canadian police in June 2025 after a user described gun violence scenarios in ChatGPT.
- OpenAI management decided not to report the user, stating the activity did not meet the bar for a "credible and imminent risk of serious physical harm."
- The user, Jesse Van Rootselaar, was later identified as the suspect in a February school shooting in British Columbia that killed eight people.
This case forces a critical examination of the protocols AI companies use to assess and act on violent threats, highlighting the tension between user privacy and public safety.