OpenAI Had Suspect on Radar but Didn’t Alert Police Before Attack

TORONTO

Months before an 18-year-old committed one of Canada’s worst school shootings, OpenAI flagged his account for “furtherance of violent activities” but decided not to alert police.

The company banned Jesse Van Rootselaar in June 2025 for violating its usage policy. But after internal debate, OpenAI determined the activity did not meet its threshold for law enforcement referral, which requires “imminent and credible risk of serious physical harm.”

Last week, Van Rootselaar killed eight people in remote British Columbia before dying by suicide.

After the shooting, OpenAI contacted the Royal Canadian Mounted Police with information about Van Rootselaar and his ChatGPT use. The Wall Street Journal first reported the disclosure.

The case raises fresh questions about when technology platforms should report potentially violent users to law enforcement.

By James Kisoo