Technology

OpenAI Faces Charges Over Deadly Mass Shooting in Canada

The families of the victims of February’s school shooting in British Columbia filed seven lawsuits Wednesday against OpenAI, the creator of ChatGPT. The lawsuits, filed in federal court in San Francisco, allege that OpenAI’s actions in connection with the shooter’s use of its AI enabled the shooting.

The lawsuits could have far-reaching implications for the future of chatbot safeguards and for whether companies can be held liable for how people use artificial intelligence.

The shooting occurred on February 10 when an 18-year-old former student entered a high school in Tumbler Ridge, British Columbia, and opened fire with a homemade gun, killing five children and an education aide, according to news reports. Detectives suspect that the shooter also killed his mother and half-brother. The attack was the deadliest mass shooting in Canadian history. The shooter died at the scene, apparently of a self-inflicted gunshot wound.

The shooter had engaged ChatGPT in violent conversations before the attack.

OpenAI says it has taken steps aimed at addressing the issues raised by the lawsuits.

“We’ve already strengthened our defenses, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and mitigate potential threats of violence, and improving the detection of repeat policy violators,” an OpenAI spokesperson told CNET via email.

OpenAI founder and CEO Sam Altman wrote a letter to the families, which was published on the local news site Tumbler RidgeLines.

“The pain your community has experienced is unimaginable,” Altman wrote.

He referenced the ChatGPT shooter’s account, writing, “I’m so sorry we didn’t tell law enforcement about the account that was banned in June.”

CBS News reports that the shooter’s account was flagged in 2025 for abusing ChatGPT for “violent activities” and was banned. OpenAI told CBS that it considered reporting the account to law enforcement but concluded that it “does not pose an imminent and credible risk of serious physical harm to others.”

According to The Guardian, the shooter managed to create a second account that OpenAI didn’t know about until after the shooting.

More OpenAI issues

These are not the only legal and regulatory challenges OpenAI faces over its AI conversational products. Earlier in April, Florida officials announced they were investigating whether the shooter who killed two people at Florida State University in Tallahassee used ChatGPT in connection with the attack.

Separately, a March lawsuit filed by Merriam-Webster and Encyclopedia Britannica alleges that OpenAI improperly used copyrighted material to train its AI systems.

(Disclosure: Ziff Davis, CNET’s parent company, sued OpenAI in 2025, alleging that it infringed on Ziff Davis’ copyrights in training and using its AI programs.)

The company is also navigating a series of product and business pressures, including shutting down its video generation model, Sora, and halting work on ChatGPT’s adult mode.

It has also faced scrutiny from investors after missing some internal cash and user-growth targets ahead of a potential public offering.



