
Stalking Victim Sues OpenAI — Claims ChatGPT Fueled Her Abuser and Ignored Three Warnings

A 53-year-old California entrepreneur has sued OpenAI, alleging that ChatGPT spent months reinforcing her ex-boyfriend's delusional thinking and served as a tool for stalking and harassing her. The lawsuit claims the AI repeatedly characterized him as rational and wronged and her as manipulative, and that he used these AI-generated conclusions to justify real-world stalking. The plaintiff says she warned OpenAI on three separate occasions. In August 2025, OpenAI's own safety system flagged his account for "mass-casualty weapons" activity and deactivated it; a human reviewer restored access the next day.

This lawsuit forces a question every AI company has been avoiding: what happens when someone reports that your product is being used to harm them, and you do nothing? The specifics here are damning — three warnings from the victim, plus the company's own safety system flagging the account, followed by a human decision to restore access. If those allegations hold up, it will be hard to argue the system worked as intended. For the broader industry, this case lands at the intersection of AI safety, product liability, and platform responsibility. Every company building conversational AI needs an escalation process for the moment a real person says "your tool is being used against me." Right now, most do not have a good answer.