What Cybercriminals Actually Say About AI, and Why They're More Skeptical Than the Headlines Suggest

Researchers at the University of Montreal, Deakin University, and Flare Systems analyzed roughly 700,000 posts from 170 cybercrime forums over seven months. They find that malicious AI adoption is still at the "experimental curiosity" stage, nowhere near critical mass. Concerns were explicit in 17.6% of conversations: 8.8% voiced skepticism about effectiveness, 4.8% complained of declining information quality, 3.2% raised operational-security worries, and 0.8% expressed outright defiance. Forum members dismiss WormGPT, FraudGPT, and EvilGPT as "marketing stunts" and jailbroken ChatGPT wrappers. The forum consensus holds that OpenAI and Anthropic proactively cooperate with law enforcement, so users advise one another to obfuscate prompts and avoid consistent account logins.

The dominant narrative is that AI is turbocharging cybercrime. This paper, whose authors actually read what criminals tell each other, tells a more interesting story. The same trust problem slowing AI adoption inside Fortune 500 companies is slowing it inside the criminal underground, and for structurally identical reasons: the output quality isn't good enough yet, the tools feel like vaporware, and sending sensitive prompts through a third-party server feels like career suicide. The real adoption vector won't be a WormGPT 2.0. It will be locally run open-weight models, likely Chinese releases, that sidestep the OpSec problem entirely. Watch for the first cybercrime tool built on an uncensored Qwen or GLM derivative to dominate forum discourse by the end of the year.