What Cybercriminals Actually Say About AI — And Why They Are More Skeptical Than the Headlines
Researchers at the University of Montreal, Deakin University, and Flare Systems analyzed roughly 700,000 posts across 170 cybercrime forums over seven months. Their conclusion: malicious AI adoption remains at the "experimental curiosity" stage, nowhere near critical mass.

Of the conversations studied, 17.6% expressed explicit concerns about AI: 8.8% voiced skepticism about its effectiveness, 4.8% complained of declining information quality, 3.2% raised operational security worries, and 0.8% expressed outright defiance.

Forum members dismiss tools like WormGPT, FraudGPT, and EvilGPT as "marketing stunts," little more than jailbroken ChatGPT wrappers. The prevailing view on these forums is that OpenAI and Anthropic proactively cooperate with law enforcement, so users advise one another to obfuscate prompts and avoid logging in with consistent accounts.