
Bruce Schneier Highlights Research on How Cybercriminals Actually Talk About AI

Bruce Schneier highlighted a research paper analyzing over 160 cybercrime forum conversations about AI collected over seven months. The study found growing curiosity about AI's criminal applications — both through misusing legitimate tools and building bespoke criminal models — but also widespread skepticism about effectiveness and concerns about operational security. Criminals worry that AI tools could compromise their own anonymity or disrupt established business models.

This paper deserves more attention than it will get. The conventional narrative is that AI supercharges cybercrime; the reality is more nuanced. Criminals face the same adoption friction as legitimate enterprises — tooling is immature, workflows don't integrate cleanly, and the risk-reward calculus is unclear. The operational security concern is particularly interesting: if your criminal AI tool phones home to an API, you've just handed a tech company a log of your activities. The defenders' advantage in 2026 may rest less on better AI and more on the fact that attackers can't safely adopt it as quickly.