Trending on X: Users Accuse Anthropic of Quietly Degrading Claude Performance
What Happened
An AMD Senior AI Director filed a detailed public complaint analyzing 6,852 Claude Code sessions, showing the reads-per-edit ratio collapsing from 6.6 to 2.0 and the share of "blind" edits — edits made without first reading the file — jumping from 6.2% to 33.7%. Anthropic's Claude Code lead Boris Cherny responded that the company reduced the default reasoning "effort" to medium based on user feedback about excessive token consumption. Users say the change was made without adequate disclosure. Fortune and VentureBeat both ran pieces on the growing backlash.
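For readers unfamiliar with the two metrics in the complaint, here is a minimal sketch of how they could be computed from tool-call logs. This is illustrative only: the function name, the (operation, path) tuple shape, and the definition of a "blind edit" as an edit with no prior read of the same file in the session are all assumptions, not the complainant's actual methodology or Anthropic's schema.

```python
def session_metrics(sessions):
    """Compute reads-per-edit and blind-edit rate from tool-call logs.

    Each session is a list of (operation, file_path) tuples, where
    operation is "read" or "edit". A "blind edit" is assumed to mean
    an edit to a file that was never read earlier in the same session.
    """
    reads = edits = blind = 0
    for session in sessions:
        files_read = set()  # files the agent has read so far this session
        for op, path in session:
            if op == "read":
                reads += 1
                files_read.add(path)
            elif op == "edit":
                edits += 1
                if path not in files_read:
                    blind += 1  # edited without ever reading the file
    return {
        "reads_per_edit": reads / edits if edits else 0.0,
        "blind_edit_rate": blind / edits if edits else 0.0,
    }

# Toy data: the second session opens with an edit before any read.
sessions = [
    [("read", "a.py"), ("read", "a.py"), ("edit", "a.py")],
    [("edit", "b.py"), ("read", "b.py"), ("edit", "b.py")],
]
print(session_metrics(sessions))  # 3 reads / 3 edits; 1 of 3 edits blind
```

Under these definitions, a drop in reads-per-edit alongside a rise in blind-edit rate is consistent with the model skipping context-gathering steps — the behavioral signature the complaint attributes to a lowered default effort.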
My Take
This is the first real "shrinkflation" controversy in AI — the product looks the same but delivers less. Anthropic's explanation (we changed defaults to save tokens) lands differently now that we know enterprise billing just shifted to per-token pricing. A cynical reading: lower the default effort, cut compute costs, then charge by consumption. I don't think that's what happened — the timing is coincidental — but it reveals a fundamental transparency gap. AI companies need versioned release notes the way software companies ship changelogs. Silent downgrades erode trust faster than any benchmark regression.