
Apple Threatened to Remove Grok From the App Store Over Deepfake Generation

A letter to U.S. senators revealed that Apple had notified xAI it might remove the Grok chatbot from the App Store because the app could generate non-consensual sexualized deepfakes. xAI subsequently restricted those capabilities. The letter, which surfaced in January, is drawing renewed attention as lawmakers debate platform liability for AI-generated content.

Apple's App Store leverage just became the most effective AI safety enforcement mechanism in the world: more immediate than any regulation, more consequential than any voluntary commitment. When Apple says "fix this or you're gone," companies fix it within days, faster than any legislative process could compel. The uncomfortable implication is that AI safety for consumer apps is now effectively governed by one company's content policies. Whether that is reassuring or alarming depends on how much you trust Apple's judgment calls, but it is undeniably the most efficient enforcement loop available right now.