February 2, 2026
X safety teams ‘repeatedly warned management’ about undressing tools.
While X has long allowed NSFW images, The Washington Post reports that the platform’s content moderation filters couldn’t handle the estimated millions of sexualized deepfakes of real women and children being generated by Grok.

TL;DR
- X's content moderation filters are struggling to handle the estimated millions of sexualized deepfakes generated by Grok.
- These deepfakes include real women and children.
- Unlike known illegal images, AI-edited images do not automatically trigger existing content warnings.
- Detection of child sexual abuse material, which has relied on matching images against databases of known material, is particularly vulnerable to bypass by AI edits.