January 12, 2026
A New Jersey lawsuit shows how hard it is to fight deepfake porn
For more than two years, an app called ClothOff has been terrorizing young women online, and it has been maddeningly difficult to stop.

TL;DR
- The app ClothOff creates non-consensual deepfake pornography, primarily targeting young women.
- Lawsuits are attempting to shut down ClothOff, but identifying and serving the perpetrators, who operate globally, is a major obstacle.
- AI-generated child sexual abuse material is illegal, but it is far harder to hold platforms accountable than individual users.
- Local authorities have declined to prosecute cases involving ClothOff due to the difficulty of obtaining evidence.
- General-purpose AI tools like xAI's Grok face different legal questions about accountability for user-generated content, whose output is often protected by the First Amendment unless intent to harm can be proven.
- While Child Sexual Abuse Material (CSAM) is not protected speech, proving a general AI platform's knowledge or intent to facilitate CSAM is complex.
- Countries outside the US, such as Indonesia, Malaysia, and the UK, are taking steps to block or investigate AI chatbots like Grok due to concerns over harmful content.
- Key questions remain regarding what platforms like X (formerly Twitter) knew about the misuse of their AI tools and what actions they took or failed to take.