
January 21, 2026

Sora is showing us how broken deepfake detection is


TL;DR

  • OpenAI's Sora 2 can generate highly realistic deepfake videos, including those of famous people and copyrighted characters.
  • The C2PA Content Credentials system, designed to authenticate digital media, is not being effectively implemented or clearly displayed on most platforms.
  • Many social media platforms, including Instagram, TikTok, and YouTube, have barely visible or easily missed labels for AI-generated content.
  • Metadata used for C2PA can be stripped by platforms, and current detection methods often require significant user effort.
  • Industry experts believe that a combination of C2PA, inference-based AI detection tools, and legislative action is necessary to address the deepfake problem.
  • Adobe is advocating for legislative solutions, such as the FAIR Act and PADRA, to protect creators from AI impersonation.
  • The industry currently relies on tech companies to self-police, and that is proving insufficient.
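To see why stripping is so easy, it helps to know where Content Credentials actually live: in JPEG files, the C2PA spec embeds the manifest in APP11 (0xFFEB) marker segments as a JUMBF box. Below is a minimal sketch, not a real verifier, that walks a JPEG's marker segments and reports whether such a segment is present; it assumes the payload carries the ASCII label `c2pa` (the manifest store label) and does no cryptographic validation, which real tools like `c2patool` perform.

```python
import struct

def has_c2pa_segment(data: bytes) -> bool:
    """Return True if the JPEG bytes contain an APP11 (0xFFEB) segment
    whose payload includes the 'c2pa' JUMBF label. Presence-only check:
    it does NOT validate the manifest's signatures."""
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9):  # SOI/EOI carry no length field
            i += 2
            continue
        if marker == 0xDA:  # start-of-scan: entropy-coded data follows
            break
        # Big-endian 2-byte length that includes the length field itself
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA label
            return True
        i += 2 + seg_len
    return False

# Toy demonstration with hand-built byte strings (not real images):
payload = b"JP\x00\x00jumbc2pa"  # fake payload containing the label
seg = b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
tagged = b"\xff\xd8" + seg + b"\xff\xd9"
stripped = b"\xff\xd8\xff\xd9"  # re-encoded file with metadata gone
```

A re-encode on upload, as many platforms do, typically drops the segment entirely, which is why a check like this comes back empty even for media that originally shipped with Content Credentials.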