February 23, 2026
This system can sort real pictures from AI fakes

TL;DR
- Generative AI is making it difficult to distinguish real images from fakes.
- The C2PA (Coalition for Content Provenance and Authenticity) is developing a standard to embed verifiable metadata into digital content.
- This metadata, likened to a 'nutrition label,' records an image's origin and any edits made to it.
- Companies like Microsoft, Adobe, Google, and OpenAI support the C2PA standard.
- Camera manufacturers like Sony and Leica are beginning to embed C2PA data, with others pledging to follow.
- Image editing software like Adobe Photoshop and Lightroom can also embed C2PA credentials.
- Online platforms like X and Reddit currently do not widely display C2PA metadata, hindering public verification.
- Even with a standard, issues like metadata stripping (e.g., screenshots) and persistent denialism pose challenges.
- Detection methods for AI-generated content are unreliable, making cryptographic labeling like C2PA a more promising approach.
- The effectiveness of C2PA relies heavily on broad adoption by platforms, hardware, and software.
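The core idea behind cryptographic provenance labeling can be sketched in a few lines. The example below is an illustration only, not the actual C2PA implementation: real C2PA credentials use X.509 certificate chains and a CBOR/JUMBF manifest embedded in the file, whereas this sketch uses a hypothetical shared secret with an HMAC to show the two essential checks: the manifest's hash must match the image bytes, and the signature must match the manifest.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; real C2PA signing uses
# X.509 certificates issued to the camera maker or software vendor.
SECRET = b"demo-signing-key"

def sign_manifest(image_bytes: bytes, claims: dict) -> tuple[dict, str]:
    """Bind provenance claims to the exact image bytes and sign them."""
    manifest = {"image_sha256": hashlib.sha256(image_bytes).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify(image_bytes: bytes, manifest: dict, signature: str) -> bool:
    """Reject if the image was altered or the manifest was forged."""
    # Check 1: the image bytes still match the hash recorded at capture time.
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False
    # Check 2: the manifest itself carries a valid signature.
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

image = b"\x89PNG...raw image bytes..."
manifest, sig = sign_manifest(image, {"source": "camera", "edited": False})
print(verify(image, manifest, sig))            # True: untouched image
print(verify(image + b"tamper", manifest, sig))  # False: pixels changed
```

This also illustrates the screenshot problem mentioned above: a screenshot produces entirely new bytes with no embedded manifest at all, so there is nothing to verify, which is why the scheme only helps when capture devices, editors, and platforms all carry the credentials through.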