tech
February 23, 2026
Does Big Tech actually care about fighting AI slop?

TL;DR
- Instagram head Adam Mosseri expressed concern about AI's ability to convincingly mimic authentic media and proposed verifying real media through cryptographic signatures.
- The C2PA standard, supported by major tech companies, aims to authenticate content by attaching metadata at the point of creation, but its implementation is seen as insufficient and easily bypassed.
- AI's increasing ability to mimic reality threatens creators' livelihoods and can be used to spread misinformation, with current detection and labeling systems proving inadequate.
- Provenance-based solutions like C2PA require universal adoption across every stage of media creation, editing, and hosting, a bar critics consider unachievable in practice.
- Metadata used by C2PA can be intentionally or accidentally removed, and platforms like TikTok and LinkedIn struggle to reliably tag C2PA-compliant content.
- X (formerly Twitter) withdrew from the C2PA initiative, while Meta, Instagram's parent company, continues to develop and promote AI tools even as it claims to combat AI fakery.
- Companies profit significantly from AI generation tools, creating a conflict of interest that hinders genuine efforts to control misinformation.
- The effectiveness of transparency warnings and labeling for AI-generated content is questioned, with studies showing little empirical evidence of their impact.
- Some platforms are shifting focus to analyzing creators rather than just content, but this approach also faces challenges and potential conflicts of interest.
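The fragility described above is easy to demonstrate. C2PA's JPEG embedding stores the provenance manifest in APP11 marker segments, so any tool that rewrites a file without copying those segments silently discards the provenance. The sketch below, using a fabricated placeholder payload rather than a real manifest, walks a toy JPEG's marker segments and drops APP11:

```python
# Minimal sketch: C2PA provenance rides in JPEG APP11 (0xFFEB) marker
# segments. Stripping them is trivial, which is why re-encoding or
# "optimizing" an image so often destroys its provenance by accident.
# The bytes below are a fabricated stand-in, not a real C2PA manifest.

SOI, EOI, APP11 = b"\xff\xd8", b"\xff\xd9", b"\xff\xeb"

def strip_app11(jpeg: bytes) -> bytes:
    """Return the JPEG with all APP11 marker segments removed.

    Toy parser: assumes length-prefixed marker segments only
    (no entropy-coded scan data), which suffices for this demo.
    """
    assert jpeg[:2] == SOI, "not a JPEG stream"
    out = bytearray(SOI)
    i = 2
    while i < len(jpeg):
        marker = jpeg[i:i + 2]
        if marker == EOI:
            out += EOI
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker != APP11:          # keep everything except provenance
            out += segment
        i += 2 + length
    return bytes(out)

# Fabricated file: SOI + one APP11 segment holding a placeholder payload + EOI.
payload = b"fake-c2pa-manifest"
app11 = APP11 + (len(payload) + 2).to_bytes(2, "big") + payload
jpeg = SOI + app11 + EOI

stripped = strip_app11(jpeg)
print(payload in jpeg, payload in stripped)  # → True False
```

The stripped file is still a structurally valid JPEG, which is the core problem: nothing downstream can tell that provenance was ever attached, so absence of a manifest proves nothing about a file's origin.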