February 19, 2026
Microsoft has a new plan to prove what’s real and what’s AI online
A new proposal calls on social media and AI companies to adopt strict verification, but the company hasn’t committed to following its own recommendations.

TL;DR
- AI-enabled deception is increasingly prevalent online, from manipulated images to deepfakes.
- Microsoft has developed a blueprint recommending technical standards for documenting digital manipulation.
- The proposed methods include provenance, watermarking, and digital fingerprints, aiming to verify content authenticity.
- The effectiveness of these tools is being tested against advanced AI developments like hyperrealistic models and interactive deepfakes.
- Legislation like California's AI Transparency Act is driving the need for such verification standards.
- Experts believe the blueprint could significantly reduce misleading content, though it won't solve the problem entirely.
- There are concerns about industry adoption, especially if standards threaten business models, and about public trust if tools are flawed or inconsistently applied.
- The tools are designed to show if content has been manipulated, not to determine its factual accuracy.
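The "digital fingerprints" mentioned above can be illustrated with a minimal sketch: a cryptographic hash recorded when content is created lets anyone later detect byte-level changes. This is only a simplified illustration of the general technique, assuming a SHA-256 digest; the function names are hypothetical, not Microsoft's or any standard's API, and real provenance systems (e.g. C2PA-style manifests) also cryptographically sign the recorded metadata.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Compute a SHA-256 digest as a simple content fingerprint."""
    return hashlib.sha256(data).hexdigest()


def matches_fingerprint(data: bytes, recorded: str) -> bool:
    """Check content against a previously recorded fingerprint.

    Any change to the bytes, however small, produces a different
    digest, so a mismatch signals that the content was altered
    after the fingerprint was recorded.
    """
    return fingerprint(data) == recorded


# Record a fingerprint at "capture time", then verify later.
original = b"camera-original image bytes"
recorded = fingerprint(original)

assert matches_fingerprint(original, recorded)
assert not matches_fingerprint(original + b" edited", recorded)
```

Note that, as the last bullet says, a matching fingerprint only shows the bytes are unchanged since the fingerprint was recorded; it says nothing about whether the content is factually accurate.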