February 2, 2026

What we’ve been getting wrong about AI’s truth crisis

Even when content is revealed to be manipulated, it still shapes our beliefs. The defenders of truth are hopelessly behind.

TL;DR

  • The US Department of Homeland Security is using AI video generators from Google and Adobe for public content, including material supporting mass deportation.
  • Public reaction to manipulated content ranges from unsurprised acceptance to questioning the point of reporting it.
  • The White House posted an altered photo of a woman at an ICE protest, making her appear hysterical.
  • MS Now (formerly MSNBC) aired an AI-edited image of Alex Pretti, which made him look more handsome, though the outlet stated they were unaware it was edited.
  • Tools like the Content Authenticity Initiative, designed to label AI-generated content, have significant limitations: labeling is opt-in, and platforms can strip the labels.
  • A study in *Communications Psychology* found that participants remained swayed by a deepfake confession even after being told it was fake.
  • Transparency alone is insufficient; a new master plan is needed to address deepfakes and the weaponization of doubt.
  • As AI tools grow more capable and accessible, a manipulation’s influence can survive its exposure; revealing the truth no longer works as a reset button.
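
The Content Authenticity Initiative limitation above is structural: provenance labels (C2PA "Content Credentials") live in image metadata, so anything that strips or re-encodes metadata silently removes the label. The toy sketch below, a hypothetical simplification rather than a real C2PA parser, illustrates this with a fake JPEG whose manifest sits in an APP11 segment, the segment type real C2PA manifests use:

```python
# Sketch: why metadata-based provenance labels are easy to strip.
# Toy bytes only; real C2PA manifests are JUMBF boxes inside JPEG
# APP11 (0xFF 0xEB) segments, and robust parsing is more involved.

def strip_app11(jpeg: bytes) -> bytes:
    """Drop APP11 segments, where C2PA provenance manifests are stored."""
    out = bytearray()
    i = 0
    while i < len(jpeg):
        if jpeg[i] == 0xFF and i + 1 < len(jpeg) and jpeg[i + 1] == 0xEB:
            # Big-endian segment length includes its own two bytes.
            seg_len = int.from_bytes(jpeg[i + 2:i + 4], "big")
            i += 2 + seg_len  # skip marker plus the whole segment
        else:
            out.append(jpeg[i])
            i += 1
    return bytes(out)

# Toy "image": SOI marker, an APP11 segment with a fake manifest, pixels, EOI.
manifest = b"c2pa-manifest"
app11 = b"\xff\xeb" + (len(manifest) + 2).to_bytes(2, "big") + manifest
toy = b"\xff\xd8" + app11 + b"pixel-data" + b"\xff\xd9"

cleaned = strip_app11(toy)
assert b"c2pa" in toy and b"c2pa" not in cleaned  # label gone, image intact
assert cleaned == b"\xff\xd8pixel-data\xff\xd9"
```

The image bytes survive untouched while the provenance record vanishes, which is why opt-in labeling plus routine platform re-encoding leaves so little of the label chain intact by the time content reaches viewers.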