Meta’s Ray-Ban smart glasses were sold as a glimpse of the future. Instead, they’ve triggered an old-fashioned fight over power, privacy, and whose jobs are expendable when Silicon Valley gets embarrassed.

Meta: It’s about “standards”

Meta insists its decision to drop Sama, its Nairobi-based annotation contractor, is a quality issue, not a cover‑up. A spokesperson told the BBC the company “decided to end our work with Sama because they don’t meet our standards,” after a seven‑year training‑data deal that started in 2017. The company also stresses that subcontracted workers review smart‑glasses content only “to improve people’s experience with the glasses, as stated in our Privacy Policy,” and that faces are blurred in material prepared for review.

On paper, that sounds like routine human‑in‑the‑loop AI work: checking transcripts, ensuring Meta’s AI is “accurate and safe,” and refining responses.

Sama: The fall guy

Sama flatly denies failing any standards. “Sama has consistently met the operational, security and quality standards required across all of our client engagements, and we stand behind the integrity of our work,” it said, adding it was “never notified of any failure to meet Meta’s standards.” The firm says Meta’s cancellation has cost 1,108 jobs in Nairobi, many of them workers already suing Meta over past content‑moderation harms.

Workers suspect the timing is no coincidence: reports that annotators were forced to watch Ray‑Ban Meta footage of people “changing their clothes, having sex, and using the toilet” were followed “less than two months” later by Meta pulling the plug.

The bigger picture: everyone’s exposed

For privacy advocates, the scandal shows how “improving user experience” can become a euphemism for strangers reviewing your most intimate moments. For Kenyan workers, it’s a reminder that in the AI supply chain, those at the bottom absorb the trauma—and lose their livelihoods—when things go wrong.