Meta’s futuristic smart glasses were supposed to make AI feel invisible. Instead, they’ve triggered a very visible backlash: a contract meltdown in Nairobi, a privacy panic in Europe and Africa, and fresh questions about who, exactly, has to watch what these glasses see.

The build-up: a seven-year partnership

Meta’s relationship with data annotator Sama wasn’t a quick fling. The firm, which runs its largest delivery operations out of Nairobi, has been supplying training data services to Meta since 2017, working on video, image, and speech annotation to power the tech giant’s AI systems. Over seven years, Sama became deeply embedded in Meta’s content pipeline, including work tied to Ray‑Ban Meta smart glasses.

By late 2023, those glasses were Meta’s latest bet: AI-enabled eyewear that can record, interpret, and respond to the world around the wearer. Human reviewers in the background—often in the Global South—were the invisible labor making that magic possible.

The Nairobi operation grew large. When the contract finally collapsed in 2024, Sama said 1,108 jobs were wiped out in one shot. Many of those workers were already part of a US$1.6 billion lawsuit against Meta over past content moderation roles and alleged mental health harms, underscoring that this was not a clean, new start but a continuation of a fraught outsourcing model.

February 2024: the private lives behind the lenses

The real rupture began in February 2024. An investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, along with Kenya-based freelance journalist Naipanoi Lepapa, surfaced an uncomfortable claim: Sama workers reviewing Ray‑Ban Meta footage were seeing people in acutely private and intimate situations—changing clothes, having sex, or using the toilet—captured by the smart glasses.

Numerous Sama workers told reporters they were being served “sensitive, embarrassing, and seemingly private footage recorded by the smart glasses” as part of their annotation work. One anonymous employee described a culture where they “are just expected to carry out the work” even when faced with plainly private content.

The Swedish and Kenyan reporting broke open how Meta’s human-in-the-loop AI systems actually run: not on bland, anonymized training data alone, but on real people’s lives, filmed by a face-mounted camera and pushed across borders to low-paid workers.

Meta’s defense: it’s in the privacy policy

Meta’s public line has been consistent and carefully lawyered. A spokesperson told the BBC that “subcontracted workers review content captured by the glasses to improve people’s experience with the glasses, as stated in our Privacy Policy.”

Meta argues that the review process is constrained and sanitized. Faces, it says, are blurred in material prepared for annotators. The company insists that, by default, “media stays on the user’s device,” and only when “people share content with Meta AI” does some of that data go to contractors to “improve people’s experience, as many other companies do.”

On paper, this frames the system as opt-in and industry-standard. In practice, staff accounts cited in the coverage suggest the blurring and filtering can fail, especially in low light (bedrooms, say) or when the camera moves quickly: exactly the environments where people are most likely to be naked, intimate, or otherwise assume they’re unobserved.
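To see why that failure mode is structural rather than incidental, consider a minimal face-blur pass of the sort annotation pipelines commonly use. This is an illustrative Python/OpenCV sketch, not Meta’s actual code: the key point is that the blur is only as good as the detector, and a face the detector misses in a dark bedroom simply passes through unblurred.

```python
# Illustrative sketch of a detector-driven face blur; NOT Meta's pipeline.
# Assumes OpenCV (cv2) and a BGR video frame as a NumPy array.
import cv2

def blur_faces(frame):
    """Blur every face the detector finds; anything it misses passes through."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection degrades sharply in low light and under motion blur.
    # When detectMultiScale returns nothing, nothing is redacted, and the
    # frame continues downstream looking fully "processed".
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame
```

Note, too, what even a flawless version of this redaction would not do: it blurs faces, not bodies or rooms, so an intimate scene can remain entirely legible to a reviewer even when every face in it is obscured.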

April 2024: the contract suddenly ends

Within “less than two months” of the February exposé hitting Swedish media, Meta pulled the plug. According to BBC reporting, the company ended its contract with Sama shortly after the workers’ complaints became public.

Meta’s official explanation? Performance.

A spokesperson told the BBC that Meta “decided to end our work with Sama because they don’t meet our standards.” When Ars Technica asked how, exactly, Sama had failed to meet those expectations, Meta did not offer specifics.

The timing, however, raised eyebrows—especially in Nairobi.

Sama’s rebuttal: we met every standard

Sama flatly rejects the idea it fell short. In a statement shared with Ars Technica, the company said:

“Sama has consistently met the operational, security, and quality standards required across all of our client engagements, and we stand behind the integrity of our work. Our focus is on supporting our employees during this transition while continuing to deliver for our clients.”

In a separate statement referenced by AI Magazine, Sama doubled down:

“Sama has consistently met the operational, security and quality standards required across our client engagements, including with Meta. At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work.”

To the company and many of its workers, the logic is obvious: they did what they were contracted to do, then got cut loose once their testimony about what they saw became a reputational threat. BBC reporting noted that Sama workers believe Meta ended the contract because they spoke out about being forced to watch “private footage shot from Ray‑Ban Metas,” including sex and toilet use.

Sama also says the impact was brutal and abrupt: 1,108 employees in Nairobi made redundant, with some workers saying they received just six days’ notice.

Inside the annotation machine

Strip away the corporate statements, and a clearer picture of the pipeline emerges.

Sama’s annotators were reviewing transcripts of interactions between users and Meta’s AI to verify whether responses were “accurate and safe,” part of a standard human-in-the-loop workflow used to refine AI models. That work, in Meta’s telling, is bounded by privacy controls, blurring, and user consent.
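For a sense of how bounded that workflow is supposed to be, here is a minimal sketch of what such a review task might look like. Every name and field here is hypothetical, not Meta’s or Sama’s actual schema; the point is that consent and redaction arrive as flags set upstream, which the annotator has no way to verify.

```python
# Hypothetical sketch of a human-in-the-loop review task; the field names
# are ours, not Meta's or Sama's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewTask:
    task_id: str
    transcript: str       # the user <-> assistant exchange under review
    media_shared: bool    # should be True only if the user sent media to Meta AI
    faces_blurred: bool   # set by an upstream redaction pass; can silently be wrong
    accurate: Optional[bool] = None  # annotator verdict: was the AI response correct?
    safe: Optional[bool] = None      # annotator verdict: was it policy-compliant?

def annotate(task: ReviewTask, accurate: bool, safe: bool) -> ReviewTask:
    """Record the annotator's verdicts; labeled tasks feed back into model tuning."""
    task.accurate = accurate
    task.safe = safe
    return task
```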

But the Swedish investigation and worker testimonies indicate that safeguards fail in non-trivial ways, exposing annotators to raw, unfiltered scenes of people’s lives.

For the workers, this is not an abstract ethics problem; it is the daily reality of their job. Many are already plaintiffs in a multibillion-dollar lawsuit over mental health harms from prior content moderation work for Meta. The Ray‑Ban footage, while less overtly violent than the worst of Facebook content moderation, raises similar questions: what do “standards” mean if the standard task is to stare at other people’s most private moments?

Regulators step in

The controversy has not remained confined to corporate PR and worker testimonies. Data protection authorities in the UK and Kenya have opened investigations or inquiries into how Meta and its contractors handle smart glasses footage and related data.

For regulators, the key concerns cut in multiple directions:

  • Consent and expectation: Do people filmed by Ray‑Ban Meta glasses—especially bystanders—have any meaningful awareness that their actions might be watched by offshore annotators?
  • Purpose limitation: Even if users share clips with Meta AI, does that reasonably extend to intimate footage being used as training data?
  • Cross-border transfers: What safeguards exist when data moves from European or US users to Kenyan contractors?

The case is quickly becoming a test for how far wearable AI companies can push “improvement” as a justification for human review.

Diverging narratives: standards vs scapegoating

By early May 2024, the narratives had hardened.

  • Meta’s story is that its privacy policy discloses the possibility of human review; that data is blurred and controlled; that Sama simply “didn’t meet our standards”; and that the company is doing what “many other companies do” in using contractors to improve AI systems.

  • Sama’s story is that it met every requirement, was never warned of any deficiencies, and is now being quietly dropped after workers spoke out about the uncomfortable truth of smart glasses content review. The firm emphasizes its “operational, security and quality standards” and its commitment to employees caught in the crossfire.

  • Workers’ story is that of déjà vu: another wave of precarious digital laborers in Kenya asked to absorb the psychological and ethical cost of Silicon Valley’s products, then laid off with minimal notice and blamed when their experiences become public.

Meta’s and Sama’s accounts cannot both be fully true. Either Sama repeatedly fell short yet somehow went years without a single warning, or Meta is reaching for a vague “standards” line to distance itself from the optics of intimate footage being reviewed in Nairobi.

What this means for the future of smart glasses

Meta’s Ray‑Ban partnership was supposed to normalize camera-on-your-face computing. Instead, it has exposed the messy, human infrastructure needed to make that vision work.

As investigations proceed, the central tension is stark: every frictionless AI experience is built on the backs—and sometimes the mental health—of people who must watch the outtakes. If regulators decide that intimate footage routed to global annotation shops crosses a line, the entire business model for human-reviewed wearable AI could be forced to change.

For now, Meta can drop Sama and find another vendor. But it can’t so easily escape the question hanging over the Ray‑Ban glasses: when you hit record, who, exactly, is watching you later?


Sources

1. Meta cuts contractors who reported seeing Ray-Ban Meta users have sex — Reporting on Sama workers viewing “sensitive, embarrassing, and seemingly private footage” from Ray‑Ban Meta glasses and Meta’s claim that Sama didn’t meet its standards.

2. Data Privacy: Why Meta Called it Quits with Sama — Coverage of Meta ending its seven-year data annotation deal with Sama, the 1,108 job cuts in Kenya, and disputes over privacy safeguards and performance standards.