
Meta Faces UK And US Investigations Over AI Smart Glasses Privacy Concerns

Regulators in the United Kingdom and a federal class-action lawsuit in the United States are targeting Meta following reports that human contractors reviewed intimate footage captured by Ray-Ban Meta smart glasses.

The Rapid Expansion Of Wearable AI Technology

Meta’s push to make smart glasses the next dominant computing platform has hit a significant regulatory and legal hurdle. Following an investigative report by Swedish news outlets Svenska Dagbladet and Göteborgs-Posten, the company is now facing an official inquiry from the United Kingdom’s Information Commissioner’s Office (ICO) and a class-action lawsuit in the United States. The controversy centers on the revelation that human contractors, primarily based in Kenya, have been reviewing highly sensitive and intimate footage captured by the devices to train Meta’s artificial intelligence models.

As businesses and creators increasingly adopt wearable technology for hands-free content creation and real-time information gathering, these investigations highlight a critical friction point: the balance between AI utility and user privacy. For small businesses and media teams using these tools for field recording or internal documentation, the lack of transparency regarding data processing could represent a significant liability.

The Findings Of The Swedish Investigation

The investigation revealed that outsourced workers at Sama, a third-party contractor in Nairobi, were tasked with labeling video and audio data to improve the accuracy of Meta's AI. These workers reported being exposed to footage that users likely never intended to share, including recordings of individuals in private settings, such as bathrooms and bedrooms, as well as sensitive financial information like credit card details.

Meta has consistently marketed the Ray-Ban Meta glasses with the slogan "designed for privacy," emphasizing a built-in LED light that signals when the camera is active. Despite these assurances, the reports suggest that the "face anonymization" and blurring technologies intended to protect bystanders frequently fail. This failure allows human reviewers to see identifiable faces and private surroundings, contradicting the company’s public-facing privacy guarantees.

Regulatory Pressure In The United Kingdom

The UK's data protection watchdog, the ICO, has formally written to Meta seeking "urgent clarification" on how the company meets its obligations under UK data protection law. The regulator stressed that devices processing personal data must put users in control and operate with full transparency. Of particular interest is whether Meta clearly explained that user data, including video and audio, would be used to train AI systems via human review.

Under current regulations, service providers are required to inform users explicitly about what data is collected and who has access to it. If the ICO finds that Meta’s consent process was insufficient or misleading, the company could face significant fines and be forced to alter its data-handling workflows. This serves as a warning to other technology providers that "AI-powered" does not exempt a product from traditional privacy standards.

The US Class-Action Lawsuit And Claims Of False Advertising

Simultaneously, a federal class-action lawsuit was filed in San Francisco on March 4, 2026, on behalf of users in California and New Jersey. The plaintiffs allege that Meta engaged in false advertising by claiming the glasses were "controlled by you" while secretly routing footage to overseas servers for manual review. The legal challenge argues that no reasonable consumer would expect their most private moments to be viewed by strangers for the purpose of model training.

The lawsuit highlights a growing trend of "AI-era surveillance harms," where the data pipeline required to fuel modern AI tools is deeply invasive. For creators and businesses, this legal battle underscores the importance of vetting the tools they use. Relying on marketing materials alone may not be enough to ensure that a production workflow remains compliant with privacy expectations or legal requirements.

The Future Of AI Wearables And Data Sovereignty

Meta has responded by stating that media remains on the user's device unless they choose to share it with Meta AI to answer a query. The company maintains that human review is a standard industry practice used to improve product quality. However, the outcry from both users and contractors suggests that the current safeguards are inadequate for a device that is meant to be worn throughout the day in private spaces.

As the industry moves toward more integrated AI tools, the focus on data sovereignty—where the user maintains total control over their information—will become paramount. To eliminate friction in storytelling, creators need to trust that their raw footage is secure. Businesses must also consider the implications of their employees wearing these devices in sensitive work environments where trade secrets or private client data could be inadvertently captured and sent to third-party reviewers.

