ACLU leads 75 groups demanding Meta drop face recognition from Ray-Ban glasses
A coalition letter to CEO Mark Zuckerberg calls the planned "Name Tag" feature a "red line society must not cross"

The ACLU and 75 other organizations sent an open letter to Meta CEO Mark Zuckerberg demanding the company abandon plans to equip its Ray-Ban and Oakley smart glasses with facial recognition technology, calling the feature an "unacceptable threat to privacy and liberty."
Rather than seeking guardrails or design tweaks, the coalition — whose members span civil liberties, domestic violence, reproductive rights, labor, and immigrant advocacy and include the ACLU of Massachusetts, the New York Civil Liberties Union, the Electronic Privacy Information Center, Fight for the Future, and the National Organization for Women, among dozens of others — is calling for an outright cancellation of the project. Their letter to Zuckerberg asserts that the hazards posed by facial recognition built into everyday eyewear are not the kind that can be addressed through "product design changes, opt-out mechanisms, or incremental safeguards."
Wired reports that the feature, known internally as "Name Tag," would work through the AI assistant in Meta's smart glasses, letting users pull up information about people they meet. Two versions are reportedly under consideration: one would recognize only a user's existing contacts on Meta's platforms, while the other could identify anyone with a public account on services like Instagram.
"The American people have not consented to this massive invasion of privacy," said Kade Crockford, director of technology and justice programs at the ACLU of Massachusetts, in a statement. "Stalkers and scammers would have a field day with this technology. Federal agents could use it to harass and intimidate their critics."
Beyond scrapping the feature, the letter puts two additional demands on Meta: that the company come clean about whether its wearable devices have already appeared in stalking, harassment, or domestic violence incidents, and that it be transparent about any conversations — past or current — it has held with federal law enforcement bodies such as Immigration and Customs Enforcement and Customs and Border Protection regarding its glasses or the data they collect.
The letter also takes direct aim at an internal Reality Labs memo, surfaced by The New York Times, in which Meta employees wrote of timing the rollout to coincide with a moment when advocacy organizations would be stretched thin by other fights — characterizing the political climate as one where groups "that we would expect to attack us would have their resources focused on other concerns." Signatories branded that calculation "vile behavior" and accused the company of cynically capitalizing on "rising authoritarianism."
"Our competitors offer this type of facial recognition product, we do not," a Meta spokesperson said. "If we were to release such a feature, we would take a very thoughtful approach before rolling anything out."
This would not be the first time public pressure has pushed Meta to change course on face recognition. In late 2021, Facebook shut down its photo-tagging feature and deleted the faceprint data of more than a billion users. That decision came amid a series of costly legal battles, including roughly $2 billion paid to settle biometric privacy claims in Illinois and Texas, and a $5 billion payment to the FTC in a broader privacy settlement that covered face recognition. According to Engadget, that was the largest penalty the agency had ever issued at the time.
The groups noted that Meta has paid more than $7 billion in total settlements and fines for privacy violations in recent years.