
'Violent' and 'unacceptable': Elon Musk's X under fire over sexualized deepfakes

Brussels opens a Digital Services Act investigation into X over Grok deepfakes, widening scrutiny while U.S. lawmakers sharpen enforcement tools


Elon Musk keeps selling Grok as a serious AI product. But the past month on X has looked more like a stress test run by the worst people online, with sexualized, nonconsensual deepfakes as the proof of concept. And the European Commission is looking into whether X properly assessed and mitigated “systemic risks” tied to Grok-enabled abuse material.

On Monday, Jan. 26, the Commission opened a formal investigation into X over Grok’s image features and the spread of AI-generated nonconsensual sexual imagery, including content involving minors, produced through an image function X referred to as “spicy mode.” “Sexual deepfakes of women and children are a violent, unacceptable form of degradation,” the Commission’s tech chief, Henna Virkkunen, said in the press release.

The Commission is bringing its case against Grok under the Digital Services Act (DSA), a law built for exactly this genre of problem: giant platforms whose tools can predictably be used for abuse, then scaled by recommendation systems that are very good at finding whatever keeps people scrolling. The Commission also said that it’s expanding scrutiny of X’s recommender systems, including the platform’s shift toward Grok-based content filtering, because the same system that curates “what’s relevant” can also curate what’s harmful.

The Commission hasn’t set a deadline for the investigation, but if regulators find serious violations, the EU can impose fines of up to 6% of global annual turnover, a number designed to be felt even by companies that treat penalties as a line item. 

This is a question of systemic risk under the DSA: whether a very large online platform assessed foreseeable harms before shipping a feature, whether the guardrails it put in place actually worked, and whether it acted decisively once abuse became obvious. The Commission’s answer so far appears to be: Show us the paperwork.

Politically, this lands in the EU’s broader push to prove it can actually enforce the DSA against household-name platforms, not just publish elegant PDFs about it. (The European Parliament has been publicly pressing for faster, stronger enforcement in this exact lane.)

X, for its part, insists that it has tightened access and limited features. X says it has “implemented technological measures” to ensure that Grok will no longer edit photos of real people into “revealing clothing such as bikinis,” the sort of claim that lasts exactly as long as it takes someone to try a slightly different prompt. X says the “fix” applies to everyone, even paid users. And the latest move comes with geographic fine print: X says it’s geoblocking this kind of image editing in places where it’s illegal, conceding two things at once. First, that the capability exists; second, that the constraint varies depending on whose laws are currently within range of your IP address.

This isn’t the EU’s first escalation in the Grok saga. Earlier this month, Brussels ordered X to retain any and all Grok-related documents until the end of 2026, extending a prior retention order tied to algorithms and illegal content, a move that sounds boring enough until you see what it can enable later. In the UK, the communications regulator Ofcom has opened its own investigation into X over Grok-related sexualized imagery. And in December, X was fined €120 million for DSA transparency breaches (the design of its blue checkmarks, ad repository transparency, and researcher data access).

The pressure is also coming from the other side of the Atlantic. In the U.S., lawmakers are pulling a different lever in parallel. Congress has already passed the Take It Down Act, which criminalizes the knowing publication of nonconsensual intimate imagery, including AI-generated content, and sets up a formal takedown regime. More significantly, the Senate has advanced the DEFIANCE Act, which would give victims of AI-generated sexual deepfakes a federal civil cause of action.

State attorneys general aren’t waiting for Congress. On Jan. 16, California Attorney General Rob Bonta announced he had sent xAI a cease-and-desist letter demanding it halt “the creation and distribution of deepfake, nonconsensual, intimate images and child sexual abuse material” and noting that “the creation of this material is illegal.” And a bipartisan coalition of more than 30 state AGs has accused Grok of facilitating abuse “as easy as the click of a button.”

All of which means X and xAI face the same basic problem in multiple jurisdictions: The argument that moderation can mop up after product decisions keeps colliding with regulators who want to see risk controls before features ship. The Commission’s Virkkunen said the investigation will determine whether X met its obligations under the DSA, “or whether it treated rights of European citizens — including those of women and children — as collateral damage of its service.”

For now, Grok keeps failing upward. xAI continues to raise enormous sums, build compute, and pitch itself as AI infrastructure rather than a consumer chatbot with a rap sheet. X keeps adjusting features rather than removing them. The scandals stack; the machine hums.

This is the EU drawing a bright line around a category of harm that’s both highly gendered and highly scalable: nonconsensual sexual deepfakes, turbocharged by generative tools and distribution algorithms. The underlying message is: If you ship a “spicy” button into a platform with massive reach, you own the downstream consequences — legally, not just reputationally.
