Tech Today

Spain Just Ordered a Criminal Probe Into X, Meta, and TikTok Over AI-Generated Child Abuse

The investigation targets Grok's 'Spicy Mode' specifically, but the real story is a coordinated European crackdown on AI-generated child exploitation that could reshape platform regulation worldwide.

By Morgan Wells · 4 min read
European Parliament building with digital overlay showing social media platform logos

Spanish Prime Minister Pedro Sánchez stood before cameras on Tuesday and announced that his government had ordered prosecutors to investigate X, Meta, and TikTok for allegedly distributing AI-generated child sexual abuse material. "These platforms are undermining the mental health, dignity, and rights of our children," Sánchez said. "The impunity of these giants must end." Within hours, Ireland's Data Protection Commission opened its own formal investigation into Grok, the AI chatbot built by Elon Musk's xAI, over the processing of personal data and its potential to generate sexualized images of minors.

The two announcements landed on the same day, but they are not coincidental. They represent a coordinated European offensive against AI-generated child exploitation that has been building for months and is now reaching a tipping point. The European Commission has been investigating Meta, TikTok, and Grok under the Digital Services Act since January. France, Brazil, and Canada have filed separate complaints against Grok. What was scattered outrage over a series of disturbing incidents has consolidated into a multi-government regulatory assault on specific platforms, with specific legal mechanisms, and specific enforcement teeth.

What Spain Found

The Spanish investigation is based on a technical report compiled by three government ministries. The findings are stark. Sánchez cited statistics showing that X users had employed Grok's AI to generate three million "sexualized" images in just 11 days, including more than 23,000 involving minors. Separately, survey data cited by the government found that one in five young people in Spain, mostly girls, reported that fake nude images of themselves were created by AI and shared online while they were minors.

The probe targets all three platforms but focuses particular attention on X's Grok chatbot and a feature called "Spicy Mode." This function allowed users to prompt Grok into generating images of women in various states of undress by digitally manipulating existing photos, creating deepfakes without the subject's consent or knowledge. Media analyses documented that Grok frequently complied when users prompted it to generate sexually suggestive images of minors, including at least one case involving a 14-year-old actress.

Spanish Prime Minister Pedro Sánchez at a press conference with EU flags
Spain's PM Sánchez described social media platforms as the 'digital Wild West'

Spain's action is the first enforcement step from a broader package of social media regulations Sánchez unveiled at a government summit in Dubai earlier this month. That package also includes a plan to ban social media access entirely for children under 16, a measure with significant public support. An August 2025 Ipsos Education Monitor poll found that 82% of Spanish respondents support banning social media for under-14s, up from 73% in 2024, and the 30-country survey found majority support for age restrictions in every nation polled.

The Wider European Campaign

Spain's prosecutors are not working in isolation. Ireland's DPC, which serves as the lead EU data protection regulator for X because the company's European operations are based in Dublin, opened a formal inquiry into Grok on the same day. Deputy Commissioner Graham Doyle stated the investigation would "examine X's compliance with some of its fundamental obligations under the GDPR in relation to the matters at hand." Under the General Data Protection Regulation, Ireland can impose fines up to 4% of a company's global annual revenue, a number that for X could run into the hundreds of millions of euros.

The European Commission's own investigation, launched in January under the Digital Services Act, adds a separate layer of potential enforcement. The DSA requires large platforms to conduct risk assessments of their services and take measures to prevent the dissemination of illegal content. If the Commission finds that platforms failed to meet these obligations, it can impose fines of up to 6% of global revenue and, in extreme cases, order services suspended in the EU.

France has filed its own complaint, focusing on Grok's processing of French citizens' personal data. Brazil and Canada have pursued parallel actions. The United Kingdom has also opened an investigation. What began as a series of individual incidents has become a synchronized regulatory response spanning at least seven jurisdictions across three continents.

Abstract visualization of AI image generation with warning symbols and red flags
The Internet Watch Foundation flagged 3,440 AI-generated videos of child sexual abuse in 2025, compared to just 13 in 2024

The Scale Problem That Forced Regulators' Hands

The regulatory escalation is not about one platform or one feature. It reflects a fundamental shift in the nature of child exploitation material online, driven by generative AI's ability to produce photorealistic imagery at scale with minimal technical skill.

The numbers illustrate how rapidly this shift has occurred. The Britain-based Internet Watch Foundation, the leading global organization tracking child sexual abuse material online, flagged 3,440 AI-generated videos of child sexual abuse in 2025. In 2024, the number was 13. That is not a gradual increase. It is an explosion that has overwhelmed existing detection systems and rendered previous regulatory approaches inadequate.

Traditional child exploitation material requires a real victim, which creates a forensic trail that law enforcement can follow. AI-generated material requires only a text prompt, which means the production bottleneck has shifted from access to victims to access to computing power, and computing power is essentially unlimited. This changes the enforcement equation in three critical ways that existing regulation was not designed to handle.

First, there is the generation problem. AI models can produce exploitative imagery faster than any detection system can flag it. The three million images Sánchez cited from Grok were generated in 11 days. No human moderation team, regardless of size, can review content at that rate. Platforms must either prevent generation at the model level or accept that post-hoc moderation will always be playing catch-up.

Second, there is the distribution problem. Platforms designed for content sharing become force multipliers for AI-generated exploitation material. The material looks realistic, making it harder for automated detection systems trained on known images to identify it. It does not match existing databases of known child sexual abuse material because it is synthetically generated, which means hash-matching systems that were the backbone of previous detection efforts are largely useless against it.
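To make the hash-matching limitation concrete, here is a minimal sketch, not any platform's actual pipeline: real systems such as Microsoft's PhotoDNA use perceptual hashes that tolerate small edits, but the core lookup logic is the same, and the function and sample byte strings below are purely hypothetical.

```python
import hashlib

# Hypothetical database of hashes of previously identified abuse images.
# A cryptographic hash stands in here for a perceptual hash, purely
# for illustration of the lookup step.
known_hashes = {
    hashlib.sha256(b"previously-flagged image bytes").hexdigest(),
}

def is_known_material(image_bytes: bytes) -> bool:
    """Flag content only when its hash matches the known database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

# A re-upload of already-catalogued material matches the database...
assert is_known_material(b"previously-flagged image bytes")

# ...but a freshly generated synthetic image never can, because no
# hash of it exists in any database yet. This is why hash matching
# fails against novel AI-generated content.
assert not is_known_material(b"novel AI-generated image bytes")
```

The sketch shows the structural weakness the article describes: the system can only recognize what has already been catalogued, so every newly generated image starts with a clean slate.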

Third, there is the legal classification problem. Many jurisdictions' laws on child sexual abuse material were written to address imagery depicting real victims. AI-generated material may not technically depict a real child, creating legal gray areas that some platforms have exploited to avoid liability. Spain's investigation, by treating AI-generated material as equivalent to traditional CSAM for prosecution purposes, is attempting to close that gap.

Why Platform Responses Have Failed

X announced curbs on Grok's image generation capabilities after the initial reports surfaced, but independent analyses found that problematic images continued to be produced. This pattern, disclosure followed by announced fixes followed by continued problems, is familiar from every previous platform safety crisis. It reflects a structural incentive problem: the same AI capabilities that generate harmful content also generate user engagement, and platforms are reluctant to impose restrictions that reduce both.

Elon Musk's personal response underscored the gap between platform rhetoric and platform action. Rather than addressing the specific findings about Grok, Musk called Sánchez "a traitor to the people of Spain." Telegram founder Pavel Durov characterized the broader regulatory push as "steps toward total control rather than genuine safeguards." These responses frame the issue as a free speech debate rather than a child protection crisis, a framing that regulators are increasingly rejecting.

Child holding a smartphone with a protective shield icon overlaid on the screen
Australia became the first country to implement a nationwide under-16 social media ban in December 2025

Australia offers the clearest test case. In December 2025, it became the first country to implement a nationwide ban on social media for children under 16. By January 16, officials reported that platforms had removed approximately 4.7 million accounts identified as belonging to minors. That number is both a measure of the ban's enforcement and a measure of how many children were on platforms that now, by law, they should not have been using. Spain's under-16 ban, if implemented, would be the second major national restriction and could accelerate similar legislation across the EU.

The Takeaway

The investigations announced Tuesday are not symbolic gestures. They carry specific legal mechanisms (GDPR fines, DSA enforcement, criminal prosecution) with financial penalties large enough to affect even the largest technology companies. The coordinated timing across Spain, Ireland, the European Commission, and other jurisdictions signals that European regulators have moved from individual enforcement actions to a systematic campaign targeting AI-generated exploitation specifically.

The scale of investment flowing into AI development makes this a pivotal regulatory moment. If platforms can deploy generative AI tools that produce exploitative content at industrial scale and face only retroactive, jurisdiction-by-jurisdiction enforcement, the technology will outpace the law permanently. If the European campaign succeeds in establishing that AI safety obligations must be built into models before deployment rather than patched after harm occurs, it will set a precedent that shapes AI regulation well beyond child protection. The key metric to track is whether the EU's Digital Services Act investigation produces a binding ruling requiring pre-deployment safety testing for generative AI features by the end of 2026. If it does, that framework will likely become the global template, just as GDPR became the default standard for data privacy.

Written by

Morgan Wells

Current Affairs Editor

Morgan Wells spent years in newsrooms before growing frustrated with the gap between what matters and what gets clicks. With a journalism degree and experience covering tech, business, and culture for both traditional media and digital outlets, Morgan now focuses on explaining current events with the context readers actually need. The goal is simple: cover what's happening now without the outrage bait, the endless speculation, or the assumption that readers can't handle nuance. When not tracking trends or explaining why today's news matters, Morgan is probably doom-scrolling with professional justification.
