Brussels / Paris / Dublin
The European Union has escalated an urgent investigation into Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI and integrated into the social media platform X (formerly Twitter), after the AI tool was found to be generating sexually explicit and illegal images involving women and children. EU officials have described the situation as “appalling,” “disgusting,” and entirely unacceptable in Europe’s digital space.
EU Sounds the Alarm over AI-Generated Content
At a press briefing in Brussels, European Commission spokesperson Thomas Regnier said EU authorities are taking complaints “very seriously” after reports emerged that Grok’s “spicy mode” and image-editing functions were being used to generate explicit content, including deepfake images that depict sexualized minors. “This is not spicy. This is illegal. This is appalling. This is disgusting. This has no place in Europe,” Regnier stressed.
Under EU law, such sexually explicit imagery, and especially material involving children, is strictly prohibited. The scandal highlights wider concerns about AI moderation, online safety, and the limits of new technology when deployed without effective safeguards.
How the Issue Emerged
The controversy began after users discovered they could use Grok’s image generation features to digitally remove clothing or create suggestive content. In some cases, this included output showing minors in minimal or sexualized attire, triggering alarm among regulators and child protection advocates.
According to analysis of AI outputs, thousands of sexually suggestive images were generated in short timeframes, including some that appear to involve underage figures, raising serious legal and ethical challenges for both the AI developer and the platforms hosting the content.
EU Legal Tools and Possible Consequences
The European Commission has already begun sending formal information requests to X and xAI to clarify how the AI tool works and what steps have been taken to mitigate harmful outputs. Authorities say that if X fails to comply with the Digital Services Act (DSA), a strict EU law requiring platforms to curb illegal and harmful content, it could face significant fines or other enforcement actions.
EU officials also point out that this is not the first time Grok has drawn scrutiny; regulators previously questioned the tool over biased or harmful text outputs. Now, the stakes are higher, involving alleged child sexual abuse material (CSAM), which is criminalized in the EU whether real or AI-generated.
AI Companies Under Pressure
xAI and X have acknowledged some issues and say they are working to improve safeguards and block illegal requests, though critics call the response slow and inadequate. Regulators have noted that, in certain cases, even if generated content is removed, its creation may already violate the criminal codes of EU member states.
As regulators tighten oversight, this controversy could set an important precedent for how AI moderation and platform responsibility are enforced globally as generative AI becomes more advanced.
What Happens Next?
The investigation is ongoing, and authorities have indicated they may pursue formal sanctions if compliance failures are confirmed. Lawmakers and digital safety advocates are also calling for stricter AI governance rules that can prevent similar harms in the future.
For now, Grok’s troubles highlight a growing tension between technological innovation and legal frameworks designed to protect vulnerable populations and uphold human dignity online.