Irish police probe reports of child abuse imagery made with Grok, but the crackdown misses the point
Governments worldwide have intensified scrutiny of Grok, the AI chatbot integrated into Elon Musk’s social platform X, after waves of explicit and non-consensual deepfakes, including images depicting minors, were traced to its misuse. Yet, examined logically, the true culprits are not the algorithms but the individuals deliberately prompting the tool to create illegal material.
Ireland’s Garda National Cyber Crime Bureau confirmed it is investigating roughly 200 reports of AI-generated abusive imagery produced via Grok. Detective Chief Superintendent Barry Walsh condemned the phenomenon as “an abhorrent disregard of personal dignity,” but critics note that the focus should center on prosecuting those who exploited the system rather than criminalizing the technology itself.
European Regulators Tighten Controls
The European Commission ordered X to preserve all internal data on Grok under the Digital Services Act while probing potential breaches. Commission spokesperson Thomas Regnier condemned the deepfake material as “illegal and disgusting,” citing concurrent concerns about Holocaust denial outputs. France expanded an existing inquiry into X, while the UK’s Ofcom warned that violations of the Online Safety Act could cost X up to 10 percent of its global revenue.

In Germany, Media Minister Wolfram Weimer urged legal action against X, warning of the “industrialisation of sexual harassment.” However, some European digital policy analysts have warned that blanket restrictions could stifle AI innovation if applied indiscriminately. Italy’s data protection authority and Swedish officials likewise condemned the circulation of synthetic explicit images, especially of public figures, but a regional debate is emerging on whether policing prompts might be a more balanced response.
Expanding Crackdown Across Asia and Beyond
Several Asian governments have taken harsher actions. Indonesia fully blocked Grok’s access, followed by Malaysia, which cited “repeated misuse” to generate obscene material. India’s IT ministry issued a warning to X demanding compliance within 72 hours or risk losing legal immunity protections.

Meanwhile, South Korea’s media regulator and Australia’s eSafety Commissioner both opened inquiries into Grok’s safety mechanisms. In the U.S., 43 Texas Democratic lawmakers urged an investigation aligned with the state’s 2025 law banning AI-generated child sexual abuse material. Canada’s AI Minister Evan Solomon stated that platforms must prevent AI-driven harm but stopped short of supporting an outright ban.
X Tightens Restrictions but Defends Technology
In response, X announced new safeguards, restricting Grok’s image-generation tools to verified, paying users. The company warned that anyone using prompts for illegal content would face the same penalties as those who upload such content directly.

Elon Musk echoed the point on his X account, asserting that “tools shouldn’t be blamed for human wrongdoing.” A study by the NGO AI Forensics analyzed more than 20,000 Grok-generated images, finding that 53 percent included people in minimal attire and 2 percent involved individuals appearing to be minors. That data is fueling both the regulatory backlash and the argument for punishing malicious users rather than banning generative tools.