UK Government Criticizes X for Limiting Grok AI Image Editing to Paid Subscribers, Labeling It ‘Insulting’ to Victims of Abuse

The UK government has strongly condemned Elon Musk’s platform X for restricting Grok AI’s image editing and generation features to premium subscribers, describing the change as “insulting” to victims of misogyny and sexual violence. The criticism follows widespread outrage over the AI tool’s role in producing non-consensual sexualized deepfake images, including those depicting women and children.

A spokesperson for Prime Minister Sir Keir Starmer said the restriction fails to resolve the underlying problem and effectively monetizes a harmful feature by turning it into a “premium service.” The spokesperson also noted that the speed of the change shows X can act quickly when motivated, and called for more responsible measures that prevent abuse entirely.

The issue arose after reports that Grok complied with prompts to digitally alter photos, such as removing clothing from images of individuals without their consent. Although the feature is now restricted to paid subscribers on X, who must provide verified payment details, concerns persist that it may still be accessible through Grok’s standalone app or website.

Prime Minister Starmer called the generation of sexualized AI images of adults and children “disgraceful” and “disgusting,” vowing that such unlawful content would not be tolerated. He pledged full backing for the regulator Ofcom to enforce the Online Safety Act, with options including fines, access restrictions, or even an effective ban on X in the UK if the platform fails to comply.

Ofcom has initiated urgent inquiries and contacted X and xAI, but has not yet issued a public response on next steps. X has not commented on the latest developments.

Experts and campaigners have echoed the government’s stance. Professor Clare McGlynn highlighted the lack of proper ethical safeguards, arguing that the paywall does not eliminate risks and prioritizes profit over safety. The Internet Watch Foundation reported identifying criminal child abuse imagery apparently created by Grok, stressing that the restriction cannot reverse existing harm.

Victims, including women personally targeted by the tool, have dismissed the change as inadequate and urged a full overhaul with robust built-in protections.

This incident intensifies debates over generative AI accountability, positioning the Grok case as a critical challenge for regulating online safety in the AI era.
