It’s 2026, yet here we are, still explaining the most fundamental principle of human dignity: consent. The Grok scandal has laid bare not just the recklessness of tech billionaires like Elon Musk, but the rot at the heart of how our society still views women’s autonomy over their own bodies and images.
Over the past weeks, untold numbers of women and children have had their images sexualised online by Grok, Musk’s AI chatbot integrated into X. Users discovered they could tag the bot with simple prompts like “put her in a bikini” or “remove her clothes,” creating a flood of nonconsensual sexualised deepfakes.
Let’s be absolutely clear: this is digital sexual violence. And the response from Musk? He claimed he was not aware of any “naked underage images” made by Grok. This from someone who, by multiple accounts, practically lives on his own platform. The “I didn’t know” defence from billionaire CEOs rings hollow when their platforms weaponise technology against women and girls for profit.
The confusion, or wilful ignorance, about consent on display in the comments beneath deepfake images on X is staggering. Consent is agreement or permission expressed through affirmative, voluntary words or actions that are mutually understandable to all parties involved. Critically, consent is never implied by your past behaviour, what you wear, or where you go.
UK media personality Narinder Kaur, who has been targeted with sexually explicit AI-generated content made with Grok, has watched users defend the technology, arguing that because she posts pictures of herself in short skirts or swimwear, she has no grounds to object when Grok strips her clothes off in images she never consented to. This is the same victim-blaming logic used to dismiss survivors of sexual violence: "Look how she dressed. She was asking for it."
No. A woman posting a photo of herself in a bikini on her own social media is exercising her choice, her agency, her consent. That image belongs to her. Someone using AI to manipulate that image, to strip her bare without permission, is committing an act of sexual violence. Full stop. The difference is consent. When a woman chooses to share an image of herself, she is making a decision about her own body and representation. When someone else manipulates that image against her will, particularly to sexualise or humiliate her, they are violating her autonomy. This isn't complicated. This is basic respect for human dignity.
This is not a neutral technology problem. This is a gender-based violence problem, enabled and amplified by technology. Deepfake pornography accounts for approximately 98 per cent of all deepfake videos online, with 99 per cent of the targets being women.
The international response has been swift and forceful. Indonesia and Malaysia temporarily blocked Grok, while California Attorney General Rob Bonta launched an investigation into xAI, stating that “the avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking”. The UK’s Ofcom opened a formal investigation, France referred the matter to prosecutors, and the European Commission demanded answers. Even Australian Prime Minister Anthony Albanese and the eSafety Commissioner have weighed in.
X's first response was to declare "Legacy Media Lies", a dismissal that speaks volumes about how seriously the platform takes violence against women. Elon Musk has since succumbed to pressure, announcing measures to geoblock the creation of such images in jurisdictions where "it's illegal" and restricting the feature to paying subscribers. Let that sink in. Their solution was to put sexual violence behind a paywall. Narinder Kaur pointed out that this change would not stop the abuse but would instead allow perpetrators to "monetise this feature".
The uncomfortable truth is that while creating sexualised deepfakes of children is treated as child sexual abuse material and outlawed virtually everywhere, laws specifically addressing nonconsensual deepfakes of adults vary widely. But what matters more than geography is this: it is immoral everywhere. Creating highly sexualised images of real people without their consent, especially women and underage girls, violates human dignity regardless of whether local laws have caught up with the technology.
Women have had enough. We’re tired of being told our bodies are public property. We’re tired of victim-blaming disguised as “free speech.” We’re tired of billionaires who claim ignorance while building tools that systematically harm us. We’re tired of having to explain that consent matters online and offline.
Governments must move faster. The “release first, ask questions later” approach to AI cannot stand when the cost is measured in trauma to women and girls. Platforms must be held accountable through meaningful penalties that actually hurt their bottom lines. And CEOs pulling the “I didn’t know” card need to understand it’s their job to know, and their responsibility to fix what they’ve broken.
Swedish Deputy Prime Minister Ebba Busch, who became a victim herself, put it plainly: "As a woman, I decide when, where and to whom I show myself in a bikini". This is the baseline.
That in 2026 we’re fighting chatbots for the basic dignity of bodily autonomy tells you everything about how far we haven’t come. Women have had enough. When will everyone else catch up?


