Elon Musk’s AI Tool Accused of Generating Explicit Taylor Swift Videos Without Prompting

Written by Were Kelly

Elon Musk’s AI video generator, Grok Imagine, is facing backlash after allegations it produced sexually explicit deepfake videos of Taylor Swift without users requesting such content. Experts say the case highlights systemic misogyny in AI design and the urgent need for stronger safeguards.

Clare McGlynn, a Durham University law professor who has helped draft UK legislation to criminalize pornographic deepfakes, told the BBC:

“This is not misogyny by accident, it is by design. Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to.”

According to a report by The Verge, the AI tool’s new “spicy” mode “didn’t hesitate to spit out fully uncensored topless videos” of the pop star, without any explicit prompt from the user. The report also said the platform lacked proper age verification, despite UK rules introduced in July 2025 requiring such checks for explicit content.

Grok Imagine is operated by xAI, a company founded by Musk. xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner,” yet the allegations suggest these rules are not being enforced. xAI has been approached for comment but has yet to respond.

The controversy follows an earlier incident in January 2024, when sexually explicit Taylor Swift deepfakes went viral, garnering millions of views on X and Telegram before being removed.

Deepfakes are AI-generated images or videos that depict real people in fabricated scenes, often by mapping one person’s face onto another’s body. When created or shared for sexual purposes without consent, they are increasingly recognized as a form of image-based abuse.

Testing Grok Imagine’s safety filters, The Verge journalist Jess Weatherbed entered the innocuous prompt: “Taylor Swift celebrating Coachella with the boys.” The AI generated still images of Swift in a dress with men in the background, then offered options to animate the scene in “normal,” “fun,” “custom,” or “spicy” modes.

After selecting “spicy,” Weatherbed said the output escalated instantly.

“She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing — completely uncensored, completely exposed. I in no way asked it to remove her clothing.”

Similar testing by Gizmodo reportedly produced explicit depictions of other famous women. Some searches, however, returned blurred or moderated videos. The BBC has not independently verified the AI’s output.

Weatherbed accessed the tool’s paid version, costing £30, using a new Apple account. The only age check was entering a date of birth — a far cry from the “technically accurate, robust, reliable and fair” verification methods now required by UK law.

Under the new regulations, sites offering explicit content — including those with generative AI tools — must confirm users’ ages with stricter methods, such as ID checks. Ofcom, the UK media regulator, told the BBC it is “aware of the increasing and fast-developing risk GenAI tools may pose… especially to children” and is working to ensure platforms have “appropriate safeguards.”

UK law currently criminalizes creating pornographic deepfakes that depict children, and sharing non-consensual intimate images as so-called revenge porn. These protections do not yet cover the creation of non-consensual explicit deepfakes of adults.

Professor McGlynn helped draft an amendment that would make it illegal to create or request any non-consensual pornographic deepfake. The government has committed to passing it, but the change is not yet in effect.

Baroness Owen, who proposed the amendment in the House of Lords, said:

“Every woman should have the right to choose who owns intimate images of her. It is essential that these models are not used in such a way that violates a woman’s right to consent, whether she be a celebrity or not. This case is a clear example of why the Government must not delay.”

A Ministry of Justice spokesperson added:

“Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society, which is why we have passed legislation to ban their creation as quickly as possible.”

When explicit Taylor Swift deepfakes went viral in early 2024, X temporarily blocked searches for her name and claimed to be “actively removing” the images and taking “appropriate actions” against accounts sharing them.

Weatherbed told the BBC her team chose Swift to test Grok Imagine’s limits because of that incident.

“We assumed — wrongly now — that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, she would be first on the list.”

Swift’s representatives have been contacted for comment but have not yet responded.

The incident raises fresh questions about AI companies’ responsibilities to prevent harm, particularly when it comes to protecting women from image-based abuse — and whether profit and speed-to-market are being prioritized over safety.

SOURCE: REUTERS