
Stripped by AI: Inside Grok’s Ethical Meltdown on X

Written by Lisa Murimi

A disturbing trend has emerged among Kenyan users of Grok, the generative AI chatbot developed by Elon Musk’s xAI, raising serious ethical and privacy concerns. 

Grok, integrated into the X platform, allows users to generate images from text prompts. 

While intended for creative and constructive purposes, some users have exploited this technology to create and share explicit and non-consensual images of individuals, particularly women.

One such incident involved a Kenyan user prompting Grok to alter a publicly shared photo of a woman wearing a headscarf and sunglasses. 

The user asked Grok to remove the woman’s sunglasses, then issued further prompts that resulted in the generation of explicit and inappropriate images.

These images were subsequently shared on X, sparking outrage among users.

Critics argue that this misuse of AI technology constitutes a violation of privacy and dignity. One user expressed her disgust, saying, 

“Y’all utilizing grok badly but also I’m so ashamed that y’all actually find this funny. Using AI to strip clothes off someone isn’t curiosity, it’s violation. If that’s your idea of fun, you need more therapy than tech!”

Another unimpressed Kenyan wrote: “We’re here fighting femicide, rape culture and now we have to fight f****ng AI because men are asking Grok to undress women for fun. It’s men. Always men. Men violating, men laughing, men hiding behind machines to dehumanize us. It’s men, it’s men, it’s MEN.”

Some questioned the legality of such actions, stating, “Is there legal action that can be taken for this? This is extremely gross.”

Grok’s design, which includes modes like “Sexy” and “Unhinged,” has been criticized for lacking adequate safeguards against the generation of explicit content. 

Experts have raised concerns that Grok’s minimal content moderation allows for the creation of harmful and misleading images, including deepfakes and explicit content involving public figures.

In response to these issues, xAI has initiated efforts to improve Grok’s safety features, including hiring a “red team” to identify and mitigate misuse. However, the effectiveness of these measures remains a topic of debate.