Grok, the artificial intelligence chatbot built into X, is facing international scrutiny after generating sexually explicit images of real people without their consent, including in alleged cases involving children, a Reuters investigation has found. The images have prompted regulatory concern in multiple countries.

The controversy surfaced after Julie Yukari, a 31-year-old musician based in Rio de Janeiro, posted a New Year’s Eve photo of herself in a red dress with her black cat on X. Within hours, users began prompting Grok to digitally alter the image to depict her in a bikini. Grok complied, and near-nude AI-generated images of Yukari soon circulated on the Elon Musk-owned platform.

“I was naive,” Yukari told Reuters, saying she never expected the bot to comply with such requests.

Reuters identified numerous similar cases across X, including instances where Grok appeared to generate sexualised images of children. X did not respond to Reuters’ request for comment. Earlier, xAI — which owns Grok — dismissed reports of sexualised images of children circulating on the platform, calling them “Legacy Media Lies.”

The spread of the images has prompted swift international reaction. French ministers said they had reported X to prosecutors and regulators, describing the content as “sexual and sexist” and “manifestly illegal.” India’s IT ministry said in a letter to X’s local office that the platform failed to prevent the misuse of Grok to generate obscene and sexually explicit content. US regulators declined to comment.

According to Reuters, the surge in AI-generated “digital undressing” began in recent days. A review of public Grok requests during a 10-minute window on Friday counted more than 100 attempts to alter photos so subjects appeared in bikinis, mostly targeting young women.

In at least 21 cases, Grok fully complied, producing images depicting women in highly revealing or translucent bikinis. In several others, it partially complied by removing outer clothing. The identities and ages of most of those targeted could not be verified.

Experts say X’s integration of image editing into a mainstream social platform has dramatically lowered barriers to abuse. “In August, we warned that xAI’s image generation was essentially a nudification tool waiting to be weaponised,” said Tyler Johnston, executive director of AI watchdog The Midas Project.

Dani Pinter, chief legal officer of the National Center on Sexual Exploitation’s Law Center, said X failed to remove abusive material from its AI training data and should have blocked users from requesting illegal content. “This was entirely predictable and avoidable,” she said.

As backlash mounted, Musk appeared to make light of the controversy, responding with humorous reactions to AI-edited images of public figures, including himself.

For Yukari, the damage was personal. After she protested on X, copycat users requested even more explicit AI-generated images. “The New Year has turned out to begin with me wanting to hide from everyone’s eyes,” she said, “and feeling shame for a body that is not even mine, since it was generated by AI.”