
Teens sue Elon Musk’s xAI over Grok’s AI-generated CSAM


One victim alleges that explicit, AI-generated images of herself and at least 18 other minors were posted on Discord.


Emma Roth

Three Tennessee teens are suing Elon Musk’s xAI over claims that the company’s Grok AI chatbot generated sexualized images and videos of themselves as minors, as reported earlier by The Washington Post. The proposed class action lawsuit, filed on Monday, accuses Musk and other xAI leaders of knowing that Grok would produce AI-generated child sexual abuse material (CSAM) when launching its “spicy mode” last year.

The plaintiffs include two minors and an adult who was underage when the events in the lawsuit took place. One of the victims, identified as “Jane Doe 1,” alleges that last December, she learned that explicit, AI-generated images of herself and at least 18 other minors were available on Discord. “At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses,” the lawsuit claims.

The perpetrator, who has since been arrested, allegedly used Jane Doe 1’s AI-generated CSAM “as a bartering tool in Telegram group chats with hundreds of other users, trading her CSAM files for sexually explicit content of other minors.” The lawsuit claims the perpetrator generated the explicit images of Jane Doe 1 and the two other victims using Grok. It also alleges that xAI “failed to test the safety of the features it developed” and that Grok is “defective in design.”

Musk and xAI came under intense scrutiny after Grok flooded X with explicit images of adults and minors. The incident sparked nationwide calls for the Federal Trade Commission to investigate Grok, a probe from the European Union, and a warning from UK Prime Minister Keir Starmer. The Senate also passed a bill in January that would allow victims of nonconsensual deepfakes to sue the people who created the images, while the Take It Down Act, which President Donald Trump signed into law in 2025, will criminalize the distribution of nonconsensual, AI-generated deepfakes when it goes into effect in May.

Though X has tried to make it harder for users to edit images with Grok, The Verge has found that it’s still possible to manipulate images uploaded to the platform. X has maintained that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” X didn’t immediately respond to The Verge’s request for comment.

“These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company’s AI tool and then traded among predators,” one of the victims’ lawyers, Annika K. Martin of Lieff Cabraser, said in a statement. “We intend to hold xAI accountable for every child they harmed in this way.”

The lawsuit seeks damages for victims impacted by Grok’s “illegal images.” It also asks the court to prevent xAI from generating and spreading alleged AI-generated CSAM.
