Teenage Girls Sue xAI, Claiming ‘Devastating’ Injuries From Grok AI’s Child Sexual Abuse Images

A new child sexual abuse lawsuit, filed Monday by three girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse materials with their faces and likenesses through its Grok AI technology.

“Their lives have been disrupted by the loss of privacy, dignity, and personal safety caused by the production and distribution of this CSAM,” the filing said. “xAI’s financial gains from the increased use of its image and video processing product have come at a personal and social cost.”

From December to early January, Grok allowed Grok AI and X social media users to create sexualized, AI-generated images of real people without their consent, sometimes known as deepfake porn. Reports estimate that Grok users created 4.4 million “naked” or “nude” images, 41% of the total number of images generated, over a nine-day period.

X, xAI and their safety and child safety divisions did not immediately respond to a request for comment.

The wave of “naked” images sparked outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government officials have asked Apple and Google to remove the app from their app stores for violating store policies, but no US government investigation into X or xAI has been opened. A similar but separate class action lawsuit was filed (PDF) by a South Carolina woman in late January.

The deepfake trend has highlighted just how adept modern AI image tools have become at creating realistic-looking content. The new complaint says Grok’s self-proclaimed “spicy AI” and “dark arts” amounted to placing children in “any position, even if it’s sick, even though it’s against the law.”

“To the viewer, the resulting video appears completely real. To the child, his or her identifying features will now be permanently attached to a video showing his or her body being sexually abused,” the complaint reads.

The complaint alleges that xAI is at fault because it did not implement industry-standard safeguards that would have prevented abusers from making this content. It says xAI has licensed its technology to third-party companies abroad, which sell subscriptions that abusers have used to create child sexual abuse images bearing victims’ faces and likenesses. Those requests were processed on xAI’s servers, making the company liable, the complaint argues.

The case was filed by three Jane Does, pseudonyms used to protect the teenagers’ identities. Jane Doe 1 first learned that abusive, AI-generated pornography depicting her was circulating online through an anonymous Instagram message in early December. The filing says that user tipped her off to a Discord server where the material was being shared. That led Jane Doe 1 and her family, and ultimately law enforcement, to identify and arrest one of the perpetrators.

The ensuing investigation led the families of Jane Does 2 and 3 to learn that their daughters’ photos had been altered into abusive material using xAI’s technology.


