New California Bill Aims to Combat AI-Generated Child Sexual Abuse

AI-Generated Child Sexual Abuse Targeted Under New California AB 1831 Law

Jan 17, 2024

A California lawmaker has introduced a bill aimed at a growing threat: AI-generated depictions of child sexual abuse.

California Assembly Bill 1831 (AB 1831) was authored by Democratic Assemblymember Marc Berman, who represents Silicon Valley’s Menlo Park. The legislation would update the state’s penal code to criminalize the production, distribution, or possession of AI-generated child sex abuse material (a form of what is commonly known as image-based sexual abuse, or IBSA). The bill would effectively outlaw such content even if the material is fictitious, a report from Politico confirmed. Politico also reported that one of the organizations supporting California AB 1831 is Common Sense Media, a nonprofit founded by Jim Steyer that “for years has advocated for cyber protections for children and their privacy.”

Assemblymember Berman began working on the issue of AI child sex abuse material after being approached by law enforcement officials who informed him that “current law prohibiting the creation and possession of child pornography only applies to content of actual children, not fake images crafted using images scrubbed from the internet,” according to a report from the San Mateo Daily Journal.

“The legislation has the potential to open up a new avenue of complaints against social media companies, who are already battling criticisms that they don’t do enough to eradicate harmful material from their websites. It’s one of at least a dozen proposals California lawmakers will consider this year to set limits on artificial intelligence,” Politico said.

AB 1831’s introduction follows the passage of similar legislation last year, California Assembly Bill 1394, which aims to hold social media companies liable if they fail to combat child sexual abuse material or allow users to commit child sex trafficking on their platforms.

Signed by Governor Newsom in October 2023, AB 1394 allows officials to hold social media platforms such as TikTok, Instagram, Facebook, and Snap liable through potential civil litigation for “knowingly facilitating, aiding, or abetting commercial sexual exploitation,” according to the bill’s language.

Politico reported that AB 1394 was opposed by the following social media platforms, organizations, and businesses:

  • The California Chamber of Commerce
  • Technet
  • NetChoice
  • Google
  • Pinterest
  • TikTok
  • Meta (the parent company of Instagram and Facebook)

“Those tech groups argued the law could inadvertently harm kids by creating a chilling effect in online spaces,” Politico said.

Unlike AB 1394, the new AB 1831 legislation does not directly target the social media platforms, but rather takes aim at creators and distributors of AI-generated child sex abuse material (CSAM). AI-generated CSAM is expected to increase exponentially as the technology becomes more sophisticated.

“In just one quarter last year, Meta sent 7.6 million reports of child sexual abuse material to the National Center for Missing and Exploited Children (NCMEC),” Politico confirmed.

How is AI-Generated Child Sex Abuse Material Produced?

Politico’s report noted that AI-generated CSAM is created by digitally altering existing images and videos depicting abuse.

“AI-generated content depicting minors still relies on scraping information and images from real sexual abuse material and can lead to real-life abuse of children,” Assemblymember Berman said to the media outlet. “Some law enforcement agencies in California have already encountered the material,” Berman added, noting that authorities have to date been unable to prosecute criminals who create and distribute CSAM “because it is digitally-manufactured.”

“You could argue that every AI-generated image actually victimizes thousands of real children,” Berman told Politico. “Because they are a part of the formula that goes into creating that AI-generated image.”

Lawmakers Warn That AI-Generated Child Sex Abuse Material Is an ‘Imminent Threat’

In September 2023, attorneys general from every state warned of a “race against time” in curbing AI-generated child sex abuse material.

Accordingly, officials from around the country are taking steps in an effort to combat CSAM:

  • All 50 attorneys general issued a letter to Republican and Democratic leaders urging federal lawmakers to establish an expert commission to study the means and methods by which AI can be used to exploit children.
  • The letter also urged expansion of existing restrictions on child sexual abuse materials specifically to cover AI-generated images.
  • The Senate Judiciary Committee recently subpoenaed the CEOs of X (formerly known as Twitter), Snap, and Discord to testify at an upcoming hearing on the sexual exploitation of children online.
  • New Mexico Attorney General Raúl Torrez recently sued Meta over claims Instagram and Facebook proactively served sexually explicit images to kids and allowed human trafficking of minors.

In addition to California, Pennsylvania and Oklahoma are currently considering bills related to AI-generated sexual exploitation.

Assemblymember Berman told Politico that there could be more action from the California Legislature in the future aimed at reducing online exploitation of children.

“The first step is we have to make sure that the images are illegal,” he said, adding that California needs to do “much more” to hold every actor accountable, including tech companies and platforms, Politico reported.

How Common is AI-Generated Child Sex Abuse Material?

The Internet Watch Foundation has confirmed that “thousands of highly realistic explicit images of children are being produced using AI.” Those images were published on a single dark web forum over just a one-month period, the San Mateo Daily Journal reported.

Legal Help for Victims of Image-Based Child Sexual Abuse Material (CSAM)

Dordulian Law Group (DLG) offers free and confidential consultations for survivors of AI-generated image-based child sexual abuse (IBSA) and child sexual abuse material (CSAM). Call a member of our dedicated team today at 866-GO-SEE-SAM to schedule a no-obligation case evaluation.

Our Sexual Assault Justice Experts are here to help survivors secure justice. Contact our top-rated attorneys online or by phone for a free consultation today.

DLG’s unique 24/7 network of support professionals, known as the SAJE Team (Sexual Abuse Justice Experts), is led by Sam Dordulian, a former sex crimes prosecutor in the Los Angeles District Attorney’s Office and member of RAINN’s National Leadership Council. Dordulian has more than 25 years of experience which has entailed securing over $100 million in settlements and verdicts for victims.


Go See Sam