A.P. Report: AI Race Could Increase Deepfake Porn Problem


Nonconsensual Deepfake Porn Expected to Increase With AI Popularity

Apr 19, 2023

Revenge porn is a topic we’ve covered extensively on the Dordulian Law Group blog. Our founder and president, Sam Dordulian, recently appeared on the Dr. Phil Show to discuss the issue and offer guidance for survivors on how to obtain justice through civil litigation.

AI Race Could Increase Deepfake Porn Problem

As a new report from the Associated Press (AP) notes, while artificial intelligence is currently all the rage, a “darker side” of the easily accessible technology exists. Such machine learning tools can create art, design advertising campaigns, and even write content like what you’re reading now, but nonconsensual deepfake pornography is an issue that could explode as a result of the artificial intelligence revolution.

Deepfakes refer to videos and images created digitally or altered using artificial intelligence (AI) or machine learning. The first porn made with the technology spread across the web several years ago, when a Reddit member shared clips showing the faces of women celebrities superimposed on the bodies of porn actors, according to the AP.

“Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images – essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners,” the AP said.

The AP report pointed to specific issues of concern with AI and deepfakes:

  • The technology makes it easier to create sophisticated and visually compelling deepfakes.
  • The problem could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse, told the AP. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

Can Anyone’s Internet Image be Turned Into Deepfake Nonconsensual Pornography?

The AP’s report included the case of Noelle Martin, a 28-year-old woman from Australia. Ten years ago, she ran a Google search for an image of herself out of curiosity and discovered an unexpected reality: her picture had been turned into deepfake porn.

Martin told the AP that, to this day, she doesn’t know who created the fake images or videos of her engaging in sexual intercourse. The AP said she suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

Martin was horrified and, over a period of time, contacted several websites to try to have the images removed. Some didn’t respond. Others took the images down, only for her to quickly find them again, according to the AP.

“You cannot win,” Martin said to the Associated Press. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

“The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment – essentially blaming her for the images instead of the creators,” the AP reported.

Martin began working on legislation in Australia that led to a national law fining companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for deepfake porn content issued by online safety regulators.

But the AP notes that “governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world.” Martin, who is now an attorney and legal researcher at the University of Western Australia, told the AP that she believes the problem has to be controlled through “some sort of global solution.”

How are AI Models Addressing Deepfake Porn and Explicit Images?

The AP cited a number of platforms and offered details on what they are doing at an individual level to eradicate nonconsensual deepfake porn and sexually explicit images.

  • OpenAI: The company says it removed explicit content from the data used to train its DALL-E image generator, limiting users’ ability to create these types of images. OpenAI also filters requests and says it blocks users from creating AI pictures of politicians and celebrities.
  • Midjourney: Another model, Midjourney, encourages users to flag problematic images to the moderators. The platform also blocks use of certain keywords.
  • Stability AI: In November, Stability AI released an update that removed the ability to create explicit pictures using its image generator Stable Diffusion. These changes were made after reports that users were creating nude pictures of celebrities using the technology.
  • TikTok: TikTok announced last month that all deepfakes, or manipulated content, showing realistic scenes must be labeled to indicate that they’re fake or altered in some manner. Deepfakes of young people and private figures are no longer permitted. The company previously banned sexually explicit content, as well as deepfakes that misled viewers and caused harm.
  • Twitch: Twitch updated its policies on explicit deepfake pictures after a popular streamer, Atrioc, was found to have an open deepfake website in his browser during a late-January livestream. The site featured phony pictures of Twitch streamers. The platform already prohibited explicit deepfakes, but now if a user so much as shows a glimpse of such content – even if simply intended to express outrage – the image(s) “will be removed and will result in an enforcement,” the company wrote in a blog post.
  • Apple/Google: Apple and Google removed an app that ran sexually suggestive deepfake videos of actresses in order to market the product. Deepfake porn research is rare, but a 2019 report by DeepTrace Labs found that the technology was almost exclusively weaponized against women, and that the most targeted individuals included Western actresses.
  • Meta: Take It Down is an online tool that allows teens to report images and videos on the internet. In February, Meta, along with adult sites such as OnlyFans and Pornhub, joined the initiative. The reporting site is for both regular images and AI-generated content, which has become an increasing concern for child safety organizations.

How California Offers Justice for Revenge Porn Survivors

Revenge porn refers to the sharing or distribution of sexually explicit videos or images of an individual without their consent. The crime of revenge porn can be committed whether the images or videos were taken when the victim and the offender were intimately acquainted, or if the offender took photos/videos without the victim’s consent. If someone distributes intimate photos or videos without your consent, it’s illegal and grounds for a lawsuit to recover financial damages.

California was the first state to outlaw revenge porn back in 2013. That year, legislation made the sharing of nonconsensual intimate images illegal. First-time revenge porn offenders currently face a penalty of up to six months in county jail.

To secure financial compensation in a civil lawsuit, victims of revenge pornography must come forward and file a claim with Dordulian Law Group. A revenge porn civil lawsuit may lead to a cash settlement that includes damages for emotional trauma, loss of earning capacity, diminished quality of life, and more.

Contact California’s Best Revenge Porn Attorneys for a Free Consultation

For a free and confidential consultation regarding your deepfake or revenge porn case, contact a member of the Dordulian Law Group (DLG) team today by calling 866-GO-SEE-SAM.

DLG’s unique 24/7 network of support professionals, known as the SAJE Team (Sexual Assault Justice Experts), was created by Sam Dordulian, a former sex crimes prosecutor in the Los Angeles District Attorney’s Office and member of RAINN’s National Leadership Council.

The DLG SAJE Team is available 24/7 to answer any questions you may have and begin the process of filing your revenge porn claim in an effort to recover maximum financial damages on your behalf. Our revenge porn attorneys will listen to the facts of your case, conduct a thorough investigation, and give you an estimated timeline for how long your case may take to settle.

We have secured countless multimillion dollar settlements for sexual assault cases on behalf of our clients:

  • A $2,250,000.00 settlement for a survivor raped by a rideshare driver
  • A confidential multi-million dollar settlement for a client who was raped by a man she met on a ‘Sugar Daddy’ website
  • A $2,000,000.00 child sexual abuse settlement under California AB 218
  • A confidential maximum financial settlement for a woman assaulted by an employer – although the incident involved minimal contact, our attorneys were able to secure the damages award under the eggshell plaintiff rule

Ready to file a claim and pursue justice through a financial damages award? Our expert attorneys are available online or by phone now.

Sam Dordulian and his team of Los Angeles, California, revenge porn and nonconsensual deepfake pornography lawyers are dedicated to helping survivors obtain the justice they deserve through maximum financial damages awards.

Contact us today at 866-GO-SEE-SAM to get justice for your deepfake or revenge porn case.
