New Study Indicates Global Regulators 'Failing' to Combat Revenge Porn


Study: 58% of Women and Girls Have Experienced Online Harassment on Social Media Platforms

Dec 14, 2023

The threat of online harassment via crimes such as revenge porn, deepfake pornography, image-based sexual abuse, and other forms of nonconsensual intimate content dissemination continues to increase with the expansion of generative AI (artificial intelligence).

A new report from the Center for European Policy Analysis (CEPA) indicates that one form of image-based sexual abuse in particular – revenge porn – is not being properly combatted by regulators around the world.

“Revenge porn represents a growing menace. Despite progress in enacting laws to combat the threat, more needs to be done,” the report noted.

Additionally, although ‘revenge porn’ tends to be used as a catch-all term, CEPA notes that it is often misleading: in many cases, private sexual images are disseminated without any act of revenge, and the pictures or videos would not be considered pornography.

Furthermore, CEPA’s report cites a recent study from the United Nations Educational, Scientific and Cultural Organization (UNESCO) which highlights the dangers of generative AI and a potential increase in victims impacted by what’s known as technology-facilitated gender-based violence (TFGBV).

What is Technology-Facilitated Gender-Based Violence (TFGBV)?

UNESCO defines technology-facilitated gender-based violence (TFGBV) as:

“… Any act that is committed or amplified using digital tools or technologies causing physical, sexual, psychological, social, political, or economic harm to women and girls because of their gender.”

“These forms of violence are part of a larger pattern of violence against women, occurring online and offline,” UNESCO said.

Forms of TFGBV may include:

  • Intimate image abuse
  • Doxing (revealing personal information)
  • Trolling (online harassment)
  • Sharing or disseminating revenge porn or deepfake images

“It [TFGBV] also encompasses misogynistic hate speech and efforts to silence and discredit women online, including threats of offline violence,” UNESCO noted.

Perhaps the most troubling finding from the UNESCO study comes from 2020 global estimates: 58% of young women and girls across the world have faced some form of gender-based violence on social media platforms.

What to Do if You’ve Been Victimized by Online Sexual Abuse or Harassment

Sexual assaults committed by way of online dating apps have increased significantly in recent years. Platforms such as Tinder now allow users to run criminal background checks on potential dates. As UNESCO’s study notes, however, safety features as well as apps specifically developed to help women be safer online “place an onus on the victim to protect themselves against online harms.”

Moreover, generative AI has the potential to expose unsuspecting users to new types of threats – some deployed deliberately by predators, others arising from the technology itself:

  • The creation of more realistic fake media, along with convincing but false AI outputs (often referred to as “hallucinations”)
  • Unintended biases embedded in platform outputs
  • Automated harassment campaigns
  • The ability to build “synthetic histories” (realistic false narratives)

“In addition, generative AI introduces the potential for unintended harms via embedded biases in the model training data,” UNESCO noted.

If you’ve been victimized by online sexual abuse or harassment, filing a police report as a first step is strongly recommended. However, many survivors of sexual assault – whether physical or online – may be hesitant to come forward and pursue a criminal complaint. Additionally, as Dordulian Law Group’s founder and top-rated revenge porn lawyer, Sam Dordulian, recently said on the Dr. Phil Show, many law enforcement officials are unfamiliar with how to handle these types of claims, especially given the proliferation of technology like generative AI.

Accordingly, filing a civil claim for damages with Dordulian Law Group’s (DLG) experienced Los Angeles, California, revenge porn and deepfake image dissemination lawyers can be a means of helping you secure the justice you deserve.

To speak with a member of our team regarding your image-based sexual abuse (IBSA) matter, contact DLG’s sex crimes lawyers today at 866-GO-SEE-SAM. We offer free, confidential, and no-obligation consultations where you can discuss whatever type of case you may have – whether revenge porn, deepfake pornography, image-based sexual abuse, nonconsensual intimate video dissemination, etc.

Additional Takeaways From UNESCO’s Generative AI/Revenge Porn Study

UNESCO’s study confirms what many regulators – including here in the United States – have warned about since ChatGPT became a household name earlier this year: Curbing illicit AI-generated content like revenge porn must be addressed immediately.

In September, prosecutors from all 50 states issued a dire warning to Congress calling the threat of AI-generated child pornography a “race against time.”

Perpetrators often utilize specific tactics when creating both adult revenge porn and child image-based sexual abuse material:

  • Overlaying the face of one person on the body of another
  • Harming previously unvictimized individuals by swapping their faces onto the bodies of victims who were actually abused
  • Altering the likeness of a real person – for example, from a photograph taken from social media – so that it depicts abuse

“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote in the letter. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

And the UNESCO study further echoes these calls for action to combat revenge porn and all forms of image-based sexual abuse. According to the study, generative AI not only opens victims up to potential “new harms” but can also lead to a proliferation in the number of attackers.

AI can allow perpetrators to conduct sustained and automated attacks by creating and disseminating content such as “posts, texts, and emails that are written convincingly from multiple ‘voices,'” the study said.

“This makes existing harms such as hate speech, cyber harassment, misinformation, and impersonation – all of which rank in the top five most common vectors of TFGBV – have a much wider reach and be more dangerous,” UNESCO added.

Furthermore, the study highlighted the potential for AI to generate “cyber-harassment templates” which bad actors may use to victimize people around the world.

Such technology can involve synthesizing fake pasts for people as well as modifying images to portray them in scenarios to which they did not consent (which can be used to propagate some of the most common TFGBV harms today, such as impersonation, hacking and stalking, and cyber-harassment).

UNESCO’s study included specific “attack vectors” which may be used by online sexual predators:

  • TFGBV on social media commonly starts with cyber harassment (used as a tactic 66% of the time), something that can be exacerbated with the help of AI-generated templates.
  • Text-to-image models can easily generate images of women in situations they did not consent to being in, thus creating a more realistic vector of image-based abuse.
  • Creating synthetic histories is a new vector of TFGBV harm: attackers intent on spreading misinformation can use text-generative AI models to produce convincing fake reports and histories that cast the target in a bad light, with the objective of sowing doubt and defaming the individual. According to UNESCO, this is “one of the top methods of inflicting TFGBV today.”

“Combating TFGBV harms due to generative AI requires a combination of measures by both generative AI developers and the technology companies that platform them, focused actions by civil society organizations, regulation and policies by governments, and raising awareness at an individual level. It requires expansive education on media and information literacy, allowing individuals to critically examine and engage with the media they encounter and arm themselves with the knowledge needed to navigate this new world of generative AI,” UNESCO said.

California Revenge Porn and Image-Based Sexual Abuse Laws

In California, victims of image-based sexual abuse, revenge porn, deepfakes, and other forms of nonconsensual intimate picture or video dissemination have legal rights and avenues for pursuing justice through financial compensation in civil court.

Filing a civil claim against a perpetrator could be the first step in helping you:

  • Remove the nonconsensual content from any online platforms
  • Secure financial damages
  • Punish the defendant to the fullest extent of the law (via both civil and criminal court, depending on the specifics of the case)

Ready to file a claim and pursue justice through a financial damages award? Our expert attorneys are available online or by phone now.

To speak with a Glendale, California, revenge porn or image-based sexual abuse attorney, contact DLG today at 866-GO-SEE-SAM.

Author

Samuel Dordulian


Sam Dordulian is an award-winning sexual abuse lawyer with over 25 years' experience helping survivors secure justice. As a former sex crimes prosecutor and Deputy District Attorney for L.A. County, he secured life sentences against countless sexual predators. Mr. Dordulian currently serves on the National Leadership Council for RAINN.



