Images depicting women account for the vast majority of AI-generated NSFW images detected online, according to data from deepfake-detection firm TruthScan. The images are almost always created without the subject's consent, are often indistinguishable from real photographs, and are increasingly easy to produce.
Two years ago, fabricated NSFW images of a famous pop star began circulating widely on social media. Though convincing, the images were fake—generated using artificial intelligence trained on real photographs of the singer. They spread faster than platforms could remove them.
The incident was not an anomaly. While celebrities are frequent targets, researchers say the broader impact falls on everyday women whose likenesses are used to create explicit images without their consent.
Independent reporting has since confirmed the scope of the issue. A Guardian investigation found that nearly 4,000 celebrities had been targeted by deepfake pornography, with women making up the overwhelming majority of victims. Advocacy groups warn that the real number—once private individuals are included—is likely far higher.
According to Christian Perry, CEO of TruthScan, one increasingly common vector is dating-app impersonation. Scammers use real photos of women to create fake profiles, then generate additional images—sometimes explicit—to extract money from unsuspecting users.
“In some cases, the scammer will create synthetic images of the woman they’re impersonating and send them to a ton of people,” Perry said. “The critical point is really that these images are being created using someone’s likeness without their knowledge or consent, and actually causing harm in many cases.”
Researchers at TruthScan believe women may be disproportionately affected due to a combination of demand and data availability. Content featuring women vastly outnumbers other categories online, providing more material for AI systems to learn from and replicate. The result is a feedback loop: more data leads to more convincing fakes, which in turn increases volume and reach.
Experts caution against viewing this as a technical problem alone. While detection tools and platform policies are improving, victims often face delays in takedowns, limited legal recourse and lasting reputational harm.
The issue is drawing renewed attention as lawmakers debate federal protections against AI-generated deepfakes and platforms face pressure to speed removals. As image tools become cheaper and more realistic, regulators and courts are increasingly being asked to decide whether “synthetic” harm still counts as real harm.
