Rape-fantasy rooms, deepfakes and bots: how chat apps and AI are fueling online sexual violence
Investigations and activists say messaging platforms, AI ‘undress’ tools and copycat sites are enabling coordinated abuse and circulation of nonconsensual sexual imagery across borders.

For survivors, campaigners and police, a single throughline has emerged from multiple investigations in recent years: unmoderated chat rooms, encrypted messaging apps and increasingly accessible artificial intelligence tools have combined to make nonconsensual sexual imagery and coordinated sexual violence easier to produce, distribute and hide.
High-profile criminal cases and undercover reporting have exposed a range of online communities where men share images and videos of women without consent, trade instructions for sedating and assaulting relatives, and use AI to fabricate pornographic material. In France, the trial of Dominique Pelicot — a retired electrician accused of inviting dozens of men to rape his drugged wife over many years — drew global attention to a forum called Coco and a subforum labelled "without her knowledge." Pelicot was convicted and sentenced to 20 years in prison, and French authorities shut down the site; its founder, identified in reporting as a software engineer, was arrested in January.
Investigations by journalists and activists show Coco has been replaced in some corners of the web by copycat services such as Bounty.chat, which advertises a range of fetishes including candaulism, in which a person exposes a partner, or intimate images of them, to others. Activists and police say those platforms, alongside large encrypted networks, enable the rapid reappearance of illicit groups after takedowns.
The messaging app Telegram has been central to many of the revelations. Undercover reporters in Germany said they infiltrated multiple "rape chats" where users exchanged tips on how to sedate and sexually assault women in their own households and circulated images and live footage of attacks. One German broadcaster reported that members shared instructions on which drugs to use and links to shops selling sedatives disguised as innocuous products. When investigators and platform moderators shut down groups, members routinely migrated to new channels or used invitation links and bots to rebuild their communities.
The scale is large and international. Activists in Serbia reported Telegram groups with tens of thousands of members where users shared intimate photos of women — sometimes family members — and sold content tagged as underage. The Cambridge-based Internet Watch Foundation said it has flagged thousands of examples of child sexual abuse imagery on Telegram since 2022, including material involving very young children. In China and South Korea, journalists uncovered hundreds of Telegram groups and chat rooms where secretly filmed images and deepfake pornography were shared; in South Korea, reporting found that students and young men used images from social media and school yearbooks to generate AI-manipulated material and circulate it in so-called "Humiliation Rooms."
Those digital manipulations are powered by a growing market of generative-AI tools. Reporters and researchers have found "one-click" undress apps, face-swapping services and voice-cloning tools that can produce hyperrealistic audio and video deepfakes on a consumer's smartphone. Investigations noted bots on messaging platforms that generate explicit images automatically; one such bot reportedly amassed hundreds of thousands of subscribers and offered promotional deals to produce AI-generated nudes.
For victims, the consequences compound over time. Organizations that help people remove nonconsensual imagery say the material is notoriously hard to eradicate once it spreads. The U.K. Revenge Porn Helpline reported it helped remove tens of thousands of explicit images in a single year, yet a large share of previously reported photographs continued to circulate. The helpline and campaigners describe a cycle of re-victimization: images re-emerge on new services and are recirculated by networks that rebuild after takedowns.
Regulators and law enforcement have taken varied approaches. French prosecutors arrested Telegram's Russian-born founder on allegations that he allowed criminal activity on the app; he was placed under formal investigation and released on bail. Ofcom, the U.K. communications regulator, is investigating multiple online services for alleged failures to protect users from illegal content under rules that came into force in March. Those regulations carry potential fines and obligations to comply with information requests, and they apply to services with a significant U.K. audience regardless of where they are based. In Italy, a pornographic site said to host doctored images of public figures was taken down after widespread public condemnation.
Platform companies and messaging services have pledged to act. Telegram said it would block users found to be abusing the service and remove groups that facilitate criminal activity, and it has intervened in response to specific investigations. But platform measures have not always ended the flow of material. When groups are deleted, members often share new links or redeploy automated tools to recreate the rooms. Experts and campaigners say enforcement is made more difficult by cross-border legal hurdles, encrypted channels and the rapid pace of new AI services.
Civil society groups say the technical and legal gaps are widening the power imbalance between perpetrators and victims. Campaigners who infiltrated groups, and victims who spoke publicly, described administrators who sold intimate images, used shorthand tags to advertise material involving minors for sale, and openly celebrated abuse. In some documented cases, footage of assaults was posted without the victim's knowledge and later spread to commercial porn sites. Deepfake producers have used images scraped from social media, workplace profiles and school databases to fabricate explicit material, blurring the line between image theft and synthetic production.
Lawmakers and regulators are under pressure to update statutes and enforcement mechanisms. Some jurisdictions already criminalize sharing, or threatening to share, deepfake images without consent; others are pursuing platform liability, higher fines, or criminal charges for company executives who fail to cooperate with investigations. Advocates emphasize the need for faster removal processes, cross-border collaboration among law enforcement agencies, stronger obligations on app stores and payment providers, and technical measures to detect manipulated media.
Technology researchers warn that as generative AI becomes more capable and more widely distributed, the volume and realism of nonconsensual sexual material could rise. A U.K. government estimate cited by campaigners projected a sharp increase in doctored photos being shared globally in coming years. At the same time, digital-forensics advances and watermarking initiatives offer potential tools for verification and takedown, though their deployment and adoption remain uneven.
Survivors and campaigners stress that the harms extend beyond images. Public campaigns and prosecutions, they say, must address the social and criminal ecosystems that normalize and traffic in sexual violence online. "Women are literally not safe anywhere," one woman who uncovered groups sharing images of family members told reporters. Activists who exposed large-scale groups said many victims feel too intimidated or trapped to press charges, underscoring the challenge of translating exposure into prosecutions.
Investigations by journalists, law enforcement actions and regulatory probes have prompted platform responses and some takedowns, but experts say the underlying drivers — encrypted, portable distribution channels and the democratization of AI — will require a mix of faster technical countermeasures, tighter legal frameworks and sustained international cooperation to curb the spread of abusive content online.