The Express Gazette
Wednesday, December 31, 2025

Encrypted chats and consumer AI broaden reach of non‑consensual sexual content worldwide

Investigations and prosecutions expose networks that trade filmed assaults, stolen images and AI‑generated pornography as regulators and platforms scramble to respond

Investigations across Europe and Asia, court proceedings and platform probes have revealed a sprawling ecosystem of encrypted chat rooms, file‑sharing channels and consumer AI tools that facilitate the creation, trade and dissemination of non‑consensual sexual imagery and videos.

Authorities in France have pointed to one case as emblematic. In a public trial last year centred on the village of Mazan, 72‑year‑old Dominique Pelicot and dozens of other men were convicted after prosecutors said Pelicot drugged his wife and invited strangers to rape her over an extended period. Pelicot was sentenced to 20 years in prison. The forums that hosted sexualised discussion and coordination — including a chatroom known as Coco and a subforum labelled “without her knowledge” — were shut down, and the site’s founder, software engineer Isaac Steidl, was arrested earlier this year. Copycat sites such as Bounty.chat have since appeared and are the subject of fresh inquiries by French authorities.

Beyond that case, journalists, activists and law‑enforcement investigators have documented large, often anonymous groups on instant‑messaging platforms where intimate and explicit images are shared without consent. Serbian activist Stasa Ivkovic reported discovering Telegram communities with tens of thousands of members where administrators accepted submissions — including images of underage girls — sold access to paying users and later released material more widely after monetising it. Investigative teams in Germany and elsewhere infiltrated groups where members reportedly exchanged instructions on how to sedate and sexually assault women in their households, along with live footage and images of incapacitated victims.

The Cambridge‑based Internet Watch Foundation said it has flagged thousands of examples of child sexual abuse imagery on Telegram since 2022, including material involving children as young as two. Telegram’s founder, Pavel Durov, was arrested by French police last summer and charged over allegations that he allowed criminal activity on the app; he has been released on bail. Telegram has repeatedly said it will act against groups that misuse its platform, but journalists and NGOs say that when individual groups are shut down, users often migrate to new channels or to copies of the same groups.

Consumer‑grade artificial intelligence and automation tools have compounded the problem by lowering the technical barriers to producing realistic fakes. Journalists and researchers have uncovered hundreds of services, some unregulated and unregistered, that offer “one‑click” undress apps, instant face‑swap tools and voice‑cloning software. CNN and other outlets reported on Telegram chats in which users in China shared deepfake pornography created from photos of colleagues taken from social media. A voice‑cloning app described in reporting can create hyperrealistic audio imitations, and bots within messaging apps have been used to generate AI nudes in seconds.

The spread of affordable generative tools has been tied to national scandals. In South Korea, reporting last year documented networks of students and others who mined social media and school yearbooks to produce sexually explicit deepfakes, circulating them in so‑called “Humiliation Rooms.” Journalists who exposed the operations said rooms sometimes targeted the reporters themselves. In Italy, a platform known as Phica amassed hundreds of thousands of subscribers before regulators and hosting providers took it down after doctored images of public figures and private women appeared on the site.

Support and removal services report growing demand. The U.K.’s Revenge Porn Helpline said it helped remove about 81,000 sexually explicit images last year but that more than 61,000 previously reported images continued to circulate. The helpline reported a 260 percent increase in re‑victimisation in 2024. Separately, researchers monitoring deepfake and child‑abuse images say the volume of doctored photographs and other illicit material is rising rapidly; a U.K. government estimate cited in recent reporting projected the number of doctored photos shared in 2025 could reach about eight million, up from roughly 500,000 in 2023.

Regulators have introduced new enforcement tools but face legal and technical limits. Ofcom, the U.K.’s communications regulator, opened investigations into multiple online services earlier this year for suspected failures to protect users from illegal content under recently enacted duties. The rules allow penalties of up to £18 million or 10 percent of qualifying worldwide revenue for serious breaches, and criminal sanctions for senior managers who do not comply with information requests. National police units in several countries have launched investigations into both operators of illicit groups and into intermediaries that appear to facilitate distribution.

Prosecutors and non‑profit groups stress that removal and takedown alone do not erase harm, because images and videos can be copied, rehosted and repurposed across jurisdictions in ways that make permanent deletion difficult. At the same time, law‑enforcement officials and platform moderators confront a global and rapidly shifting landscape in which encrypted messaging, ephemeral hosting and consumer AI tools enable reuse of illicit material. Officials in multiple countries say they continue to pursue criminal cases, platform take‑downs and cross‑border cooperation to address both the trafficking of real sexual abuse imagery and the rise of AI‑generated pornography produced without consent.

The convergence of encrypted chat networks and easy‑to‑use generative AI has drawn increased attention from courts, regulators and civil‑society groups worldwide. Investigations and prosecutions in France, Germany, Italy, South Korea and elsewhere illustrate a range of enforcement approaches, but authorities and victim‑support organisations say the volume and mobility of content remain major obstacles to preventing re‑victimisation and prosecuting creators and distributors of non‑consensual sexual imagery.
