UK to ban deepfake AI nudification apps as part of online-safety push
Government proposes new offences prohibiting the creation and distribution of nudifying AI tools in a bid to curb online abuse of women and girls, with support from safety-tech firms.

The United Kingdom moved to ban deepfake-style nudification apps, proposing new offences to prohibit the creation and supply of AI tools that edit images to remove clothing. The government said the measures, announced on Thursday as part of a wider strategy to halve violence against women and girls, would criminalize both the production and distribution of nudifying software. Technology Secretary Liz Kendall said the aim is simple: women and girls deserve to be safe online as well as offline, and those who profit from or enable these apps will face the full force of the law. Existing rules already criminalize the non-consensual creation of sexually explicit deepfakes and intimate-image abuse under the Online Safety Act, but officials said the new offence would broaden the legal framework to cover the tools themselves and the services that run them.
Under the plan, ministers said they would join forces with technology companies to develop methods to combat intimate-image abuse. The government highlighted ongoing work with SafeToNet, a UK safety technology firm that has promoted AI software designed to detect sexual content and disable the device's camera when such content is identified. The approach would also draw on established filters already used by platforms such as Meta to detect potential nudity in imagery, in an effort to stop young people from taking or sharing intimate images. Nudification apps are designed to make it appear as if a person has been stripped of clothing in a photo or video, often with realistic results that can be weaponized against victims.
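To make the mechanism concrete, the sketch below shows, in rough outline, how a device-level filter of the kind described above might gate the saving or sharing of an image: a classifier scores the image for nudity and the action is blocked above a threshold. This is an illustrative assumption only; the classifier stub (`nsfw_probability`), the threshold, and the function names are hypothetical and do not reflect SafeToNet's or Meta's actual systems.

```python
# Minimal sketch of a device-level intimate-image filter (illustrative only).
# An image is scored by a nudity classifier; capture or sharing is blocked
# when the score crosses a threshold. The classifier is a hypothetical stub
# standing in for whatever trained model a vendor or platform would run.

from dataclasses import dataclass

BLOCK_THRESHOLD = 0.85  # assumed policy threshold; real systems would tune this


@dataclass
class FilterDecision:
    allowed: bool
    score: float
    reason: str


def nsfw_probability(image_bytes: bytes) -> float:
    """Hypothetical stand-in for an on-device nudity classifier.

    A real implementation would run a trained image model; here a fixed
    low score is returned so the sketch runs end to end.
    """
    return 0.05


def gate_capture(image_bytes: bytes) -> FilterDecision:
    """Decide whether an image may be saved or shared."""
    score = nsfw_probability(image_bytes)
    if score >= BLOCK_THRESHOLD:
        return FilterDecision(False, score, "blocked: likely intimate image")
    return FilterDecision(True, score, "allowed")


if __name__ == "__main__":
    decision = gate_capture(b"\x89PNG...")  # placeholder bytes, not a real photo
    print(decision)
```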
The crackdown arrives amid growing concern about the accessibility and misuse of nudification tools. The Children's Commissioner for England, Dame Rachel de Souza, urged a total ban on nudification apps in a report earlier this year, arguing that the technology enabling such images should be curtailed, not just the images themselves. Experts have warned that the availability of these apps could fuel coercive behaviour, harassment and grooming, and could contribute to child sexual abuse material (CSAM) if monetized or shared widely. The government said it would seek to outlaw AI tools that create or distribute CSAM and would move to make it impossible for children to take, share or view nude images on their phones.
Industry and safety charities welcomed the move with cautious optimism. Kerry Smith, chief executive of the Internet Watch Foundation, said the proposed nudification ban is a positive step and noted that "these apps have no reason to exist as a product" because they enable harm to vulnerable individuals and can fuel the darkest corners of the internet. She added that the measures should help reduce the spread and exploitation of intimate images involving young people. The NSPCC also welcomed the government's focus on safeguarding, but its director of strategy, Dr. Maria Neophytou, said the government could go further by mandating device-level protections and by making it easier for tech firms to identify and prevent the spread of CSAM in private messages and across their services. The charity argued for stronger safeguards across platforms to prevent abuse before it starts, including proactive detection and rapid removal of harmful content.
The government’s plan emphasizes a zero-tolerance stance toward AI-enabled sexual content that violates consent or invades privacy. Officials said the offences would apply to developers, distributors and platforms that enable nudification tools, and they stressed that enforcement would be stringent. In addition to criminal penalties, authorities said they would pursue regulatory and civil action against companies that fail to comply with safety-by-design requirements or that profit from the distribution of such apps. The broader objective is to embed protections within the design of technologies that power image editing and generation, aligning with the Online Safety Act’s framework while extending it to new AI capabilities.
The announcements underscore the government’s intention to address misogyny and online abuse through a combination of legal reform and technological safeguards. By pairing criminal penalties with industry collaboration, officials aim to create a deterrent effect and to make it harder for exploitative content to be used against vulnerable individuals. While the plan is rooted in protecting children and reducing harm, it also signals a recognition that the rapid development of AI tools requires proactive regulation to prevent misuse and to support responsible innovation. The policy received mixed reactions, with advocacy groups urging more comprehensive protections, though many agreed that aggressive action against nudification apps is a necessary step in the broader effort to create a safer online environment for women and girls.
The measures announced on Thursday reflect a broader push in technology and AI policy to regulate harmful outcomes while encouraging responsible use of powerful tools. If enacted, the nudification ban would add to an evolving legal landscape that already treats non-consensual deepfakes as criminal offences and seeks to reduce the non-consensual sharing of intimate imagery. As the government works with safety technology firms and platforms, observers will be watching how the new offences are defined, how penalties are applied, and whether enforcement can keep pace with rapid AI advances while preserving innovation and user safety.