
From Reaction to Prevention in AI Child Safety

By Marzia Zunino
Posted: 2025-11-17T20:26:54Z

When the Safeguard Becomes the Sentinel: How the UK Is Shifting from Reacting to AI-Powered Child Exploitation to Pre-Emptive Justice 


London, 12 November 2025 - The United Kingdom is preparing to take an unprecedented step in the fight against digital child exploitation. Under a proposed amendment to the Crime and Policing Bill, child-protection groups and approved technology organizations would be allowed to test artificial intelligence (AI) models before they are released to the public, a move intended to prevent these systems from generating child sexual abuse material (CSAM). 


Until now, the law has left a dangerous gap. Because creating or possessing CSAM is a criminal offence, developers and watchdogs such as the Internet Watch Foundation (IWF) have been unable to safely examine AI tools for weaknesses without technically breaking the law themselves. As a result, harmful material could only be detected and removed after it was already circulating online. The proposed change would remove that restriction, allowing experts to test and strengthen AI systems from the start, so that safety is built in rather than retrofitted after the fact. Technology Secretary Liz Kendall called the measure a major step forward in safeguarding children in the digital age, stressing that child protection must be “designed into AI systems, not bolted on as an afterthought.”1 Designated organizations, including the IWF and approved AI developers, would be authorized to test whether models have safeguards against producing sexual abuse imagery, non-consensual intimate content, or extreme pornography.


This urgency is also backed by data. According to the IWF, reports of AI-generated child sexual abuse material have more than doubled in the past year, from 199 cases between January and October 2024 to 426 in the same period of 2025.2 The nature of this content has grown more severe: “Category A” material, which involves penetrative or sadistic acts, now represents more than half of all confirmed cases.3 Analysts warn that many AI-generated images are now nearly indistinguishable from real photographs, blurring the line between simulation and crime. 


Recent investigations show how quickly the technology can be exploited. In August, the IWF discovered a hidden chatbot website that enabled users to generate sexualized images of children simply through written prompts.4 Other cases revealed deepfake-style videos of minors being shared across encrypted platforms, created with commercial image tools never intended for illegal purposes.5 These developments have placed the justice system in unfamiliar territory. Courts must now consider what it means to protect a victim who does not exist, when the image is synthetic yet the harm, intent, and exploitation are very real. For members of the judiciary, this legal and ethical shift raises important questions about how to define accountability in an era when technology can manufacture evidence of abuse.


Child-protection advocates generally support the proposal, though they warn it must go further. Rani Govender, policy manager at the NSPCC, said the government must make testing mandatory, not voluntary, to ensure that “safeguarding is a foundational part of product design.”6 The government has pledged to form an expert advisory group to oversee testing and to ensure that sensitive data is handled securely.7 As Kerry Smith, chief executive of the IWF, put it: “AI tools have made it possible for survivors to be victimized all over again with just a few clicks. The only solution is to make safety intrinsic to the technology itself.”8 While the UK’s initiative may be among the first of its kind, its influence could extend far beyond its borders. In an age when AI can replicate, distort, and weaponize imagery at scale, the country’s move represents something larger than policy: it is a declaration that justice must evolve as fast as the technology it seeks to control.



Endnotes 


  1. Department for Science, Innovation and Technology, “New Law to Tackle AI Child Abuse Images at Source as Reports More Than Double,” GOV.UK, 12 November 2025: “By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”
  2. Internet Watch Foundation, Annual Data Report 2025.
  3. UK Government, “New Law to Tackle AI Child Abuse Images at Source,” press release, GOV.UK, November 2025.
  4. Internet Watch Foundation, “Disturbing AI-generated child sexual abuse images found on hidden chatbot website,” August 2025.
  5. The Guardian, “Tech companies and UK child safety agencies to test AI tools’ ability to create abuse images,” November 2025.
  6. Rani Govender, policy manager for child safety online, NSPCC, quoted in “Law change set to allow AI testing to prevent creation of child sex abuse images,” The Independent, 12 November 2025.
  7. “Crime and Policing Bill Factsheet,” GOV.UK, 2025.
  8. Sky News, “New law could help tackle AI-generated child abuse at source, says watchdog,” November 2025.