British Tech Companies and Child Protection Agencies to Test AI's Ability to Create Abuse Content

Technology companies and child safety agencies will receive authority to evaluate whether AI tools can produce child abuse images under new British laws.

Significant Rise in AI-Generated Illegal Material

The announcement came as a safety monitoring body revealed that cases of AI-generated CSAM have increased dramatically in the past year, more than doubling from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will permit designated AI companies and child protection organizations to examine AI models – the underlying technology behind conversational AI and image generators – and verify they have sufficient safeguards to stop them from producing images of child sexual abuse.

"This is ultimately about stopping exploitation before it happens," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems early."

Addressing Legal Obstacles

The changes have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and others cannot generate such content even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This legislation is designed to prevent that problem by helping to halt the production of such material at its source.

Legislative Structure

The changes are being introduced by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or sharing AI systems developed to generate child sexual abuse material.

Practical Impact

This week, the minister toured the London headquarters of Childline and heard a mock-up call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I hear about children facing blackmail online, it is a source of intense anger in me and justified anger amongst families," he stated.

Alarming Data

A leading online safety organization reported that cases of AI-generated abuse content – where a single case can be a webpage containing numerous files – had more than doubled so far this year.

Instances of the most severe content – the most serious category of exploitation – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025
  • Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are released," commented the head of the internet monitoring organization.

"AI tools have made it so victims can be targeted repeatedly with just a few simple actions, giving offenders the ability to make possibly limitless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' trauma, and makes children, particularly girls, less safe both online and offline."

Support Interaction Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations include:

  • Using AI to assess body size and appearance
  • Chatbots dissuading young people from speaking to trusted adults about harm
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and related topics were discussed – significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Mark Sanchez