British Technology Firms and Child Safety Agencies to Test AI's Capability to Create Exploitation Images
Tech firms and child protection organizations will be granted permission to assess whether AI systems can produce child abuse material under recently introduced British laws.
Significant Rise in AI-Generated Harmful Content
The announcement came alongside findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have risen sharply in the past year, from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, the government will permit designated AI developers and child protection groups to examine AI systems – the underlying technology for conversational AI and visual AI tools – to check that they have adequate safeguards to prevent them from producing images of child sexual abuse.
"This is ultimately about stopping exploitation before it occurs," declared the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now identify the danger in AI systems early."
Addressing Legal Obstacles
The amendments address a legal gap: because it is illegal to produce and possess CSAM, AI developers and other parties could not generate such images even as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
This law is designed to prevent that problem by helping to halt the creation of those images at source.
Legal Framework
The changes are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a ban on possessing, creating or sharing AI models designed to create child sexual abuse material.
Real-World Impact
This week, the official toured the London base of Childline and listened to a simulated call to advisors involving a report of AI-based abuse. The call portrayed a teenager requesting help after being extorted with a sexualised deepfake of themselves, constructed using AI.
"When I learn about young people facing extortion online, it fills me with extreme anger, and parents are rightly concerned," he stated.
Alarming Data
A prominent internet monitoring foundation stated that cases of AI-generated abuse material – where each case may be an online page containing multiple files – had more than doubled so far this year.
Instances of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of prohibited AI depictions in 2025
- Depictions of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to ensure AI products are safe before they are launched," commented the chief executive of the internet monitoring organization.
"AI tools have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the ability to create potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Content which further exploits survivors' suffering, and renders children, particularly female children, more vulnerable online and offline."
Counseling Interaction Information
The children's helpline also published details of counselling sessions where AI was mentioned. AI-related risks raised in the conversations include:
- Employing AI to rate body size, physique and appearance
- AI assistants dissuading young people from consulting safe guardians about abuse
- Being bullied online with AI-generated content
- Online extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 support interactions where AI, conversational AI and related topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.