UK Technology Firms and Child Protection Officials to Examine AI's Capability to Create Abuse Content

Tech firms and child safety organizations will receive permission to evaluate whether artificial intelligence tools can generate child exploitation material under recently introduced British laws.

Substantial Increase in AI-Generated Harmful Content

The announcement came as figures from a safety monitoring body showed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the authorities will permit designated AI developers and child protection groups to inspect AI models – the underlying technology behind chatbots and image-generation tools – and verify that they have sufficient safeguards to prevent them from producing images of child exploitation.

The measures are "ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, who noted: "Specialists, under rigorous conditions, can now detect the risk in AI systems early."

Tackling Regulatory Obstacles

The amendments have been introduced because it is illegal to create and possess child sexual abuse material (CSAM), meaning that AI developers and other parties have been unable to generate such content even as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to avert that problem by helping to halt the creation of such material at its origin.

Legislative Vehicle

The changes are being introduced by the authorities as revisions to the criminal justice legislation, which is also establishing a prohibition on possessing, producing or sharing AI models developed to generate child sexual abuse material.

Practical Consequences

This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I learn about young people facing blackmail online, it is a source of extreme frustration for me and of justified concern amongst parents," he stated.

Concerning Statistics

A prominent online safety organization reported that instances of AI-generated exploitation material – each of which can refer to an online page containing numerous images – had more than doubled so far this year.

Instances of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a crucial step to guarantee AI tools are secure before they are released," commented the chief executive of the online safety organization.

"AI tools have made it possible for survivors to be victimised repeatedly with just a few clicks, giving criminals the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Material which further exploits survivors' trauma, and makes children, especially girls, less safe both online and offline."

Support Session Data

Childline also released data on support interactions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to evaluate weight, physique and appearance
  • AI assistants discouraging children from talking to safe adults about harm
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline delivered 367 counselling interactions where AI, conversational AI and associated terms were discussed, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using AI assistants for support and AI therapeutic apps.

Marilyn Morgan