The amount of AI-generated child sexual abuse material found online rose by 14% last year, with the majority of videos showing the most extreme type of content, according to a safety watchdog.
The Internet Watch Foundation said it identified 8,029 AI-made images and videos of realistic child sexual abuse material (CSAM) in 2025. It added that there had been a more than 260-fold increase in videos.
The IWF said 65% of the 3,443 videos were classified as category A, the term for the most severe material under UK law. The corresponding figure for non-AI videos was 43%, said the watchdog, showing that the technology was being used to create more violent content.
Kerry Smith, the chief executive of the IWF, said: “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.”
One IWF analyst said conversations between paedophiles on the dark web showed innovations in the technology were “regarded with delight” by users of CSAM. The discussions centre on the increasingly realistic outputs of AI systems and, as the technology improves, its ability to add audio to video or to convincingly manipulate imagery of a real child known to an offender.
The UK-based IWF operates a hotline and has a global remit to monitor child sexual abuse content. It said offenders were also discussing the possibilities for using “agentic” systems, which can carry out tasks autonomously.
Tech companies and child protection agencies are being given the power in the UK to test whether AI tools can produce CSAM, a move that ministers said last year was intended to stop abuse before it happens.
Under the change, the government will give designated AI companies and child safety organisations permission to examine generative artificial intelligence models – the underlying technology for chatbots such as ChatGPT and image generators such as Google’s Veo 3 – and ensure they have safeguards to prevent them from creating such material.
“Children, victims and survivors cannot afford for us to be complacent,” said Smith. “New technology must be held to the highest standard. In some cases, lives are on the line.”
The amount of AI-generated CSAM verified by the IWF has risen sharply as the proficiency and availability of the systems have increased, with videos rising especially steeply.
The IWF also published polling that showed eight out of 10 UK adults wanted the UK government to introduce legislation that ensured AI systems were developed with safety as a priority and “future-proofed from causing harm”. Last year, the government announced a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.