The Internet Watch Foundation said AI-generated child pornography is more extreme and violent than what has been made in the past
Published on
24/03/2026 - 11:47 GMT+1
AI-generated content is more explicit, extreme and complex than other types of child pornography that have been seen in the past, says the Internet Watch Foundation (IWF).
Artificial intelligence-generated imagery depicting the sexual abuse of children surged by 14 percent in 2025, as investigators face growing difficulty distinguishing synthetic content from real photographs, according to a new report.
The IWF said this suggests that perpetrators are using AI tools to make more explicit, extreme and complex content than they were before.
“We now face a technological landscape that can generate infinite violations with unprecedented ease,” Kerry Smith, the IWF’s CEO, said in the report.
How are perpetrators using AI?
The study also sheds light on how offenders are actively developing and sharing tools.
Researchers observed discussions on the dark web where perpetrators trade and work together to develop custom AI models and databases that generate abusive material.
In one example, researchers identified an advertisement offering “custom courses” that promised to teach users how to create AI-generated images of teenagers.
“Single applications can now generate abusive imagery with minimal effort, removing the need for technical expertise and significantly lowering barriers to entry,” the report found.
In many cases, models require only a single reference image to produce child sexual content.
While AI is making it easier for anyone to create simple CSAM, the report said a few well-known creators with more advanced skills produce longer, more sophisticated material.
The extension, set to expire on April 3, is intended to give lawmakers time to agree on a long-term legal framework to combat child sexual abuse online.
In a press release, legislators said that any future measures must remain “proportional” and should apply only to content already flagged as potential child sexual material, instead of enabling surveillance of all encrypted conversations.
The IWF said it also wants the EU AI Act to be amended to label AI systems that can be used to generate child sexual content as “high risk.” Under the Act, a “high risk” designation would mean systems have to undergo more rigorous testing before being made available in the EU.
This designation would reduce the amount of AI-generated CSAM, the report added, because the tools would be more thoroughly tested before release.
Using AI for the sexual exploitation of children is already illegal under the EU AI Act, and the legislation bars any system explicitly designed for that purpose from being made available in the bloc.
Source: This article was originally published by Euronews