Artificial intelligence has undoubtedly brought plenty of useful tools to the internet. But it has also handed one of the most horrific forms of abuse a grim new boost. Recent reporting and watchdog findings point to the same ugly pattern: generative AI is helping offenders create child sexual abuse imagery at greater scale.
That imagery is becoming increasingly realistic, and it is appearing in formats that platforms, regulators, and child-safety groups are finding harder to deal with.
How AI is making the scale worse and the content more extreme

Back in February, Reuters revealed that actionable reports of AI-generated child sexual abuse imagery had more than doubled over the past two years, while the Internet Watch Foundation later said it identified 8,029 AI-generated images and videos of child sexual abuse in 2025 alone. The same grim picture was laid out in a Bloomberg report on how generative AI is changing the child sexual abuse material (CSAM) landscape in the US.
Investigators aren’t just dealing with AI-generated pornographic images and videos anymore; they are also finding manipulated images of real children and even chatbot conversations in which offenders allegedly seek grooming advice or role-play sexual abuse. Meanwhile, law enforcement is burning time trying to figure out whether a child in an image is real, digitally altered, or entirely fake.
Real cases are getting more disturbing
The Bloomberg report points to a Minnesota case involving William Michael Haslach, a school lunch monitor and traffic guard accused of using AI tools to digitally undress children in photos he had taken at work. Federal agents identified more than 90 victims and found nearly 800 AI-generated abuse images on his devices. The case illustrates how offenders are increasingly turning everyday photos, including ones pulled from social media, into explicit material.
Investigators are drowning in volume and bad leads
The scale is getting ugly fast. Bloomberg reports that the National Center for Missing & Exploited Children (NCMEC) received 1.5 million AI-linked CSAM reports in 2025, up from 67,000 a year earlier and 4,700 in 2023. At the same time, investigators say automated moderation systems are generating a flood of junk tips, swamping already overstretched task forces. And every wrong call burns time that could have gone toward a child facing immediate harm.