Tech companies and UK child safety agencies to test AI tools’ ability to create abuse images
The Guardian
—
12/11/2025
The number of AI-generated child sexual abuse images is rising rapidly. But under a new UK law, tech companies and child safety agencies are joining forces: designated experts will be given legal permission to test AI models and proactively screen them for CSAM risk, rather than waiting for illegal content to appear.

The law aims to help developers build and verify safety controls, and to enable regulators and child-protection groups to evaluate models under strict conditions. Still, many questions remain, including who will be granted testing access, what conditions and oversight will apply, how any illegal material generated during testing will be securely handled and destroyed, how audits will be made transparent, and what assurances will be given that testing itself does not enable misuse.