Exclusive: Experts have warned social media platforms must do more to stop non-consensual image abuse
The Metropolitan Police has recorded a 120 per cent rise in complaints about non-consensual intimate image abuse (NCII) since 2020, according to FOI data obtained by online safety provider Verifymy.
Some 1,766 NCII complaints were made last year in Greater London, a 16 per cent rise from 1,523 the year before, and more than double the 805 recorded in 2020, according to Metropolitan Police complaint data seen by The Independent.
The Internet Watch Foundation said just last week that AI-generated child sexual abuse videos had increased more than 260-fold in 2025 compared with the year before.
Video models, nudification apps, subscription platforms and agentic AI systems were enabling offenders to produce and distribute illegal content at scale, the watchdog warned, allowing them to manipulate images of real children and simulate explicit chats with child characters.
Social media platforms will have to remove any reported non-consensual intimate images within 48 hours under the new Crime and Policing Bill, which is currently in the final stages of the legislative process.
Those that fail to comply risk hefty fines or having their services blocked in the UK.
Nudification tools used for AI deepfakes will be banned under the new rules.
Victims of NCII will have up to three years to report the crime, up from the current six months.
While the Crime and Policing Bill looks set to crack down on NCII, experts are warning that platforms must do more to tackle the growing harm.
Emma Robert-Tissot, head of partnerships at Verifymy, said: “In an age of hyper-realistic image generation, everyone should have control over how their identity is used and represented online.
Consent management that supports this is no longer a technical consideration; it is a fundamental right.
“While content moderation plays an important role, it cannot identify all forms of non-consensual intimate image abuse, particularly as synthetic content becomes more advanced.
Platforms must therefore take a more holistic approach - combining prevention, consent and detection - to effectively tackle this growing harm.”
Commenting on the FOI findings, a Met Police spokesperson said tech firms needed to design out methods of NCII offending.
“Non‑consensual intimate image (NCII) abuse can have a devastating and lasting impact on victims.
The online world is changing rapidly, and reporting of this type of offending has increased significantly over the past five years,” they said.
“We continue to strengthen our response to tech-enabled abuse by bolstering specialist teams and investing in new technology.
This includes technology that allows officers to review large volumes of messaging and an NCII toolkit providing vital information on what this abuse looks like and its impact on victims to enable police to improve their response.
“While using technologies and working alongside our safeguarding partners to provide support for victims, we continue our call to tech firms to design out these methods of offending.”
A government spokesperson said: “Sharing or creating intimate images without consent is a vile crime and we are taking immediate action to tackle this growing issue.
“We have made the creation of intimate images without consent a crime with up to six months in prison and we are banning AI tools which generate deepfake sexual images of people without consent, with developers and suppliers facing up to three years in prison.”