
Tech companies will be legally required to remove non-consensual intimate images within 48 hours, or face heavy fines and potential UK blocks, under new measures unveiled by the Prime Minister.
The proposals, introduced through an amendment to the Crime and Policing Bill, require platforms to act swiftly once content is flagged.
Companies that fail to comply could be fined up to 10 per cent of their qualifying worldwide revenue or have their services restricted in the UK.
Sir Keir Starmer said the reforms were part of a broader effort to tackle violence against women and girls online, weeks after Elon Musk’s controversial chatbot, Grok, was used to generate sexualised images of women and minors without consent.
“The online world is the frontline of the 21st century battle against violence against women and girls,” he explained.
“That’s why my government is taking urgent action against chatbots and ‘nudification’ tools. Today we are going further, putting companies on notice so that any non-consensual image is taken down in under 48 hours.”
Under the new plans, victims need only report an image once; platforms will then have to remove the content wherever it appears and prevent it from being re-uploaded.
Watchdog Ofcom is considering digital tagging systems so that images shared without consent are automatically detected and blocked, mirroring systems already used to remove child sexual abuse material.
Creating or sharing non-consensual intimate images is already a criminal offence in the UK, including AI-generated deepfakes.
A parliamentary report published last year recorded a 20.9 per cent increase in reports of intimate image abuse in 2024.
Another, separate government review found young men and boys are increasingly targeted for financial sexual extortion, often referred to as “sextortion”.
Speaking on BBC Breakfast, Starmer said the new rule would mean victims would not have to play “whack-a-mole” as images resurface on different sites.
“Tech companies are already under that duty when it comes to terrorist material so it can be done. It’s a known mechanism,” he said.
Tech secretary Liz Kendall added: “The days of tech firms having a free pass are over. No woman should have to chase platform after platform, waiting days for an image to come down. Under this government, you report once and you’re protected everywhere.”
AI tools and social media age under review
The changes land just days after Starmer confirmed that AI chatbots, including xAI’s Grok, Google’s Gemini, and OpenAI’s ChatGPT, will be explicitly brought within the scope of the Online Safety Act.
“No platform gets a free pass,” he said, after Ofcom made urgent contact with X over Grok’s outputs.
The Act was originally designed around user-to-user platforms, but ministers argue it must now cover systems that generate content directly.
Alongside the takedown measures, the government is consulting on whether to introduce a minimum age for social media use, potentially banning under-16s from holding accounts.
Australia began enforcing a similar ban in December, with fines of up to £26.5m for non-compliant platforms.
Daniela Hathorn, senior market analyst at Capital.com, said earlier this week that measures targeting specific UK platform features were “likely to have limited – if any – impact on markets”, noting that major tech firms derive only a small fraction of global revenues from the UK.
Andy Lulham, chief operating officer at online safety provider Verifymy, said: "Fast-tracking the removal of non-consensual intimate images will be a welcome addition to the Crime and Policing Bill. Fining or blocking services that miss the 48-hour window should ensure the new law has teeth."
“Taking down images, however, is only half the battle. Platforms, including AI systems, must be safe-by-design with robust guardrails to prevent illegal or harmful material from being created in the first place.”
The Crime and Policing Bill is currently progressing through the House of Lords.
Ministers claim it will strengthen enforcement powers and ensure intimate image abuse is treated with the same seriousness as child sexual abuse material and terrorism content.