The government is facing renewed pressure to strengthen online safety laws after rejecting key recommendations designed to curb the viral spread of misinformation, despite agreeing with most of MPs’ findings on the scale of the problem.
The Science, Innovation and Technology Committee today published the government and Ofcom’s responses to its July report, which concluded that the Online Safety Act (OSA) does not tackle the algorithmic amplification of false content and leaves users exposed to rapidly spreading misinformation — much of it supercharged by generative AI.
Both the government and Ofcom accepted the committee’s assessment that misinformation poses significant risks, yet ministers declined to adopt several major recommendations, including calls to extend online safety legislation to explicitly cover generative AI platforms. The committee argued such platforms are capable of spreading large volumes of false content and should be regulated in line with other high-risk online services.
The government rejected that proposal, insisting AI-generated content is already covered under the OSA — a position that contradicts Ofcom’s earlier testimony to the committee, in which the regulator said the legal status of generative AI was “not entirely clear” and suggested more work was needed.
MPs also warned that misinformation cannot be meaningfully addressed without confronting the digital advertising business models that incentivise social media companies to promote harmful content. The government acknowledged the link between advertising and amplification but refused to commit to reform, instead saying the issue would be kept “under review”.
Committee chair Dame Chi Onwurah MP criticised the government’s reluctance to take action. “If the government and Ofcom agree with our conclusions, why stop short of adopting our recommendations?” she said. “The committee is not convinced by the argument that the OSA already covers generative AI. The technology is evolving far faster than the legislation, and more will clearly need to be done.”
She added that failure to tackle the monetisation of harmful content leaves a major loophole: “Without addressing the advertising-based models that incentivise platforms to algorithmically amplify misinformation, how can we stop it?”
Onwurah warned that complacency poses real risks to public safety. “It is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated,” she said. “The government urgently needs to plug the gaps in the Online Safety Act before further harm occurs.”