The online safety bill is returning to parliament under the aegis of its fourth prime minister and seventh secretary of state since it was first proposed as an online harms white paper under Theresa May.
Each of those has been determined to leave their fingerprints on the legislation, which has swollen to encompass everything from age verification for pornography to the criminalisation of posting falsehoods online, and Rishi Sunak and the digital and culture secretary, Michelle Donelan, are no different.
Some of the changes to the bill, which was unceremoniously pulled from the agenda in early summer as the government cleared parliamentary time to launch its own confidence motion backing Boris Johnson, are simple additions. After the law commission recommended updating legislation covering nonconsensual intimate images, the Department for Digital, Culture, Media and Sport folded the changes into the bumper bill, announcing plans to criminalise “downblousing” and the creation of pornographic “deepfakes” without the subject’s consent.
But others reflect the contentious nature of the legislation, which faces a balancing act between the government’s desire to make the UK “the safest place to be online”, and its fear of appearing overly censorious or, worse still, “woke”.
On Tuesday, Donelan triumphantly announced that the latest version of the online safety bill would drop efforts to regulate content deemed “legal but harmful”. Earlier drafts of the bill had hit upon a canny way to please both sides of the debate: rather than requiring social media companies to remove certain types of content outright, the bill simply required them to declare a position on that material in their terms of service, and then enforce that position. In theory, a social media company could explicitly declare itself happy to host harmful material, and face no penalty for doing so.
But free speech groups, in and out of parliament, worried that the requirement would have a chilling effect, and social networks backed them up: few deliberately want harmful content on their platforms, but faced with a legal requirement to act on it or face penalties, they could end up being pushed to over-correct. On topics such as suicide or self-harm, aggressive over-moderation can cause real-world harm just as lax policies can.
The push against those regulations reached its height during the Tory leadership contest, when the online safety bill was caricatured by its opponents, such as the trade secretary, Kemi Badenoch, as legislating for hurt feelings. And so, upon its reintroduction, the “legal but harmful” provisions were stripped out, at least for content aimed at adults. Then the government went further: in an effort to burnish its free speech credentials, it added new legal requirements forcing not over-moderation but under-moderation.
“Companies will not be able to remove or restrict legal content, or suspend or ban a user, unless the circumstances for doing this are clearly set out in their terms of service or are against the law,” DCMS announced. The rules, described as a “consumer friendly ‘triple shield’”, could prevent companies from acting rapidly to ensure the health of their platforms, and leave them facing legal risk if they take down content that they, and other users, would rather see removed.
Some of the changes to the bill are deep and technical. Others seem to be simple headline-chasing. The government has dropped the offence of “harmful communications” from the bill, after it became a lightning rod for criticism from Badenoch and others.
But removing the harmful communications offence has also meant cancelling plans to repeal the two offences it was due to replace: parts of the Malicious Communications Act and the Communications Act 2003, both far broader than the ban on harmful communications would have been. The harmful communications offence required that a message cause “serious distress”; the Malicious Communications Act requires only “distress”, while the Communications Act 2003 is softer still, banning messages sent “for the purpose of causing annoyance, inconvenience or needless anxiety”. Those offences will now remain on the books indefinitely.
But becoming part of the psychodrama of the Conservative party is the only way legislative scrutiny can occur in this parliament. The rest of this monster bill, stretching over hundreds of pages and redefining the landscape of internet regulation for a generation, has barely been discussed in public at all. Proposals ranging from an attack on end-to-end encryption to the anointing of Ofcom as a first-of-its-kind internet regulator are being treated as technocratic tweaks, but were they given the time they deserve, the legislative process would likely outlast a fifth prime minister as well.