The joint Department for Digital, Culture, Media and Sport (DCMS) and Home Office report echoes recent calls for tougher regulation from the House of Lords' 'Regulating in a Digital World' report and the DCMS report on fake news and disinformation.
The publication calls for digital companies to outline the measures they are taking to ensure a duty of care by signing up to a code of practice, and sets out a series of proposals to tackle the prevalence of harmful content, including 'false or misleading information'. These measures include requiring social media companies to submit annual 'transparency reports' on the levels of such content on their platforms.
Jon Gerlis, CIPR senior policy officer, said the paper accepts that self-regulation has failed, and that it is therefore right for the law to address this in a way that keeps pace with advances in the digital world.
However, PRCA director general Francis Ingham raised concerns about freedom of speech and the subjective nature of interpreting content and disinformation. "The question of online platform regulation touches on sensitive subjects such as freedom of speech and our shared desire to keep people – especially children – safe online," he said.
"The devil is in the detail here and the Government itself has argued that voluntary action from the industry has not gone far enough. While most people would support the intention behind this White Paper, if all of these online harms are not clearly defined then the consequence would be far larger than intended."
Rob Stone, digital strategy and innovation director at 3 Monkeys Zeno, agreed that while something needs to be done to protect people from harmful content online, there are two major issues with the proposal. First, it could strengthen the big tech firms' grip on our online attention, limit competition and create serious problems for smaller alternatives. And second, while many of the examples in the White Paper, such as those involving child exploitation or terrorist content, have clear-cut definitions, those around trolling, disinformation or cyberbullying are less clear.
"The impact on comms could be significant," Stone said. "Would a witty Twitter retort from a brand be classed as cyberbullying? Could Facebook pull brand content because it has classed it as ‘disinformation’? Perhaps these are extreme examples, but in the future brands may find content approval on the major platforms more challenging and it will be the likes of Facebook and YouTube making those calls," he added.
Pledges to protect
Facebook, Twitter and Google all reiterated their commitment to keeping people safe online. Katy Minshall, head of public policy for Twitter UK, said the platform has been an active participant in the discussion between industry and the UK Government on how to keep people safe online and was already committed to prioritising the safety of its users. This was evidenced, she claimed, by the introduction of over 70 changes to its policies and processes in the past year to improve the health and safety of the public conversation online.
Rebecca Stimson, Facebook’s head of UK public policy, said: "While we’ve tripled the team working to identify harmful content and protect people to 30,000 and invested heavily in technology to help prevent abuse of our platform, we know there is much more to do. We are continually reviewing our policies with experts and working to ensure our reporting, artificial intelligence and machine-learning systems remain industry-leading.
"New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective."
Google’s public policy manager Claire Lilley added that the company hadn’t waited for regulation but had already created new technology, hired experts, and ensured that its policies were fit for the evolving challenges faced online. "Our work has the most impact when companies, Government and communities work together," said Lilley. "We look forward to looking at the detail of these suggestions and working in partnership to ensure a free, open and safer internet that works for everyone."
However, Lord Gilbert of Panteg, chairman of the Lords Communications Committee, added: "While the internet has clearly created huge benefits, self-regulation by online platforms has not gone far enough to address abuses such as hate speech, the dissemination of fake news and extremist content. Major platforms have failed to invest in their moderation systems, leaving moderators overstretched and inadequately trained. There is little recourse for a user to seek to reverse a moderation decision against them. A duty of care is therefore needed to implement minimum standards and to give effect to human rights, including freedom of expression.
"The need for further regulation of the digital world goes beyond online harms, however," Lord Gilbert added. "A comprehensive new approach to regulation is needed to address the diverse range of challenges that the internet presents, such as misuse of personal data and the concentration of digital markets."