Should social media platforms be the arbiters of truth?

When social media platforms are proven direct accomplices in amplifying hate speech and fake news, they cannot abdicate responsibility and plead innocence.

Facebook’s Oversight Board recently ruled that the company was right to ban Donald Trump due to a clear violation of Facebook’s community standards. However, it also called out Facebook’s ‘indefinite’ ban and non-uniform moderation standards, recommending that the firm create a ‘proportionate’ procedure for equal application of moderation policies and revisit the ban within six months.

The Oversight Board is often dismissed as a way for Facebook to absolve itself of responsibility when controversial decisions over freedom of speech need to be made. Now that it has effectively punted the decision back to Facebook, an important question arises, one that social media companies have so far tried to skirt.

Should social media platforms be the arbiters of truth on the Internet, and if so, how far should they go?

From social sharing to social influencing

Most social media platforms originated as tools to stay in touch, make new friends and find entertainment. Facebook was built as a social networking service for Harvard students. Twitter was designed to make text messaging work across multiple channels and devices. Instagram was launched to share photos with friends.

Today, social media has evolved into a powerful platform to shape thoughts and ideas. With its easy accessibility and the capacity for instant global broadcasting, social media has given everyone a voice and a stage. However, the enduring debate around freedom of speech is not just whether all voices have an equal right to be heard, but also who should moderate that right.

Enabler vs arbiter

Historically, social media companies have been keen to distance themselves from this debate on content moderation, as it saddles them with enormous responsibilities and exposes them to repercussions. They often frame their reluctance to moderate or shut down problematic content as a duty; impartiality is necessary, they say, to safeguard freedom of speech.

But the right to freedom of speech comes with strong caveats: behaving responsibly online and respecting the rights of others. When social media platforms are proven direct accomplices in devastating events such as the Capitol insurrection, by amplifying hate speech and fake news, they cannot abdicate responsibility and plead innocence. It is disingenuous, at best, to champion free speech while refusing to acknowledge, let alone enforce, the responsible behaviour it demands.

Attempts at accountability

Big tech platforms are clearly aware that changes have to be brought about in this area, and some headway has been made. These platforms now have community engagement rules and channels for reporting offensive or problematic content, which moderation teams then review. They award authentic pages and profiles blue ‘verified’ badges to deter impersonation and fake news. Even Facebook’s aforementioned Oversight Board was meant to be an independent ‘Supreme Court’ providing a check on the company’s decisions.

But it is not enough. Being fully funded by Facebook means the Oversight Board cannot escape the public assumption of inherent bias, and its recent attempt to distance itself suggests it is aware of this. Elsewhere, there are horror stories aplenty about the traumatic conditions in which Facebook’s teams of moderators work, conditions that, among other ramifications, can make moderation inaccurate, inconsistent and ineffective.

Given the record-breaking billion-dollar revenues of these social media platforms, it’s hard to see these non-committal stances and half-hearted efforts as anything other than a business decision. Staying ‘neutral’ means a wider audience and thus more ad revenue. If top decision-makers such as Mark Zuckerberg and Jack Dorsey wanted their companies to do better, those changes would be visible. The fact that they are not speaks volumes.

With the prevalence of social media, online spaces are becoming eerie reflections of offline spaces. Social media companies should be held accountable to their users in the same way that public utility providers are. They can emulate how offline public spaces are built and design better digital spaces around regulations and norms of behaviour, determined collectively by regulators, stakeholders and users, for a healthier digital community.

Who will fill the void?

In late April this year, major football clubs, sporting bodies, players and athletes joined a four-day social media boycott to highlight the constant abuse and discrimination they endure on these platforms. The first line of the statement issued by the English Premier League (EPL) said simply: “Social media companies must do more to stop online abuse.”

Vitriol on social media towards sports figures and bodies is nothing new. Just check the comments on any post by any football club after an EPL match. Clubs have asked social media platforms to moderate this clearly abusive content for years, but the platforms have shown that they are either unwilling or unable to do so.

If social media companies refuse to step up, then who will? My bet is on governments, and that will come at a cost. We’re already seeing India require social media platforms to appoint local representatives and try to implement new rules that mandate traceability. Twitter’s offices in India were raided by police after the company put a ‘manipulated media’ label on tweets from members of the ruling party. The government of Belarus diverted a plane to arrest an outspoken dissident who used social media to criticise the government. Making governments the arbiters of truth can and will have serious ramifications for the neutrality and independence that social media companies so proudly protect.

The legal question

Much of the debate around ‘who should be the gatekeeper?’ boils down to legalities. Are these social media platforms publishers themselves, or does that role belong solely to the users who post content? The fact that the digital world is evolving faster than our legal frameworks can adapt makes this even more difficult to answer. Many of the issues we now face with social media are unprecedented.

It is fair, perhaps, to say that social media platforms should not necessarily be the sole and ultimate authority on truth. Ideally, internet regulation and moderation would emerge from a collaborative effort between the platforms, progressive governments and responsible users, underpinned by robust legal frameworks.

But while we navigate as a society towards this utopian understanding, what is clear is that social media companies can, and should, do more right now. They gave everyone a voice. Now they must take ownership of the consequences, both good and bad.

Pranav Rastogi is managing director of Redhill

