
The price of leaving Facebook unchecked

In August 2017, the Myanmar military, backed by civilian mobs, launched a coordinated attack against the Rohingya Muslims living in the country’s Rakhine State. The massacre — later deemed genocide by the United Nations and other international groups — resulted in the deaths of 6,700 Rohingya men, women and children. More than 200 villages were burned to the ground, and more than 700,000 Rohingya refugees fled the country.

This month, parallel lawsuits were filed in the U.S. and U.K. against Facebook (now Meta) for its role in spreading the hate speech that led to the ethnic cleansing of the Rohingya in primarily Buddhist Myanmar (formerly Burma).

Prior to 2010, internet access in Myanmar was limited, but when it became widely available, Facebook became the website everyone used to communicate and share information. Almost immediately, inflammatory posts began to appear that denigrated the Rohingya.

The BBC reported that Facebook received at least three warnings about troubling content: in 2013, from an Australian documentary maker; in 2014, from a doctoral student; and in 2015, from tech entrepreneur David Madden, who gave a presentation at Facebook’s headquarters about how the platform was being used to stir up hate.

In 2018, Facebook publicly admitted it was “too slow to prevent misinformation and hate.” Around the time Facebook made that statement, Reuters was still able to find 1,000 distinct posts from users in Myanmar denigrating or threatening violence against Rohingyas or other Muslims.  

And when Frances Haugen came forward this year and released the Facebook Papers, we learned that Facebook not only hadn’t learned its lesson but was actively choosing to amplify violent and divisive content because such content was more profitable.

Since the 2017 genocide in Myanmar, we’ve seen religious unrest in India that targeted Muslims — fueled, the Wall Street Journal reported, by hate-filled messages and posts on Facebook’s platforms, including WhatsApp — and the Jan. 6 riot at the U.S. Capitol, inspired by disinformation spread on Facebook and other social media.

Despite the numerous warnings — internal and external, as we now know — Meta consistently allowed its platforms’ algorithms to promote dangerous content and failed to remove flagged or reported posts, even though it knew that such extreme content was contributing to offline violence.

In the face of this, the question becomes: How much power should one company be permitted to wield? Can Meta claim to be nothing more than a neutral platform when its business model relies on promoting some content over other content? Can it be exempt from punishment when its choices cause real-world harm?

It has become painfully obvious that, left to its own devices, Meta will always choose profits over human well-being and safety, and that choice has led to violence and death. It must be regulated by a higher power — government.

But how much control should government have? (And which government(s)?) Where is the line between promoting the general welfare and violating the right to free speech?

To curb offline violence fueled by online activity, it must become less profitable for platforms to amplify highly engaging but dangerous content. The simplest answer seems to lie in revising the laws, such as Section 230 in the U.S., that shield platforms like Facebook, Twitter and TikTok from liability for their users’ posts.

If platforms can be held responsible when their users incite violence — here and abroad — companies are far more likely to keep hate speech and disinformation from proliferating unchecked. For that to happen, governments must coordinate internationally to create appropriate oversight that doesn’t unduly impede free speech.