by Suzanne Nossel
A couple of weeks ago, a federal judge banned communications between large parts of the federal government and social media platforms. The case was brought by two Republican attorneys general and several individuals who allege that the government has unconstitutionally censored conservatives’ online speech.
The issue of government interference in what social media platforms can publish is real. But the judge’s solution — widely halting contact between the government and Big Tech — is both legally dubious and practically dangerous. It’s important that online speech, irrespective of political viewpoint, flourish without the specter of censorship. But it’s also essential that the government be allowed to engage with social media giants to address manifest harms from online content.
The solution lies not in this ban, but in exposing government communications with tech companies to scrutiny so that both parties are held accountable for ensuring that their relationships serve the public interest.
The Biden administration has appealed Judge Terry A. Doughty’s 155-page ruling, which remains in effect as the case advances. Given the vast dominion social media companies hold over public discourse, real concerns exist about authorities potentially using platforms to punish critics, invade privacy and limit political speech. Yet instead of surgically targeting the most concerning facets of government engagement with tech companies, the judge’s sweeping injunction prohibits even routine and crucial exchanges between government and platforms regarding issues such as child safety and public health.
In fact, this unprecedented near-blanket ban is, in itself, a major infringement on freedom of speech. Although the order provides exceptions, including for content relating to criminal conduct and national security, those carve-outs are too narrow. They will not allow government officials to alert social media companies when, for example, false cures for a raging disease or other forms of dangerous quackery go viral. Nor will they allow the government a role in countering false information about election results. Tech watchdogs also fear the ruling will give online behemoths a convenient excuse to scale back costly efforts to remove disinformation, harassment and other harmful content from their platforms on the grounds that they cannot risk being seen as doing the government’s bidding.
The government has argued that, in many instances, its overtures took the form of flagging and conveying concerns about specific content rather than demanding its removal. But even friendly calls from officials can be read as intimidation.
The Meta Oversight Board, on which I serve, is a body of independent experts commissioned by the company to review content moderation decisions. The board has documented how Facebook and Instagram privilege certain users, including government officials, by affording them more leeway for posts that would otherwise be swiftly removed for violating company standards. That such officials also enjoy special access to get problematic posts removed further illustrates how platforms can aid the powerful, often in hidden ways. Whereas ordinary users fighting online harassment or false information can feel as if they are shouting into a void, top officials more easily have their requests accommodated. While this unique influence can be used responsibly to protect citizens from online harms, it also poses risks.
Around the world, we’ve seen governments weaponize concepts including disinformation and fake news to silence critics. Tech platforms have sometimes been accessories to repression, focusing on keeping government interlocutors happy to prevent shutdowns and other legal troubles that could impair operations and cut profits.
The public has a right to hold government authorities and tech platforms accountable for how they collaborate. To do so, citizens need far greater visibility into these relationships. While certain tech companies, including Meta and Google, voluntarily disclose government content requests, as Doughty’s decision shows, dealings between officials and tech executives go beyond takedown demands for specific pieces of content.
Rather than cutting off these exchanges, regulators should impose transparency requirements that force companies to reveal the breadth of communications they receive from the government and how those contacts have affected content on the platforms. Subject to limited redactions on legal or national security grounds, such disclosures would help elucidate how government is influencing social media, and vice versa.
In the meantime, companies should expand their voluntary disclosures, allowing civil society and other watchdog organizations to assess whether such dealings are in the interests of users. Knowing that their encounters would be the subject of detailed public reports would help deter government officials from abusing their clout.
Doughty’s overbroad injunction should be overturned on appeal. At the same time, legitimate concerns about the ties between government and tech must be addressed. The best way to do that is to subject such interactions to the light of day.