Blowing in the wind

How to deal with online misinformation is one of the most fraught problems we face in contemporary media. Australia knows this all too well from the debate surrounding the government’s failed attempt to pass the Combatting Misinformation and Disinformation Bill last year. Any attempt to regulate misinformation must contend with the need to protect freedom of expression, but also with the fact that digital platforms themselves restrict freedom of expression through their own content-moderation policies, often with little transparency or consistency.
With its stronger constitutional protections for free speech, the US has never seriously considered regulation to address misinformation. But constitutional protections don’t apply to private companies, which under Section 230 of the Communications Decency Act have the discretion to impose content policies as they see fit. In principle, this should insulate those policies from political influence. But Meta’s announcement in January of changes to its content policies – including that it would abandon its fact-checking program in the US – shows that discretion can easily sway with the political wind. In a statement on the changes, Mark Zuckerberg said that the US election result signalled a ‘cultural tipping point towards once again prioritising free speech’. The point is reinforced by the fact – sometimes missed in coverage of the announcement – that the changes apply, at least so far, only in the US.
Critically, the fact-checking program, instituted shortly after the 2016 US election, operates at arm’s length from Meta’s platforms, with decisions about misinformation devolved to independent organisations certified by the International Fact-Checking Network (IFCN). In this way, Meta effectively transfers its discretion to fact-checkers and avoids being the arbiter of truth. Fact-checkers use journalistic verification techniques, including expert consultation, to arrive at their decisions. This grounding in journalism and reliance on experts distinguish the program from crowd-sourced moderation such as community notes, which Meta will adopt in the US in lieu of the fact-checking program. But they also expose fact-checking to perceptions and allegations of elite-driven bias, which may undermine both the effectiveness and the legitimacy of expert fact-checking as a content-moderation tool, whether the bias is real or not.
In 2020, Meta further devolved its power over content with the creation of the Oversight Board, an independent body that considers escalated appeals on critical matters and makes policy recommendations to Meta. The board is managed by a trust and financed by an endowment from Meta. Since its inception, it has overturned Meta’s decisions in 78% of the cases it has taken on, and its recommendations have driven Meta towards greater transparency and consistency in its policies and decision-making.
Interestingly, the board’s explicit commitment to human rights principles has often led it to promote stronger protection of free expression than Meta’s own policies and decisions provide, particularly in the area of political discourse. Indeed, shortly after Meta’s announcement that it would abandon fact-checking in the US, the board issued a statement which, although circumspect, welcomed the decision and echoed Zuckerberg’s concerns about perceptions of political bias.
The ever-present potential for perceived bias means that moderating online content will always be difficult. But independence from both government and unchecked platform discretion is critical. The Oversight Board provides this independence, yet nothing stands in the way of Meta abandoning that project as well, at least in the US. In the EU, by contrast, the winds are blowing the other way. In October, the Oversight Board Trust announced the establishment of the Appeals Centre Europe, an independent dispute-resolution body whose remit extends to YouTube and TikTok as well as Facebook. The centre has been recognised under the Digital Services Act, which places general transparency and accountability obligations on very large platforms. Without such legislation, it seems very unlikely that we will see genuinely independent and accountable decision-making from digital platforms, the kind that can resist shifts in the political wind. Australia would do well to take note.

Michael Davis,
CMT Research Fellow