The Australian Government’s proposed new laws to crack down on misinformation and disinformation have drawn intense criticism for their potential to restrict free expression and political dissent, paving the way for a digital censorship regime reminiscent of Soviet Lysenkoism.
Under the draft legislation, the Australian Communications and Media Authority (ACMA) will gain considerably expanded regulatory powers to “combat misinformation and disinformation,” which ACMA says poses a “threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy.”
Digital platforms will be required to share information with ACMA on demand and to implement stronger systems and processes for handling misinformation and disinformation.
ACMA will be empowered to devise and enforce digital codes with a “graduated set of tools” including infringement notices, remedial directions, injunctions and civil penalties, with fines of up to $550,000 (individuals) and $2.75 million (corporations). Criminal penalties, including imprisonment, may apply in extreme cases.
Controversially, the government will be exempt from the proposed laws, as will professional news outlets, meaning ACMA will not compel platforms to police misinformation and disinformation disseminated by official government or news sources.
As the government and professional news outlets have been, and continue to be, a primary source of online misinformation and disinformation, it is unclear whether the proposed laws will meaningfully reduce online misinformation and disinformation. Rather, the legislation will enable the proliferation of official narratives, whether true, false or misleading, while quashing the opportunity for dissenting narratives to compete.
Faced with the threat of penalty, digital platforms will play it safe. This means that for the purposes of content moderation, platforms will treat the official position as the ‘true’ position, and contradictory information as ‘misinformation.’
Some platforms already do this. For example, YouTube recently removed a video of MP John Ruddick’s maiden speech to the New South Wales Parliament on the grounds that it contained ‘medical misinformation,’ which YouTube defines as any information that “contradicts local health authorities’ or the World Health Organization’s (WHO) medical information about COVID-19.”
YouTube has since expanded this policy to encompass a wider range of “specific health conditions and substances,” though it does not publish a complete list of these conditions and substances. Under ACMA’s proposed laws, digital platforms will be compelled to take a similar line.
This flawed logic underpins much of the current academic misinformation research, including the University of Canberra study that informed the development of ACMA’s draft legislation. Researchers asked respondents to agree or disagree with a range of statements, from the utility of masks in preventing Covid infection and transmission to whether Covid vaccines are safe. Where respondents disagreed with the official advice, they were categorised as ‘believing misinformation,’ regardless of the contestability of the statements.
The potential for such circular definitions of misinformation and disinformation to escalate the censorship of true information and valid expression on digital platforms is obvious.
Free expression has traditionally been considered essential to the functioning of liberal democratic societies, in which claims to truth are argued in public squares. Under ACMA’s bill, the adjudication of what is (and is not) misinformation and disinformation will fall to ‘fact-checkers,’ AI, and other moderation tools employed by digital platforms, all working to the better-safe-than-sorry default of bolstering the official position against contradictory ‘misinformation.’