Last month, I noted that the Centers for Disease Control and Prevention had repeatedly exaggerated the scientific evidence supporting face mask mandates during the COVID-19 pandemic. Facebook attached a warning to that column, saying it was “missing context” and “could mislead people.”
According to an alliance of social media platforms, government-funded organizations, and federal officials that journalist Michael Shellenberger calls the “censorship-industrial complex,” I had committed the offense of “malinformation.” Unlike “disinformation,” which is intentionally misleading, or “misinformation,” which is erroneous, “malinformation” is true but inconvenient.
As illustrated by internal Twitter communications that journalist Matt Taibbi highlighted last week, malinformation can include emails from government officials that undermine their credibility and “true content which might promote vaccine hesitancy.” The latter category encompasses accurate reports of “breakthrough infections” among people vaccinated against COVID-19, accounts of “true vaccine side effects,” objections to vaccine mandates, criticism of politicians, and citations of peer-reviewed research on naturally acquired immunity.
Disinformation and misinformation have always been contested categories, defined by the fallible and frequently subjective judgments of public officials and other government-endorsed experts. But malinformation is even more clearly in the eye of the beholder, since it is defined not by its alleged inaccuracy but by its perceived threat to public health, democracy, or national security, which often amounts to nothing more than questioning the wisdom, honesty, or authority of those experts.
Taibbi’s recent revelations focused on the work of the Virality Project, which the taxpayer-subsidized Stanford Internet Observatory (SIO) launched in 2021. Although Renee DiResta, the SIO’s research manager, concedes that “misinformation is ultimately speech,” meaning the government cannot directly suppress it, she says the threat it poses “require[s] that social media platforms, independent researchers, and the government work together as partners in the fight.”
That sort of collaboration raises obvious free speech concerns. If platforms like Twitter and Facebook were independently making these assessments, their editorial discretion would be protected by the First Amendment. But the picture looks different when government officials, including the president, the surgeon general, members of Congress, and representatives of public health and law enforcement agencies, publicly and privately chastise social media companies for not doing enough to suppress speech they view as dangerous.
Such meddling is especially alarming when it includes specific “requests” to remove content, make it less accessible or banish particular users. Even without explicit extortion, those requests are tantamount to commands because they are made against a backdrop of threats to punish recalcitrant platforms.
The threats include antitrust action, increased liability for user-posted content, and other “legal and regulatory measures.” Surgeon General Vivek Murthy said such measures might be necessary when he demanded a “whole-of-society” effort to combat the “urgent threat” posed by “health misinformation.”
In a federal lawsuit filed last year, the attorneys general of Missouri and Louisiana, joined by scientists who ran afoul of the ever-expanding crusade against disinformation, misinformation and malinformation, argue that such pressure violates the First Amendment. This week, Terry A. Doughty, a federal judge in Louisiana, allowed that lawsuit to proceed, saying the plaintiffs had adequately alleged “significant encouragement and coercion that converts the otherwise private conduct of censorship on social media platforms into state action.”
Doughty added that the plaintiffs “have plausibly alleged state action under the theories of joint participation, entwinement, and the combining of factors such as subsidization, authorization, and encouragement.” Based on that analysis, he ruled that the plaintiffs “plausibly state a claim for violation of the First Amendment via government-induced censorship.”
Whatever the ultimate outcome of that case, Congress can take steps to discourage censorship by proxy. Shellenberger argues that it should stop funding groups like the SIO and “mandate instant reporting of all communications between government officials and contractors with social media executives relating to content moderation.”
The interference that Shellenberger describes should not be a partisan issue. It should trouble anyone who prefers open inquiry and debate to covert government manipulation of online speech.