Pandemic Speech

By Sancho McCann

An ex­am­ple mis­in­for­ma­tion warn­ing used by Twitter.

Social-media platforms have adopted an unprecedented approach to misinformation relating to COVID-19: they are now willing to delete content that is false. This article situates COVID-19 misinformation in the spectrum of hate speech, misinformation, and disinformation. It then explores the social-media platforms’ approaches to policing speech before and during the COVID-19 pandemic through the dual lenses of the American First Amendment and the Canadian Charter. Social-media platforms appear to be making their content decisions under a framework more akin to that from Canada’s Charter than from the American First Amendment. Further, it is possible, in admittedly unlikely circumstances, that Canada could require social-media platforms to perform content-based deletion of specific misinformation during a pandemic.

This article was prepared in April 2020 for my Communications Law course at UBC’s Peter A. Allard School of Law. I thank Professor Festinger and my classmates for the valuable discussions about media and communications. The article is already dated. When I wrote it, @realdonaldtrump still had a Twitter account and Facebook’s Oversight Board was not yet operational. I am most skeptical of my Section 1 analysis at the end of the article, especially in light of how Facebook’s Oversight Board recently applied international human rights standards to overturn Facebook’s deletion of COVID-19 misinformation.

Introduction

On 27 March 2020, Twitter deleted a tweet written by Rudy Giuliani that contained a false claim that hydroxychloroquine was “100% effective” at treating COVID-19. On 30 March 2020, Facebook, Twitter, and YouTube deleted videos of the Brazilian President, Jair Bolsonaro, that claimed “hydroxychloroquine is working in all places.” These deletions reflect an unprecedented change in the way major social networks treat misinformation.

In the ear­ly stages of the COVID-19 pan­dem­ic, Twitter, Facebook, YouTube, Google, and oth­ers joint­ly com­mit­ted to com­bat COVID-19 mis­in­for­ma­tion. Twitter and Facebook broad­ened their definitions of harm to in­clude COVID-19 mis­in­for­ma­tion. For ex­am­ple, Twitter now rec­og­nizes as harm­ful “con­tent that goes di­rect­ly against guid­ance from au­thor­i­ta­tive sources of glob­al and lo­cal pub­lic health in­for­ma­tion.” Facebook is de­vi­at­ing from its long-stand­ing prac­tice of not fact-check­ing politi­cians. These changes are a specific re­sponse to COVID-19. These plat­forms have nev­er be­fore had a pol­i­cy to delete pseudoscientific med­ical mis­in­for­ma­tion (e.g. claims that home­opa­thy or acupunc­ture are effective, or anti-vac­ci­na­tion pro­pa­gan­da). Only re­cent­ly, in late 2019, did Facebook be­gin down­rank­ing—not delet­ing—con­tent that con­tained ex­ag­ger­at­ed health claims (e.g. “mir­a­cle cure[s]”) or that ad­ver­tised a health prod­uct.

In this ar­ti­cle, I sit­u­ate COVID-19 mis­in­for­ma­tion with­in a spec­trum of hate speech, dis­in­for­ma­tion, and mis­in­for­ma­tion. I ex­plain so­cial me­dia plat­forms’ ap­proach to non-COVID-19 mis­in­for­ma­tion through a mod­el of self-gov­er­nance that has un­til now ap­peared root­ed in American First Amendment val­ues. However, the tac­tics that the plat­forms have adopt­ed with re­spect to COVID-19 mis­in­for­ma­tion re­veal that their ap­proach to polic­ing ex­pres­sion is ac­tu­al­ly more akin to that found in Canada un­der ss. 1 and 2(b) of the Charter of Rights and Freedoms. When de­cid­ing to adopt new poli­cies in re­sponse to COVID-19, the plat­forms are per­form­ing a bal­anc­ing ex­er­cise rather than adopt­ing an ab­so­lutist ap­proach to free ex­pres­sion on their plat­forms.

I present the fac­tors that the plat­forms are like­ly weigh­ing dur­ing this bal­anc­ing ex­er­cise, draw­ing from the fac­tors that are rel­e­vant to a s. 1 analy­sis when the gov­ern­ment in­fringes a per­son’s right to free ex­pres­sion in Canada. The risk of harm ap­pears to play a promi­nent role in the plat­forms’ de­ci­sion-mak­ing process­es. Yet the risk of harm is present in non-COVID-19 mis­in­for­ma­tion as well. I ex­plore the con­cep­tion of harm that plat­forms might be adopt­ing in their bal­anc­ing ex­er­cise that would ex­plain their hands-off ap­proach un­til now.

Having made the de­ci­sion to po­lice this mis­in­for­ma­tion, the plat­forms are now faced with a task they have in the past claimed is in­sur­mount­able: cat­e­go­riz­ing claims on their plat­form as true or false—per­mis­si­ble or im­per­mis­si­ble. This in­volves val­ue-laden de­ci­sions about what en­ti­ties to con­sid­er au­thor­i­ta­tive, dis­tin­guish­ing claims of fact from opin­ion, and as­sess­ing the risk of harm. I will present their ap­proach to these is­sues.

Last, I con­sid­er whether Canada could re­quire plat­forms to po­lice COVID-19 mis­in­for­ma­tion or mis­in­for­ma­tion more broad­ly. While this would al­most cer­tain­ly not be an op­tion avail­able to US leg­is­la­tors, I con­clude that Canada’s free-ex­pres­sion ju­rispru­dence leaves open the pos­si­bil­i­ty for Canadian leg­is­la­tors to re­quire plat­forms to in­ter­vene—to delete con­tent—in times of dis­crete emer­gency.

COVID-19 Misinformation

I refer to the expression being blocked under these policies as misinformation. I adopt a definition of misinformation that assumes only the falsehood of the statement. Misinformation is broader than, but includes, disinformation, which implies an intentional effort to deceive. This is a commonly accepted distinction, adopted by Renee DiResta and Facebook. Given how difficult it is for platforms to know the intent of a speaker, they are necessarily disregarding intent in their treatment of COVID-19 misinformation. Misinformation is also different from hate speech. While hate speech can certainly include misinformation, the reason platforms target hate speech is not because it is false.

Platforms are categorizing COVID-19 misinformation based on its falsehood. This is a content-based categorization and is a novel aspect of the platforms’ response to COVID-19 misinformation. Platforms have generally not deleted expression based on its falsehood. There are two notable exceptions. Pinterest prohibits “promotion of false cures for terminal or chronic illnesses and anti-vaccination advice.” And (only since February 2020) YouTube prohibits a small category of political-process misinformation. Platforms have instead relied on proxy criteria (like the identity of the actor or the presence of deceptive behaviours) to delete likely disinformation. For example, Facebook has had little issue removing posts attributed to a “well-funded military and intelligence apparatus” (e.g. Guccifer 2.0). And Twitter deletes fake accounts and bots in order to stamp out “faux-organic” content.

What is new about the plat­forms’ dele­tion of COVID-19 mis­in­for­ma­tion is that they are dis­tin­guish­ing be­tween true and false. Under these new poli­cies, it does not mat­ter who makes the false claim; it does not mat­ter whether the claim is part of a de­cep­tion cam­paign. Facebook and Twitter have nev­er be­fore delet­ed in­for­ma­tion be­cause it was false (YouTube has, and only since February 2020).

But in the con­text of a nov­el glob­al pan­dem­ic, full of scientific un­cer­tain­ty, what does it mean for a claim to be false? Twitter has said that it will “pri­or­i­tiz[e] re­mov­ing con­tent when it has a clear call to ac­tion that could di­rect­ly pose a risk to peo­ple’s health or well-be­ing” but that it will not “lim­it good faith dis­cus­sion or ex­press­ing hope about on­go­ing stud­ies re­lat­ed to po­ten­tial med­ical in­ter­ven­tions that show promise.” They will re­move “[d]enial of es­tab­lished scientific facts about trans­mis­sion dur­ing the in­cu­ba­tion pe­ri­od or trans­mis­sion guid­ance from glob­al and lo­cal health au­thor­i­ties.” Facebook will take down con­tent that is “prov­ably false”, that “has been flagged by a glob­al health ex­pert like the [World Health Organization (WHO)],” and “could lead to im­mi­nent harm.” YouTube will delete “any con­tent that dis­putes the ex­is­tence or trans­mis­sion of Covid-19, as de­scribed by the WHO and lo­cal health au­thor­i­ties.” But “[f]or bor­der­line con­tent that could mis­in­form users in harm­ful ways,” they only re­duce rec­om­men­da­tions of that con­tent.

These policy statements show three things. First, the platforms are adopting the fact–opinion distinction familiar from defamation law. Second, the statements reveal the harm-based motivation for adopting this more proactive approach to COVID-19 misinformation. Third, the platforms have all identified authoritative sources of the truth for the purpose of their policies.

Non-COVID-19 Misinformation

These policies are at odds with the platforms’ traditional approach to misinformation. While platforms have had policies against “gratuitous violence” and pornography that are in line with obscenity norms in American media, they have not attempted to divide truth from fiction or fair from unfair. Twitter took such a hands-off approach that it was called “the free speech wing of the free speech party.” Kate Klonick hypothesizes that the “normative background” of the early in-house counsel at these platforms was so infused with First Amendment doctrine that they effectively imported it into Twitter, Facebook, and YouTube.

Where platforms have prohibited content (e.g. obscenity, violence, hate speech), they have been motivated by corporate identity and monetary interests. Misinformation has been policed only by proxy: by targeting bad actors or bad behaviour. But platforms have avoided assuming a role as arbiters of truth. If a private individual says something openly using their primary, personally identified account, the platforms have left it up, no matter how false or harmful the information might be. Only recently have the platforms started to police misinformation qua misinformation, and even then with a light touch.

What fol­lows next is a sum­ma­ry of the (of­ten re­cent) high-wa­ter mark of each plat­form’s treat­ment of non-COVID-19 mis­in­for­ma­tion.

Facebook

“[F]alse news does not vi­o­late [Facebook’s] Community Standards.” Unless it vi­o­lates one of their oth­er con­tent poli­cies, Facebook will not take down mis­in­for­ma­tion. Their ap­proach to tack­ling mis­in­for­ma­tion (since 2016) has been to use fact-check­ers who are certified by the International Fact-Checking Network. When a fact-check­er judges con­tent to be false, Facebook will down­rank the con­tent so that many few­er peo­ple will see it, at­tach a warn­ing la­bel, and in­clude a link to cor­rec­tive in­for­ma­tion.

Twitter

In February, 2020, Twitter an­nounced a pol­i­cy tar­get­ing ma­nip­u­lat­ed me­dia (e.g. deep fakes). When such ma­nip­u­lat­ed me­dia risks cre­at­ing “se­ri­ous harm,” Twitter will delete it; oth­er­wise, Twitter will sim­ply la­bel it. While this can be con­strued as tar­get­ing con­tent, this is a spe­cial cat­e­go­ry of mis­in­for­ma­tion. The medi­um has been ma­nip­u­lat­ed or syn­thet­i­cal­ly gen­er­at­ed. Of course there is an im­plic­it claim that the per­son de­pict­ed in the pho­to, video, or au­dio ac­tu­al­ly did what they have been de­pict­ed do­ing (e.g. a video de­pict­ing a trust­wor­thy per­son X ad­vo­cat­ing dan­ger­ous be­hav­iour Y). In this sense, ma­nip­u­lat­ed me­dia makes a meta claim. Twitter’s ma­nip­u­lat­ed-me­dia pol­i­cy is the clos­est that Twitter has come to delet­ing claims be­cause they are false. However, Twitter’s poli­cies would not re­sult in dele­tion of a text-based ver­sion of the same claim (e.g. a tweet say­ing that “trust­wor­thy per­son X ad­vo­cat­ed dan­ger­ous be­hav­iour Y”). This re­veals that Twitter’s ma­nip­u­lat­ed-me­dia pol­i­cy re­quires more than mere false­hood and harm in or­der to trig­ger dele­tion. Their pol­i­cy to­ward ma­nip­u­lat­ed me­dia is based on the man­ner in which the claim is made and thus is more prop­er­ly cat­e­go­rized as a be­hav­iour-based dis­tinc­tion.

YouTube

In February, 2020, YouTube an­nounced sev­er­al cat­e­gories of con­tent that it would re­move: ma­nip­u­lat­ed me­dia that pos­es a “se­ri­ous risk of egre­gious harm,” mis­in­for­ma­tion about the “vot­ing or cen­sus process,” and “false claims re­lat­ed to the tech­ni­cal el­i­gi­bil­i­ty re­quire­ments for cur­rent po­lit­i­cal can­di­dates.” The ma­nip­u­lat­ed-me­dia cat­e­go­ry is much like that from Twitter’s pol­i­cy. However, the lat­ter two cat­e­gories (po­lit­i­cal-process mis­in­for­ma­tion) are rare ex­am­ples where a plat­form has de­cid­ed to re­move mis­in­for­ma­tion based pure­ly on it be­ing false. This is low-hang­ing fruit, giv­en that there is a clear ground truth about what the vot­ing and cen­sus process­es in­volve and what the tech­ni­cal el­i­gi­bil­i­ty re­quire­ments are.

Alternatives to deletion

Among ma­jor so­cial me­dia plat­forms, YouTube’s re­cent pol­i­cy re­lat­ing to elec­tion or cen­sus mis­in­for­ma­tion is the only oth­er ex­am­ple where false claims would be re­moved be­cause of their false­hood. The plat­forms all oth­er­wise re­sort to less in­tru­sive al­ter­na­tives: down­rank, flag or la­bel, and pro­mote more ac­cu­rate con­tent.

The harms of misinformation

Force, and fraud, are in war the two car­di­nal virtues.—Thomas Hobbes

Misinformation caus­es both so­ci­etal and in­di­vid­ual harm. At the so­ci­etal lev­el, mis­in­for­ma­tion can re­sult in “dis­tor­tion of de­mo­c­ra­t­ic dis­course,” “ma­nip­u­la­tion of elec­tions,” “ero­sion of trust in significant pub­lic and pri­vate in­sti­tu­tions,” “en­hance­ment and ex­ploita­tion of so­cial di­vi­sions,” and “threats to the econ­o­my.” Misinformation sows seeds of doubt, mis­trust, and a “cas­cade of cyn­i­cism” that spreads from par­tic­u­lar sources to me­dia and ex­perts in gen­er­al. Misinformation also cre­ates di­rect harms to in­di­vid­u­als. At the low end, this might be mis­in­for­ma­tion as in­nocu­ous as “keep­ing your cell­phone charged at 100% will ex­tend its bat­tery life.” But it also in­cludes much more con­se­quen­tial mis­in­for­ma­tion that can cause peo­ple to waste time and mon­ey on med­ical treat­ments that do noth­ing or even wors­en a prog­no­sis.

In this ar­ti­cle, I present a hy­poth­e­sis that the ma­jor so­cial me­dia plat­forms are op­er­at­ing un­der a Charter-like ap­proach to free ex­pres­sion that bal­ances free ex­pres­sion against these harms caused by mis­in­for­ma­tion. COVID-19 mis­in­for­ma­tion ap­par­ent­ly meets the plat­forms’ thresh­old for in­ter­ven­tion. This rais­es the fur­ther ques­tion: what oth­er mis­in­for­ma­tion might also meet this thresh­old?

YouTube has al­ready tar­get­ed a small cat­e­go­ry of de­mo­c­ra­t­ic-process mis­in­for­ma­tion. Protecting the in­tegri­ty of the po­lit­i­cal process from the harms of mis­in­for­ma­tion has also been rec­og­nized in Canada as a justification for in­fring­ing the right to free ex­pres­sion.

Having presented the general harms of misinformation above, I turn next to a short list of misinformation that is particularly and demonstrably harmful in diverse ways.

Misleading health claims can give patients and families false hope; they can lead patients to forgo effective treatment or to undergo interventions that carry risk with no benefit. When the proper treatment has societal benefits (vaccination, for example), following the misinformation leads to societal harms.

Conspiracy the­o­ries are an­oth­er cat­e­go­ry of mis­in­for­ma­tion. These take ad­van­tage of a psy­cho­log­i­cal need to ex­plain, to con­trol, and to feel like you’re on the “in­side.” Conspiracy the­o­ries cer­tain­ly over­lap with the cat­e­go­ry of mis­lead­ing health claims. Some anti-vac­ci­na­tion the­o­ries in­volve a claim that vac­cines are dan­ger­ous yet rec­om­mend­ed be­cause of “se­cret and malev­o­lent forces.” Another ex­am­ple of over­lap is the con­spir­a­cy the­o­ry that HIV is not the cause of AIDS. But they also influence how peo­ple re­ceive in­for­ma­tion about so­cial­ly-im­por­tant is­sues like cli­mate change. And con­spir­a­cy the­o­ries are as­so­ci­at­ed with racist at­ti­tudes, al­though the di­rec­tion of cau­sa­tion has not been es­tab­lished.

Another kind of harm­ful mis­in­for­ma­tion may sim­ply be fraud. This in­cludes financial hoaxsters, mul­ti-lev­el mar­ket­ing schemes, and even the ped­dling of “or­ga­nized pseudole­gal com­mer­cial ar­gu­ments” (freemen-on-the-land or sov­er­eign-cit­i­zen the­o­ries). This lat­ter cat­e­go­ry has “proven dis­rup­tive, inflict[s] un­nec­es­sary ex­pens­es on oth­er par­ties, and [is] ul­ti­mate­ly harm­ful to the per­sons who ap­pear in court and at­tempt to in­voke these vex­a­tious strate­gies.”

Misinformation and its harm is as old as speech it­self, but on­line mis­in­for­ma­tion has ad­di­tion­al char­ac­ter­is­tics that in­crease the risk and ex­tent of that harm, in­clud­ing speed, vi­ral­i­ty, and anonymi­ty. Platforms may very well be able to de­crease the im­pact of mis­in­for­ma­tion by tar­get­ing these force mul­ti­pli­ers rather than the mis­in­for­ma­tion di­rect­ly.

An emerging balance

As non-gov­ern­ment en­ti­ties, so­cial me­dia plat­forms are not sub­ject to the con­straints of the First Amendment or s. 2(b) of the Charter. They in fact benefit from the right to free ex­pres­sion to se­cure their own free­dom to gov­ern ex­pres­sion on their plat­forms as they see fit. But, it is still re­veal­ing to ex­am­ine what free-ex­pres­sion val­ues are reflected in the poli­cies adopt­ed by so­cial me­dia plat­forms.

The plat­forms’ ac­tions un­til now have ap­peared to reflect lib­er­al First Amendment val­ues and a “re­duc­tion­ist” con­cep­tion of harm. Platforms’ proac­tive treat­ment of COVID-19 mis­in­for­ma­tion, in the ab­sence of gov­ern­ment reg­u­la­tion or ob­vi­ous mar­ket pres­sures, re­veals that the First Amendment isn’t the whole sto­ry. There is a com­mu­ni­tar­i­an or egal­i­tar­i­an as­pect to their self-gov­er­nance as well. Or if I am wrong about the mo­ti­va­tions, their ac­tions to­day at least re­veal that there is space for such a com­mu­ni­tar­i­an or egal­i­tar­i­an as­pect.

A First Amendment Perspective

The US Supreme Court has interpreted the First Amendment to only allow content-based restrictions within a narrow list of traditionally recognized categories: “incitement of imminent lawless action,” obscenity, defamation, child pornography, speech “integral to criminal conduct,” fighting words, fraud, true threats, and “speech presenting some grave and imminent threat the government has the power to prevent.” The Court has explicitly noted that “absent from those few categories... is any general exception to the First Amendment for false statements”; “erroneous statement is inevitable in free debate.” The Court has, however, placed less value on false statements, but always in the context of some other “legally cognizable harm.” Where the Court has allowed a content-based distinction that would treat false statements differently than true statements, it has been in one of two contexts: commercial speech or falsehood plus harm. And for non-commercial false speech, a remedy is only available to a person harmed after such harm has occurred: there are no prior restraints on non-commercial false statements.

The United States Supreme Court has also “rejected as startling and dangerous a free-floating test for First Amendment coverage based on an ad hoc balancing of relative social costs and benefits.” Note that there is no analogue of Section 1 of the Charter in the US Constitution. Thus, the entire issue is generally framed in terms of First Amendment “coverage”: e.g. does the First Amendment cover obscenity? The Supreme Court of Canada has instead given s. 2(b) of the Charter the broadest possible conception, protecting any attempt to convey meaning. (This does, however, exclude violence and threats of violence.) Whether something like obscenity can be prohibited by the government is framed in Canada as a question of whether the infringement of the right to free expression can be justified under s. 1 of the Charter. This exercise is infused with a balancing of “social costs” and benefits.

Platforms have hewed close­ly to First Amendment ideals, al­though stray­ing slight­ly with re­spect to sev­er­al cat­e­gories of con­tent. These de­par­tures are best ex­plained as a re­sponse to for­eign reg­u­la­tion and mar­ket pres­sures rather than a rea­soned bal­anc­ing of harm and ex­pres­sion: “It’s no co­in­ci­dence that YouTube, Facebook, Twitter, and Microsoft, which earn sub­stan­tial por­tions of their rev­enues in Europe, ap­ply European hate-speech stan­dards glob­al­ly—even in coun­tries where that speech is le­gal.”

Until now though, plat­forms have re­sist­ed pres­sure to po­lice mis­in­for­ma­tion. Mark Zuckerberg has said:

[M]isin­for­ma­tion, I think is real­ly tricky... every­one would ba­si­cal­ly agree that you don’t want the con­tent that’s get­ting the most dis­tri­bu­tion to be flagrant hoax­es that are trick­ing peo­ple. But the oth­er side... is that a lot of peo­ple ex­press their life and their ex­pe­ri­ences by telling sto­ries, and some­times the sto­ries are true and some­times they’re not. And peo­ple use satire and they use fiction ... and the ques­tion is, how do you differentiate and draw the line be­tween satire or a fictional sto­ry? Where is the line?

But wher­ev­er that line might be, COVID-19 mis­in­for­ma­tion is across that line for these plat­forms. This re­veals that plat­forms are op­er­at­ing un­der a vi­sion clos­er to that from Canada’s Charter than that from the First Amendment.

A Charter Perspective

Some pre­vi­ous re­stric­tions of mis­in­for­ma­tion by the plat­forms may ap­pear at first as an adop­tion of a Charter per­spec­tive, but these pre­vi­ous re­stric­tions have been con­sis­tent with the nar­row ex­cep­tions that the US Supreme Court has carved out from First Amendment pro­tec­tion. For ex­am­ple, plat­forms have gen­er­al­ly not al­lowed mis­lead­ing or de­cep­tive ad­ver­tise­ment. This reflects a con­cep­tion of free ex­pres­sion that places less val­ue on com­mer­cial or profit-mo­ti­vat­ed speech.

But the new policies of Twitter, Facebook, Google, and YouTube with respect to COVID-19 misinformation amount to a prior restraint (within each platform) on non-commercial misinformation, before any legally cognizable harm has occurred. This is wholly inconsistent with a First Amendment-inspired lens on the role of free expression. While prior restraints are viewed with particular skepticism by Canadian courts, they are not out of the question, and courts will consider the balance of the “social costs and benefits” that US law avoids.

As presented above, COVID-19 is just the latest subject of misinformation, yet platforms have adopted unusually proactive and strict measures to delete it. When we look at the social-costs side of the ledger in Canadian jurisprudence, we find a plethora of factors that can explain the platforms’ differential treatment of COVID-19 misinformation, particularly in the s. 1 analysis. To justify an infringement of a Charter right, s. 1 requires the government to identify a pressing and substantial objective and to demonstrate that the means chosen to achieve that objective are proportional (further requiring that the government demonstrate that the means are rationally connected to the objective, that the means are minimally impairing, and that the deleterious effects do not outweigh the salutary effects of the measure).

At the high­est lev­el, a s. 1 analy­sis of in­fringe­ments of the right to free ex­pres­sion adopts a “com­mu­ni­tar­i­an un­der­stand­ing of the harms.” It mat­ters that the harms may be suffered by vul­ner­a­ble groups. Even the dis­sent in Irwin Toy (which would have struck down a pro­hi­bi­tion on ad­ver­tis­ing di­rect­ed at chil­dren) said that re­stric­tions could be justified for the “pro­tec­tion of the com­mu­ni­ty.” A “so­cial nui­sance” has been sufficient to jus­ti­fy a s. 2(b) in­fringe­ment. The Court has ap­proved of a re­stric­tion on ad­ver­tis­ing that was “like­ly to cre­ate an er­ro­neous im­pres­sion about the char­ac­ter­is­tics, health effects, or health haz­ard [of to­bac­co].” “Avoidance of harm to so­ci­ety” was an ob­jec­tive that justified crim­i­nal ob­scen­i­ty law. “[C]ourts must de­ter­mine as best they can what the com­mu­ni­ty would tol­er­ate oth­ers be­ing ex­posed to on the ba­sis of the de­gree of harm that may flow from such ex­po­sure.” To the ex­tent that there is un­cer­tain­ty about the harm­ful effects of mis­in­for­ma­tion, s. 1 ju­rispru­dence has not de­mand­ed ev­i­den­tial cer­tain­ty in or­der to jus­ti­fy an in­fringe­ment. The Court in R. v. Butler ex­plic­it­ly re­lied on “in­con­clu­sive so­cial sci­ence ev­i­dence.”

The approach that platforms have taken to COVID-19 misinformation can be explained through this s. 1 lens. Our collective response to COVID-19 is rooted in a communitarian understanding of its harms. We are not self-isolating because we may personally get sick; we are self-isolating in order to deny the virus a vector for transmission. This is why platforms are combatting misinformation that would lead to people violating isolation guidelines. Facebook has even gone so far as to take down pages that organize protests that “defy government’s guidance on social distancing.” It also appears that these harms may be suffered more by vulnerable groups. While there is disagreement and uncertainty about fine details and forecasts, public health authorities are largely in agreement about the big picture on COVID-19: it exists, it is caused by a virus, it spreads through close human–human interaction, and risk of spread is reduced by maintaining physical distance, avoiding touching one’s face, washing one’s hands, and wearing a mask. This gives platforms a set of authoritative sources, deviation from which the community is not willing to tolerate. And our COVID-19 response has been likened to a war, which can elicit a statist deference to authority. All of these factors are evocative of those that the Canadian government has previously relied upon (or that courts have alluded to) to justify infringements of the right to free expression.

Despite the claims I’ve made in this section, I acknowledge there may be a more cynical explanation for the platforms’ change in tack towards COVID-19 misinformation. The leadership within these companies may simply feel more personally vulnerable to the effects of COVID-19 misinformation. We see throughout history that the response to a harm is often delayed until that harm is felt by the powerful. For example, with respect to protections against unreasonable search and seizure, “[s]ocial attitudes toward vagrants as an unworthy underclass delayed resentment of such searches until other more esteemed members of society were subjected to them.”

A Fake News Act

It is one thing for a so­cial me­dia plat­form to vol­un­tar­i­ly adopt mea­sures to com­bat harm­ful mis­in­for­ma­tion, but could our gov­ern­ment di­rect­ly or in­di­rect­ly con­trol this speech? Such con­trol could take sev­er­al forms span­ning from a crim­i­nal offence that tar­gets speak­ers to reg­u­la­tion that is di­rect­ed at so­cial me­dia plat­forms.

The Criminal Code for­mer­ly made it a crime to spread false news:

Every one who wil­ful­ly pub­lish­es a state­ment, tale or news that he knows is false and that caus­es or is like­ly to cause in­jury or mis­chief to a pub­lic in­ter­est is guilty of an in­dictable offence and li­able to im­pris­on­ment for a term not ex­ceed­ing two years.

In R. v. Zundel, the Supreme Court of Canada (splitting 4–3) held that the false-news offence was unconstitutional. The majority (written by Justice McLachlin, as she then was) reaffirmed that s. 2(b) of the Charter protects deliberate lies. Deliberate lies are a form of expression. And it isn’t the case, as was argued by the government, that “deliberate lies can never have value.” The majority also identified that it would be difficult to determine with sufficient certainty the meaning of a particular expression and then determine whether it is false.

Given that s. 181 in­fringed Zundel’s free­dom of ex­pres­sion, the ques­tion turned to whether the gov­ern­ment could jus­ti­fy that in­fringe­ment un­der s. 1 of the Charter.

The Court held that the government failed to identify a pressing and substantial objective to which the false-news offence was directed. They saw it as a holdover from the Statute of Westminster of 1275, enacted as part of Canada’s Criminal Code in 1892 with no explanation. They found no evidence that Parliament had retained it to the present day to address any particular social problem. And even if they were to have accepted one of the submitted purposes, they would have found the offence to be overbroad and thus not minimally impairing. A particular concern was the chilling effect created by the vagueness and overbreadth of the offence. Finally, it was significant that this was a criminal offence, which demands a higher degree of justification.

On April 15, 2020, Dominic LeBlanc, President of the Privy Council, said that the government is “considering introducing legislation to make it an offence to knowingly spread misinformation that could harm people.” This seems like it would simply be a re-enactment of the unconstitutional s. 181.

But a new Fake News Act wouldn’t necessarily suffer the same fate as s. 181. Parliament could make clear the new purpose of the law, something that was missing in Zundel. They could confine its period of application to a particular emergency. They could restrict it to apply to a narrow category of specific claims instead of “any false news or tale whereby injury or mischief is or is likely to be occasioned to any public interest.” And this would not need to be part of the Criminal Code. For example, it could empower the Canadian Radio-television and Telecommunications Commission (CRTC) rather than prosecutors. Social media platforms contribute to the viral spread of misinformation and thus to the misinformation’s harm. A Fake News Act could target platforms rather than individual speakers (who would be left free to make their false claims on personal blogs).

From a fed­er­al­ism per­spec­tive, this would be with­in the ju­ris­dic­tion of the fed­er­al gov­ern­ment ei­ther through its con­trol of telecom­mu­ni­ca­tions and broad­cast­ing (if it tar­gets plat­forms), or it could be with­in the emer­gency-pow­ers branch of fed­er­al ju­ris­dic­tion over peace, or­der, and good gov­ern­ment.

While a Fake News Act like I just de­scribed would ab­solute­ly in­fringe the s. 2(b) right to free ex­pres­sion, it would have a bet­ter shot at justification un­der s. 1 than the for­mer Criminal Code s. 181. In or­der to be justified un­der s. 1, the in­fringe­ment would have to be a “rea­son­able lim­it[] pre­scribed by law as can be demon­stra­bly justified in a free and de­mo­c­ra­t­ic so­ci­ety.”

Before em­bark­ing on the heart of the s. 1 analy­sis, it is im­por­tant to char­ac­ter­ize the harm be­ing tar­get­ed and the na­ture of the right affected. This char­ac­ter­i­za­tion is crit­i­cal to a prop­er ap­pli­ca­tion of s. 1. The char­ac­ter­i­za­tion will in­form the pro­por­tion­al­i­ty prong of the Oakes test, but it will also tune the “mar­gin of ap­pre­ci­a­tion” that the leg­is­la­ture is due and set the ap­pro­pri­ate “stan­dard of justification” through­out the analy­sis.

Even if di­rect­ed at plat­forms, there are two pos­si­ble con­cep­tions of the right be­ing affected. One view is that this would be tar­get­ing the com­mer­cial or profit-mo­ti­vat­ed as­pect of speech that is not even the plat­form’s own ex­pres­sion. The Court would be like­ly to view this ex­pres­sion as low­er-val­ue ex­pres­sion and ap­ply the s. 1 analy­sis more le­nient­ly for the gov­ern­ment.

On the oth­er hand, the Court may not read much into the for­mal dis­tinc­tion that it is the plat­form be­ing tar­get­ed. In R. v. Guignard, there was a by­law that pro­hib­it­ed peo­ple from “ad­ver­tis­ing” us­ing signs (in­clud­ing counter-ad­ver­tis­ing) in cer­tain ar­eas of the city. The Court rec­og­nized that the by­law effectively re­strict­ed Guignard to “vir­tu­al­ly pri­vate com­mu­ni­ca­tions such as dis­trib­ut­ing leaflets in the neigh­bor­hood around his prop­er­ty.” A user whose speech is re­moved by a plat­form has the op­tion of post­ing on their per­son­al blog with still world­wide reach, but this might be akin to “vir­tu­al­ly pri­vate com­mu­ni­ca­tions such as dis­trib­ut­ing leaflets.” If con­tent is blocked from so­cial me­dia plat­forms, “then the con­tent would effectively not ex­ist.”

Much of the s. 1 analy­sis will de­pend on whether the re­stric­tions di­rect­ed at the plat­forms amount to mere­ly tak­ing away a mega­phone or re­strict­ing users to vir­tu­al­ly pri­vate com­mu­ni­ca­tions.

The Court would almost certainly accept that the Act has a pressing and substantial objective: to combat the society-wide harms, including non-compliance with public-health guidance, caused by COVID-19 misinformation that is spread easily through major social media platforms. These harms are presented above, and the Court has generally adopted a fairly permissive approach at this stage of the s. 1 analysis.

The government would also likely be able to demonstrate a rational connection between the regulation and the objective. It does seem straightforward that directing platforms to delete COVID-19 misinformation would reduce belief in that misinformation. But the effect of misinformation may not be as large as is often attributed to it, and evidence suggests that other strategies like corrective information or labelling fake news may not yield intuitive results. I am not aware of any empirical evidence regarding the effectiveness of post-publication deletion of false claims, let alone in the context where this deletion is likely incomplete (posts will slip through the cracks). Even in light of this social science uncertainty though, the Court has accepted that “[t]he government must show that it is reasonable to suppose that the limit may further the goal, not that it will do so.” “Where the court is faced with inconclusive or competing social science evidence relating the harm to the legislature’s measures, the court may rely on a reasoned apprehension of that harm.”

Whether a new Fake News Act would be viewed as min­i­mal­ly im­pair­ing would de­pend on how nar­row­ly the pro­hi­bi­tion is tai­lored and ap­plied in prac­tice. If it is time-lim­it­ed, re­strict­ed to specific, pre-identified COVID-19 mis­in­for­ma­tion (so as to not be over­broad), ap­ply­ing only to large so­cial me­dia plat­forms (so as to only re­move a mega­phone, not gag the speak­ers), it would be more like­ly to be seen as min­i­mal­ly im­pair­ing. It would also need to be demon­strat­ed that less­er al­ter­na­tives would not mean­ing­ful­ly achieve the gov­ern­ment’s ob­jec­tive. Platforms and the gov­ern­ment do have less­er al­ter­na­tives that they use for non-COVID-19 mis­in­for­ma­tion. This would cut against the gov­ern­ment’s as­ser­tion that dele­tion or­ders would be min­i­mal­ly im­pair­ing.

It would also have to be seen whether such a regime would be ad­min­is­tered in a man­ner that re­spects a nar­row tai­lor­ing. To de­ter­mine whether some­thing is a false claim, it “must be seen in its en­tire­ty, with close at­ten­tion to con­text, tone, and pur­pose. A work that may ap­pear to be [false (orig­i­nal­ly, ob­scen­i­ty)] may in fact be a bit­ing po­lit­i­cal satire or cri­tique.”

The final bal­anc­ing of the salu­tary and dele­te­ri­ous effects of a Fake News Act will de­pend on its form and on the Court’s con­cep­tion of the speech that it is tar­get­ing. If it tar­gets plat­forms through the CRTC, it may be seen as significantly less dele­te­ri­ous than the pre­vi­ous s. 181 of the Criminal Code, es­pe­cial­ly if the op­por­tu­ni­ty to pub­lish else­where is seen as a mean­ing­ful al­ter­na­tive for speak­ers. But if al­ter­na­tive av­enues for speech ac­tu­al­ly are mean­ing­ful al­ter­na­tives for speak­ers, this may de­crease the Act’s salu­tary effects, since the ex­pres­sion would still be en­ter­ing the world.

A Fake News Act may be able to strike this balance. But given the fine line that such an act would have to toe, our government would likely be able to control misinformation through legislation only during a discrete emergency involving identifiable and imminently harmful misinformation that requires suppression (as opposed to lesser alternatives like flagging). While I have presented this as a possibility, I admit it is hard to imagine just what expression might meet this hurdle.

Conclusion

Platforms have ap­pro­pri­ate­ly shift­ed to­wards a Charter-like per­spec­tive on free ex­pres­sion, re­vealed most acute­ly in their de­ci­sion to delete COVID-19 mis­in­for­ma­tion. If plat­forms view as a se­ri­ous re­spon­si­bil­i­ty the task of bal­anc­ing free ex­pres­sion against harm, this im­plies that more ar­eas of mis­in­for­ma­tion may soon be sub­ject to plat­form over­sight, es­pe­cial­ly false and harm­ful health claims. And, Parliament may even be able to re­quire such plat­form over­sight in dis­crete emer­gen­cies where there is identifiable and im­mi­nent­ly harm­ful mis­in­for­ma­tion.