Hate Speech And Its Regulation In The Digital Age
Written By Minji Kang
Spring 2022 Symposium
Introduction to the First Amendment of the US Constitution
The First Amendment of the US Constitution provides that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."[1] This command safeguards against governmental suppression of viewpoints expressed in the marketplace of ideas. The marketplace metaphor largely stems from John Stuart Mill's economic analogy of speech in his 1859 work On Liberty.[2] Asserting that no one alone knows the truth, and that no single idea embodies either the truth or its antithesis, Mill claims that the free competition of ideas is the best way to separate falsehood from fact, and that ideas left untested will slip into dogma.[3] The First Amendment protects speech "critical to the advancement of knowledge, the transformation of tastes, political change, cultural expression, and other purposes and consequences of constitutionally protected speech."[4] What constitutes an abridgment of the freedom of speech has been a recurring debate among legal scholars, with countless cases brought to trial every year.
Unless one adopts an absolutist reading of the First Amendment, under which no speech may ever be restricted, one must grapple with which forms of speech can and cannot be protected. Hate speech is a highly controversial form of expression that nonetheless generally receives First Amendment protection. Most recently, in Matal v. Tam, the United States Supreme Court reaffirmed that hate speech receives First Amendment protection.[5] The protection, however, is not absolute: where hate speech falls into the categories of incitement to imminent lawless action, fighting words, defamation, or true threats, proscription is deemed proper. In contrast to the US legal system, which proactively defends freedom of speech, European countries, including France and Germany, ban some forms of hate speech, reflecting the lack of universal consensus on how hate speech should be defined. In the age of social media, borderless cyberspace acts as a platform of polarization and proliferates animosity, requiring appropriate guidelines and enforceable measures to compensate for the existing legal discrepancies in treating hate speech. The following examines the role of First Amendment doctrine in regulating the modes of communication in cyberspace in comparison to international law. In doing so, I argue that existing US constitutional doctrine requires revision to accommodate the rapidly changing realities of cyberspace.
Hate Speech on Social Media
Although there is no consensus on its definition, hate speech generally refers to "any form of expression through which speakers intend to vilify, humiliate, or incite hatred against a group or a class of persons on the basis of race, religion, skin color, sexual identity, gender identity, ethnicity, disability, or national origin."[6] With the rise in the number of social media users, the harms of online hate speech have become undeniable, and companies have adjusted their policies accordingly. Twitter, for instance, announced that it would proscribe content dehumanizing discernible groups, citing the link between online hate speech and increased violence.[7] Social media acts as a polarization medium, exacerbating partisan animosity through the creation of echo chambers and the reproduction of similar content. Users can customize their filters and select content, followers, and followed accounts. This individually tailored exposure heightens the risk of confirmation bias, reinforcing the user's own viewpoint while muting opposing ideologies.[8] Extremist groups such as ISIS take advantage of these algorithms to recruit followers who, in the absence of moderate views, become susceptible to the normalization of violence.[9] With hate groups capitalizing on the echo chamber effect of social media and the heightened threat of lawlessness it brings, it is of utmost importance to examine how different nations regulate hate speech online.
The US Constitution on Hate Speech
The First Amendment, made applicable to the states through the Due Process Clause of the Fourteenth Amendment, promises the protection of speech. This protection, however, is subject to exceptions: categories of speech such as incitement to imminent lawless action, true threats, and fighting words fall outside its reach. Nor is the protection absolute even for covered speech, as the government may impose reasonable time, place, and manner restrictions. The context of the speech can determine the degree to which it receives First Amendment protection. Generally, if speech addresses matters of public concern that relate to the well-being of the community, it receives greater protection on a "legitimate interest" basis.[10]
Elonis v. United States
Even with the many precedents delineating the boundaries between protected and unprotected speech, confusion remains. With the growth of social media, where more adults than ever use platforms such as Facebook, Twitter, and LinkedIn, the lines between professional and personal conduct have often blurred. Elonis, decided in 2015, was the first Supreme Court case to test the constitutionality of criminalizing potential threats in a social media context.[11] The Court's decision not only has significant implications for First Amendment law but also offers a window into an American legal system that places considerable weight on the argument for freedom of speech.
In Elonis v. United States, the Supreme Court examined the First Amendment protection of an alleged true threat posted online. The petitioner, Anthony Douglas Elonis, was convicted under 18 U.S.C. § 875(c), which criminalizes the transmission of threats in interstate commerce, for posting threats to injure his estranged wife, his coworkers, the police, a kindergarten class, and a Federal Bureau of Investigation agent on Facebook.[12] He had posted a series of self-styled rap lyrics with graphic content, which the district court construed as "true threats" falling outside the scope of First Amendment protection. Elonis was convicted on four of the five counts and sentenced to three years and eight months of imprisonment followed by three years of supervised release. Elonis renewed his jury-instruction challenge on appeal. The U.S. Court of Appeals for the Third Circuit affirmed his conviction, holding that a subjective-intent standard would fail to protect individuals from the fear of violence that the "true threat" exception was created to prevent.[13]
The petitioner appealed once more, and the Supreme Court considered whether a conviction for threatening another person under 18 U.S.C. § 875(c) requires evidence of the defendant's subjective intent to threaten. In an 8-1 decision delivered by Chief Justice Roberts, the Court held that an objective showing that a reasonable person would perceive the posts as threats is not enough to convict. It reasoned that communication alone does not make conduct wrongful, and that an objective reasonable-person standard risks conflating innocent, accidental conduct with purposeful, intended acts. An objective standard would thus risk criminalizing an innocent actor who had no knowledge of the facts that made their conduct blameworthy. The Court's interpretation of 18 U.S.C. § 875(c) demonstrates its heavy reliance on freedom-of-speech protection. The crucial element of the decision came down to separating legal innocence from wrongful conduct on the basis of the mental-state requirement, thereby shielding communication that lacks the subjective intent to cause harm. This accordingly weighs the speaker's mindset more heavily than the receiver's interpretation of the threat.
The First Amendment's limitations manifest in an age where the modes of communication transcend physical space, geographic location, and a time-limited audience; the doctrine is built on antiquated presumptions. Counterspeech, a common justification for US speech protection, is unlikely to arise within an echo chamber of like-minded individuals. Moreover, the inability to discern the precise intent behind online speech deters courts from construing online communication as a real threat. Because of its particular character, including its publication on the internet, such speech does not have an immediate intimidating effect and is therefore broadly protected even when the victims of the speech are terrorized.[14] Distinct from subjective or objective intent to threaten, fear is a social evil that the state must protect against, and it should be made clear that the internet is unlike any other medium. Furthermore, the existing predetermined categories of unprotected speech, including true threats, incitement, defamation, and fighting words, are ill-suited to account for online communication that falls into the gray areas of First Amendment doctrine.
International Law on Hate Speech
Like the US, international law recognizes the importance of the right to freedom of opinion and expression through a number of international and regional human rights instruments, including the Universal Declaration of Human Rights and Article 19 of the International Covenant on Civil and Political Rights (ICCPR). In regulating hate speech, however, the ICCPR also places an obligation on States to proscribe hate speech under Article 20(2), which provides that "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law."[15] Most recently, in 2018, the United Nations called on multinational media platform companies to align their speech codes with the standards enshrined in international human rights law. While its community guidelines may face swift revision following the recent acquisition by Elon Musk, Twitter has historically observed the values of international human rights law, going so far as to de-platform the former President of the United States.[16] CEO Jack Dorsey reasoned that it was ultimately the right decision for the company to prioritize public safety, even in the face of extraordinary consequences.[17]
Yahoo! Inc. v. La Ligue Contre Le Racisme Et L’antisemitisme (LICRA)
Similar to the international human rights law that regulates hate speech, French hate speech doctrine outlaws any communication intended to incite discrimination against, hatred of, or harm to anyone on account of their membership or non-membership in a particular group.[18] It is especially strict on speech related to the Holocaust, banning Nazi symbols and punishing Holocaust denial. The Gayssot Act restricts freedom of expression by making it an offense to deny, in speech or writing, the existence of the Holocaust or the asphyxiation of Jews in Nazi gas chambers.[19] Yahoo! Inc. v. La Ligue Contre Le Racisme Et L'Antisémitisme examined the role of speech regulation in borderless cyberspace when the laws of two nations conflict. In reconciling the French criminal provisions against antisemitism with United States First Amendment protection, the United States Court of Appeals examined whether Yahoo!, a California-based website, was at fault for allowing the mass circulation of Nazi symbols and Nazi-related auctions.[20]
Yahoo!, an American Internet service provider, brought suit in federal district court in diversity against La Ligue Contre Le Racisme et L'Antisémitisme (LICRA, the International League Against Racism and Anti-Semitism), seeking a declaratory judgment that two interim orders issued by a French court were unenforceable. Yahoo! argued that the French orders were not enforceable in the United States because they would violate the First Amendment's freedom of expression guarantee. Yahoo!'s United States website is written in English, targets users in the United States, and relies on servers located in California. Yahoo!'s foreign subsidiaries, such as Yahoo! France, Yahoo! U.K., and Yahoo! India, maintain comparable websites for their respective countries, providing content in the local language, targeting local citizens, and adopting policies that comply with local laws and customs. In practice, however, a user in the United States can still reach the French Yahoo! page simply by navigating to fr.yahoo.com.
The dispute arose from the massive circulation of Nazi symbols and Nazi-related auctions on Yahoo!'s website. French criminal law prohibits the very appearance of such materials on a webpage that, while hosted in California, would be available to Yahoo! users in France. Accordingly, the French court issued two interim orders against Yahoo!. The first directed Yahoo! to remove the "negationists" index from Yahoo! France. Yahoo! objected, asserting that "there was no technical solution which would enable it to comply fully with the terms of the court order."[21] The second interim order reaffirmed the first and directed Yahoo! to comply within three months or face a penalty of 100,000 francs per day. Yahoo! France again objected, contending that it did "comply in large measure with the spirit and letter of the first order" and that the penalty therefore should not apply.[22]
The case brought before the United States Court of Appeals did not focus on testing the legality of allowing the mass circulation of Nazi symbols. Instead, the Ninth Circuit examined whether it is consistent with the Constitution and laws of the United States for another nation, France, to regulate speech by a United States resident within the United States on the basis that such speech can be accessed by Internet users in that nation. In a world where ideas and information transcend borders, the Internet renders the physical distance between speaker and audience virtually meaningless, and the implications of this question go far beyond the facts of this case. The court concluded that it was difficult to know whether enforcement of the French court's interim orders would breach Yahoo!'s First Amendment rights. Ultimately, it reversed the district court's decision and remanded with instructions to dismiss the case.
This case is significant because it exemplifies how different nations strike a balance between protecting freedom of speech and regulating offensive language that may harm its recipients. Under French criminal law, internet service providers are forbidden to permit French users to access materials prohibited by the French government.[23] Yahoo! argued that its First Amendment rights prevail even where they conflict with the French criminal laws against racism, xenophobia, and antisemitism. The court rejected that argument, reasoning that Yahoo!'s First Amendment rights may not have been violated at all, as there was no evidence of how the French orders burdened its speech beyond restricting access to antisemitic materials by Internet users located in France. The court also recognized that the criminal statutes of most nations are not fully compatible with the U.S. Constitution, and that no court has unfettered authority to pass such critical judgment, especially when a criminal statute embodies the will of a foreign sovereign seeking to protect its citizens from morally reprehensible speech.
First Amendment Doctrine Implications
When different laws conflict, it is essential to recognize the context in which each applies. First Amendment speech doctrine has been tested in countless trials throughout American legal history, but its application to new technology remains the center of heated debate. The central problem is the courts' inability to ascertain the immediate and prolonged impact of hate speech, given that online communication does not involve face-to-face interaction. Even speech containing fighting words requires an in-person confrontation to produce the effect the Constitution deems unlawful. The true threat doctrine likewise falls short in identifying a direct threat, as online hate speech can indirectly target a general group of people as well as specific individuals. An allegedly threatening online statement must be proven to convey a serious expression of intent to commit violence before it can be punished, but such intent is difficult to ascertain when the communication is delivered online. Furthermore, attempts to fit speech into the incitement to imminent lawless action category also fail, as cyberspace transcends geographic location and time. If a foreign-based internet user tried to threaten another internet user in the United States, there would be inevitable difficulty in measuring the immediate impact. Courts generally do not regard online hate speech as causing "an imminent lawless action," making it almost impossible to hold the perpetrators of hate speech accountable for their actions.[24] Therefore, while a post can directly incite or threaten individuals, the imminence of the incitement is obscured by these challenges.
Conclusion
In a digital age where algorithms incessantly replicate echo chambers, disperse misinformation, and give rise to unhealthy online communication, the existing First Amendment doctrine protecting freedom of speech does not suffice to adapt to varied scenarios and contexts. As Elonis illustrates, online hate speech rarely falls into any of the categories excluded from First Amendment protection, making the United States an outlier among nations in constitutionally protecting hate speech. The doctrine's inability to effectively tackle hate speech has led to national outcry to amend it in favor of banning such speech. Its failure to protect social media users from malicious threats demonstrates the need for revision so that the doctrine can appropriately combat and control the detrimental effects of hate speech dispersed through online communication.
References
[1] U.S. Const. Amend. 1.
[2] David Schultz and David Hudson. Marketplace of Ideas, MTSU (2017), https://www.mtsu.edu/first-amendment/article/999/marketplace-of-ideas.
[3] Jeffrey Dvorkin. Trusting the News in a Digital Age toward a "New" News Literacy, Wiley-Blackwell (2021).
[4] U.S. Const. Amend. 1.
[5] Matal v. Tam, 137 S. Ct. 1744 (2017).
[6] American Library Association. Hate Speech and Hate Crime. ALA (2017), http://www.ala.org/advocacy/intfreedom/hate.
[7] Twitter. Twitter's policy on hateful conduct. Twitter (April 28, 2022), https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy.
[8] Damon Centola. Why Social Media Makes Us More Polarized and How to Fix It. Scientific American (October 15, 2020), https://www.scientificamerican.com/article/why-social-media-makes-us-more-polarized-and-how-to-fix-it/.
[9] Antonia Ward. ISIS's Social Media Use Poses a Threat to Stability in the Middle East and Africa. RAND Corporation (December 11, 2018), https://www.rand.org/blog/2018/12/isiss-use-of-social-media-still-poses-a-threat-to-stability.html.
[10] ICO. Legitimate Interests. ICO. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/lawful-basis-for-processing/legitimate-interests/.
[11] Elonis v. United States, 575 U.S. 723 (2015).
[12] 18 U.S.C.S. § 875.
[13] Elonis v. United States, 575 U.S. 723 (2015).
[14] William Funk, Intimidation and the Internet. 110 Penn St. L. Rev. 579 (2006).
[15] Office of the High Commissioner for Human Rights. Towards an interpretation of Article 20 of the ICCPR, OHCHR (2010), https://www.ohchr.org/sites/default/files/Documents/Issues/Expression/ICCPR/Vienna/CRP7Callamard.pdf.
[16] Clare Duffy. Elon Musk to Buy Twitter in $44 Billion Deal, CNN (April 26, 2022), https://www.cnn.com/2022/04/25/tech/elon-musk-twitter-sale-agreement/index.html.
[17] Elizabeth Dwoskin. Twitter CEO Jack Dorsey Said the Trump Ban Reflected 'a Failure' to Police Online Discourse, WASH POST (January 14, 2021), https://www.washingtonpost.com/technology/2021/01/13/twitter-trump-ban/.
[18] Press Freedom Act of 29 July 1881, Legislationline (2014), https://www.legislationline.org/documents/id/15730.
[19] Robert Faurisson v. France, Communication No. 550/1993, U.N. Doc. CCPR/C/58/D/550/1993 (1996).
[20] Christine Duh. Yahoo! Inc. v. LICRA, 17 Berkeley Technol. Law J. 359 (2002).
[21] Yahoo! Inc. v. La Ligue Contre Le Racisme et L'Antisemitisme, 433 F.3d 1199 (9th Cir. 2006) (en banc).
[22] Id.
[23] Elissa A. Okoniewski. Yahoo!, Inc. v. LICRA: The French Challenge to Free Expression on the Internet, 18 Am. Univ. Int. Law Rev. 295 (2002).
[24] U.S. Const. Amend. 1.