Online Hate Speech: Special Rapporteur reports on contemporary forms of racism, racial discrimination, xenophobia and related intolerance
By Staff Writer
The United Nations’ Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance has submitted her report to the Human Rights Council. The detailed 20-page report (A/78/538), released on October 18, explores many aspects of online hate speech.
In her report on hate speech, the Special Rapporteur, Ashwini K.P., focuses on global trends and international human rights law standards. The report acknowledges the interconnectedness of digital technologies and human rights issues. It includes an analysis of the manifestations and consequences of online racist hate speech, an overview of the international legal framework, and the responsibilities of States and companies. It also discusses the challenges faced in preventing and addressing online racist hate speech, and closes with conclusions and recommendations for States and other stakeholders.
The Special Rapporteur on racism, racial discrimination, xenophobia and related intolerance was appointed by the Human Rights Council in October 2022. She has attended several international events and conferences, including the Permanent Forum on People of African Descent in Geneva, the eleventh Forum on Business and Human Rights, the eleventh national conference on non-discrimination in Malaysia, and the ninth annual meeting of the Group of Independent Eminent Experts on the Implementation of the Durban Declaration and Programme of Action.
The rise of digital platforms, including social media, has significantly impacted daily life and information sharing. Over half of the world’s population uses these platforms, but they can also perpetuate societal inequities. Social media platforms allow the dissemination of content, including racist hate speech, which can have severe consequences. The Special Rapporteur defines online racist hate speech, discusses its manifestations, and analyzes actors involved in its dissemination. She expresses concern about the rapid spread of such hate speech.
Defining online racist hate speech
Noting the lack of a precise and internationally agreed definition of online hate speech within human rights treaties, the Special Rapporteur draws on elements of the UN Strategy and Plan of Action on Hate Speech and other international standards to suggest a working definition of online racist hate speech for the purposes of the report.
“The Special Rapporteur finds the way in which hate speech is understood within the UN Strategy and Plan of Action helpful. The Strategy and Plan of Action understands hate speech as “any kind of communication in speech, writing or behaviour that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words based on their religion, ethnicity, race, colour, descent, gender or other identity factor”. The Special Rapporteur also refers to the Committee on the Elimination of Racial Discrimination’s general recommendation No. 35 on combating racist hate speech. The general recommendation clarifies that hate speech can include all the specific speech forms referred to in article 4 of the Convention on the Elimination of All Forms of Racial Discrimination and can be directed towards all groups protected under article 1 of the Convention. To define the most serious forms of racist hate speech, the Special Rapporteur refers to article 20 of the International Covenant on Civil and Political Rights, which prohibits any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence. The Special Rapporteur interprets these elements as applying within an online context.”
Manifestations of online racist hate speech
Online racist hate speech can manifest in various forms, including racism based on race, color, descent, national or ethnic origin, or religion. It targets various groups, including people of African descent, indigenous peoples, migrants, Asian individuals, Jewish communities, Muslim communities, oppressed castes, Roma persons, and Palestinian persons. Online racist hate speech is often interconnected with other forms of hate speech based on gender, LGBTI status, and disability. It occurs in the context of online communication and the sharing of materials on digital platforms, including racist imagery, racially discriminatory comments, and the dissemination of racist conspiracy theories. It often dehumanizes people and scapegoats them for societal problems. Manifestations of online racist hate speech can occur in various online spaces, including social media, chat forums, and online gaming environments.
Online racist hate speech, disinformation and misinformation
Online racist hate speech is linked to the spread of misinformation and disinformation. The Special Rapporteur acknowledges the lack of definitions for these concepts within international human rights law and the challenges in defining them. Disinformation is generally understood as false information disseminated deliberately to cause serious social harm, while misinformation is false information spread unknowingly. However, the Special Rapporteur emphasizes the nexus between online racist hate speech and disinformation, particularly in online contexts, which facilitate rapid dissemination.
The rapid spread and vast reach
The rapid spread of online racist hate speech is largely due to the ease and widespread nature of sharing materials on digital and social media platforms. The anonymity of those sharing it can facilitate the spread of such content, which is often superficial and difficult for users to fact-check. Content-shaping algorithms, which collect large amounts of user data and monetize it for advertisers, prioritize the dissemination of material that generates high engagement, perpetuating harmful beliefs and narratives. These algorithms also create social media “echo chambers” in which users are shown material that reinforces pre-existing views and beliefs, deepening harmful racial stereotypes and spreading hate speech. These “echo chambers” limit exposure to counter-speech that could challenge harmful beliefs and narratives.
Multiple actors and motivations
Online racist hate speech involves a complex interplay of actors motivated by racist, ethno-nationalist, and xenophobic ideologies. It is often disseminated through digital platforms, where individuals and groups can organize and recruit new members, furthering the dissemination of hate speech. The Special Rapporteur reports that digital platforms have given such groups effective means of spreading messages, organizing events, and raising money. Online racist hate speech originates not only from ideological motivations but also from actors seeking to exploit societal divisions and fear for personal and political enrichment. Prominent politicians and academics have used online platforms to express racist and xenophobic sentiments, aiming to gain political capital.
“Online racist hate speech can be used to target those who run for office and/or express dissenting views, such as academics and human rights defenders fighting racism and racial discrimination, including those working directly on combating online racist hate speech. Targeted online hate campaigns against such figures can discredit them and have a chilling effect on others due to fear of similar treatment, thereby protecting existing political power structures, which often exclude those from racial and ethnic groups.”
Real-life consequences
Online racist hate speech can have severe consequences, including incitement to discrimination, hostility, or violence, as defined in international human rights standards. Incitement can be express or implied, conveyed through words or through actions such as the display of racist symbols or the distribution of materials. One example is the sustained demonization of the Rohingya ethnic group in Myanmar on Facebook, which led to horrific humanitarian consequences. Even where online hate speech does not amount to incitement to discrimination, it can still influence hate crimes. Digital platforms can facilitate the global transmission of harmful stereotypes and propaganda, potentially making violence against targeted groups more acceptable. The negative impact on individuals and groups targeted by online racist hate speech is significant, including chronic stress, decreased self-esteem, lower academic or professional performance, and increased rates of alcohol and drug use. The Special Rapporteur is concerned about the online targeting of children and young people from racial and ethnic groups, including in the context of bullying.
Online racist hate speech and international human rights standards
Article 4 of the Convention on the Elimination of All Forms of Racial Discrimination is the central provision outlining the obligation of States to address incitement to racial discrimination. It requires immediate and positive measures to eradicate all incitement to racial discrimination, including acts of violence or incitement to such acts against any race or group of persons of another color or ethnic origin. States parties must declare punishable by law the dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, acts of violence or incitement to such acts against any race or group of persons of another color or ethnic origin, and the provision of any assistance to racist activities, including the financing thereof. The Committee on the Elimination of Racial Discrimination’s general recommendation No. 35 emphasizes the importance of effective implementation of these legal provisions, typically achieved through the investigation of offences and the prosecution of offenders. The Durban Declaration and Programme of Action reinforces the relevant provisions of the Convention, affirming that universal adherence to and full implementation of the Convention are essential for promoting equality and non-discrimination. It also recognizes the rapidly evolving phenomenon of hate speech and the dissemination of racist material through new information and communications technologies and calls for a prompt and coordinated international response.
The International Covenant on Civil and Political Rights outlines the responsibilities of States parties to prevent and address online racist hate speech. Article 20 (2) prohibits any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. Article 2 (1) and article 26 of the Covenant provide a general right to equality, a prohibition of discrimination, and an obligation to take positive measures against discrimination. Article 19 protects the right to freedom of opinion and expression; the Special Rapporteur rejects the notion that preventing and addressing online racist hate speech and protecting freedom of expression is a zero-sum game. Equality, non-discrimination, and fundamental freedoms are cornerstones of international human rights law, democratic governance, and the rule of law.
Article 19 (3) requires any restrictions on the right to be provided by law, pursue a legitimate aim, and be necessary and proportionate in scope. The Human Rights Committee’s general comment No. 31 and the Special Rapporteur on freedom of opinion and expression have reinforced that measures restricting freedom of opinion and expression must meet the criteria of legality, necessity, proportionality, and legitimacy. Article 18 protects the right to freedom of thought, conscience, religion, or belief, subject only to limitations prescribed by law and necessary to protect public safety, order, health, morals, or the fundamental rights and freedoms of others.
Deep societal drivers of online racist hate speech
Online racist hate speech is a growing issue that is not isolated but is shaped by various societal trends. These include economic inequality, political dissatisfaction, the decline of counter-speech, the rise of digital platforms, declining trust in public institutions, and weaknesses in public information systems. These trends interact with societal problems of racism and racial discrimination, which are often rooted in colonialism and slavery. Systemic racism, which involves a complex system of laws, policies, practices, and attitudes, can lead to discrimination based on race, color, descent, or national or ethnic origin. Digital technologies, governed in a “race-neutral” or “color-blind” manner, can compound existing societal inequities, making racial and ethnic groups particularly vulnerable to online racist hate speech. The phenomenon harms society as a whole and contributes to the degradation of the social fabric. Addressing these issues is challenging given the complexity and depth of global crises and their bidirectional relationship with online racist hate speech. Efforts by States and stakeholders to prevent and address online racist hate speech without considering these contextual drivers are less likely to be effective.
Conclusion
The Report concludes that online racist hate speech is a global issue causing serious consequences for racial and ethnic groups. It creates a climate of racial hatred, erodes the social fabric of communities, and undermines human rights and democratic norms. Multistakeholder approaches, grounded in international human rights norms, are urgently needed to prevent and address this issue. States, companies, civil society organizations, national human rights institutions, and individuals all play a crucial role in this process. The Special Rapporteur recommends measures for effective prevention and redress.
However, whatever the means of propagation of hate speech, online or offline, the enactment by States of special laws criminalizing online activity should be seen as reactionary: such laws pose a serious danger to online expression and lend themselves to abuse against critical journalists, social media activists and political opponents. The idea of criminalizing online hate speech as separate from offline hate speech is misconceived.
[Featured image: Dan Priyasad, an ultra-nationalist Sinhala racist infamous for his provocative racist hate speech]