The Monday news from Russia carried accounts of bloggers and small businesses “relishing” their final moments scrolling through Instagram on Sunday. March 13, 2022, was Instagram’s last day in Russia: Moscow ordered the photo- and video-sharing service to shut down after accusing its parent company, Meta, of turning a blind eye to calls for violence against Russians on its platforms. The country also opened a criminal case against the company.
The shutdown of Instagram was “wrong” and would affect 80 million Russians, Instagram head Adam Mosseri wrote on Twitter.
On Friday, March 11, Moscow announced it would block Instagram in the country, a day after Meta said it would allow calls for violence against Vladimir Putin and Russian soldiers involved in the military invasion of Ukraine on its social media platforms.
In an interview, Meta called the measure temporary and designed to preserve voice and expression for people who are facing invasion. “As always, we are prohibiting calls for violence against Russians outside of the narrow context of the current invasion,” said a Meta spokesperson.
On Monday, the company tightened its content moderation policy to restrict calls for the death of a head of state, according to Reuters.
Hate speech is not just a bone of contention between Russia and Meta; alongside fake news, cyber harassment and misuse of data, it is a challenge for social media companies globally. Hate speech affects every component of modern society, from governments and communities to companies and individuals. As laws have grown more stringent over the years, platforms like Facebook, Instagram, YouTube and Twitter have developed automated detection tools and increasingly report on flagged and/or removed content.
According to a UNESCO report, Addressing hate speech on social media: Contemporary challenges, between January and March 2021, YouTube removed 85,247 videos that violated their hate speech policy. Its two previous reports show similar figures. For the same quarter, Facebook reported a total of 25.2 million pieces of content actioned, whilst Instagram reported 6.3 million pieces of content. According to Twitter’s last transparency report, the company removed 1,628,281 pieces of content deemed to violate their hate speech policy between July and December 2020.
Even so, many social media posts have led to arrests in India.
In January 2022, a human rights activist and former militant, Ahsan Untoo, was arrested in Srinagar, for “inciting violence through social media platforms”. Untoo, a resident of Kupwara district, is chairman of the International Forum for Justice and Human Rights Jammu and Kashmir.
In February 2022, former Supreme Court judge Justice Madan Lokur expressed a strong opinion on rampant hate speech in India and criticised the government for not doing enough, saying that some members of the government actively take part in hate speech.
Hate Speech on Social Media: The India Case
In March 2018, the Observer Research Foundation (ORF) released a study based on a statistical mapping of hate speech and counter-speech on social media pages in India. Beginning July 2016, the study collected data from public pages in two separate month-long time periods spread over 12 months. The research revealed that religion and ‘religion-cultural’ practices related to food and dress were the most explicit basis for hate as expressed in Indian social media. The share of such cases increased from 19 to 30 percent over the one-year timeframe of the study. Most comments inciting bodily harm or violence were directed against Muslims, with subjects ranging from opposition to interfaith marriage between Hindus and Muslims to positions on universal human rights and issues like cow protection and beef consumption. The indicative study showed that a growing segment of users employs social media platforms precisely to provoke violence.
Such cases are global. Gavin McInnes, a leader of the “alt-right” white nationalist movement, punched a protester at Donald Trump’s inaugural events and then bragged about it online. His Twitter account is now suspended.
With no respite in sight from such cases, it is important for Indian society to move towards stringent laws and better algorithms as the number of social media users in India rises. Easy internet access has already pushed up that number: it stood at 518 million in 2020 and is expected to reach almost 1.5 billion by 2040. Facebook remained the most popular social media platform as of 2020.
The increasing cases of hate speech reported across the globe have pushed governments and civil society organisations (CSOs) to demand accountability from the social media companies that allow such comments. For about two years, Indian and international rights organisations have waited for Meta to release its Human Rights Impact Assessment (HRIA) report on India.
On January 19, 2022, a consortium of 25 CSOs published a letter sent to Facebook on January 3, 2022, asking about the overdue release of the HRIA report and the company’s inaction on serious concerns about its human rights record in India. The same day, the Real Facebook Oversight Board, an emergency-response group of global scholars, experts and advocates addressing ongoing harms on Meta’s platforms, together with India Civil Watch International and Citizens for Justice and Peace (CJP), called a virtual press briefing. CJP Secretary Teesta Setalvad, former Delhi Minorities Commission chairman Dr Zafarul Khan, former Facebook vice president Brian Boland and whistle-blowers Sophie Zhang and Frances Haugen were the speakers. The host of the briefing, Kyle Taylor, called the company’s role in human rights issues and unrest in countries around the world deeply troubling. “India is at the top of that list, where the use of Facebook’s [Meta’s] platforms including WhatsApp have sparked a great amount of violence,” he said.
In her address, Setalvad underlined the hate against minority groups, as witnessed in cases of the call for communal genocide in December 2021, attacks on Christians around Christmas and the auction of Muslim women on social media apps. Citing many interactions between CJP and Facebook India in the last three years, she said the organisation received a lukewarm response to the complaints made. “Within this overall scenario, Facebook India as a platform is playing a dangerous role. It has a vast clientele of over 460 million users in English and 22 Indian languages and allows unchecked inflammatory content that has become an instrument for targeting minorities, Dalits and women,” she added.
The whistle-blowers and CSOs shared that Meta commissioned the law firm Foley Hoag to conduct the HRIA in 2020 but did not release the findings. Moreover, a recent article accused the company of “stifling” the publication of the report. The letter said that while a lot of this type of content “violates community standards,” it is still not removed from the site, “partly because Facebook has not allocated nearly enough resources to tackling the problem.”
Human Rights Impact Assessments
In a series of three articles, researchers at Ikigai Law have tried to capture the significance of human rights due diligence processes (HRDDs) and impact assessments for technology companies with vast reach. One of the articles notes that Facebook’s HRIA report on Sri Lanka records rampant incidents of gender-based hate speech, cyber-bullying and fake news. The investigation revealed that the incidents may have been promoted by Facebook’s previous algorithms, which were designed to drive engagement without analysing authenticity or intent. Such human rights violations can pose business, financial, legal and reputational risks for tech companies, the researchers added.
A Human Rights Impact Assessment helps address these risks.
An evidence-based process that studies how businesses affect the local community, workers and consumers, an HRIA can reveal less obvious human rights impacts that a company may have had, say Ikigai Law researchers. A prime example is the set of HRIAs Facebook undertook in Indonesia, Sri Lanka, Cambodia and Myanmar to understand the platform’s influence and role in those countries. The studies aimed to identify whether the company motivated or facilitated any harm and whether it failed to conduct human rights due diligence. Similarly, Vodafone’s HRIA identified human rights “hot spots” – the risks and opportunities in areas of human rights relevant to its business, such as labour, civil and political, and cultural rights – that the company should be watchful about.
HRIAs help businesses review their approach to human rights and refresh corporate governance systems. For instance, following the Sri Lankan HRIA, Facebook put in place many measures, such as appointing a policy manager in the country, for improved corporate accountability on human rights. In the case of Vodafone, the HRIA encouraged the company to strengthen its HRDDs for entering new markets, whether by itself or through business partners.
Collective Efforts Needed
UN Secretary-General António Guterres in September 2021 blamed growing polarization and intolerance for fuelling a rise in hate speech worldwide, and suggested education to counter this “global firestorm”.
In a 10-page paper released last year, UNESCO suggested ways to overcome hate speech, such as automated detection tools, extra-legal responses and preventive interventions. Today, actions taken by social media platforms include removing material judged to be hate speech, warning users who post hateful speech, restricting their activity on the platform, or banning them. These community standards are constantly evolving, particularly in how much they rely on automated versus human moderation methods, says the UN cultural agency.
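The escalation ladder described above – remove the material, warn the user, restrict their activity, then ban them – can be pictured as a simple state machine. The sketch below is a hypothetical illustration of that idea only, not any platform’s actual system; the class, action names and thresholds are all invented for the example:

```python
from dataclasses import dataclass, field

# Hypothetical escalation ladder mirroring the actions described above:
# remove and warn -> remove and restrict -> remove and ban.
ACTIONS = ["remove_and_warn", "remove_and_restrict", "remove_and_ban"]

@dataclass
class Moderator:
    """Tracks per-user violations and escalates the response each time."""
    violations: dict = field(default_factory=dict)  # user -> violation count

    def handle_flag(self, user: str, is_hate_speech: bool) -> str:
        """Return the action taken for one flagged post."""
        if not is_hate_speech:
            return "no_action"
        count = self.violations.get(user, 0) + 1
        self.violations[user] = count
        # Escalate with repeat offenses, capping at a ban.
        return ACTIONS[min(count - 1, len(ACTIONS) - 1)]

mod = Moderator()
print(mod.handle_flag("alice", True))   # first offense: remove_and_warn
print(mod.handle_flag("alice", True))   # second offense: remove_and_restrict
print(mod.handle_flag("alice", True))   # third offense: remove_and_ban
```

In practice the “is this hate speech?” decision is the hard part, made by the automated classifiers and human moderators the UNESCO paper discusses; the state machine only captures how the response escalates once a violation is confirmed.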
A similar study, published in the journal Proceedings of the ACM on Human-Computer Interaction, said that banning right-wing extremists from social media can reduce the spread of anti-social ideas and conspiracy theories. “Removing someone from a platform is an extreme step that should not be taken lightly,” said lead author Shagun Jhaver, an assistant professor in the Department of Library and Information Science at Rutgers-New Brunswick. “However, platforms have rules for appropriate behaviour, and when a site member breaks those rules repeatedly, the platform needs to take action. The toxicity created by influencers and their supporters who promote offensive speech can also silence and harm vulnerable user groups, making it crucial for platforms to attend to such influencers’ activities.”
Another way to beat hate speech is empathy-based counter-speech, which researchers, based on a social media field experiment, say, can reduce racist hate speech. Unlike content moderation, counter-speech does not seek to suppress free expression but instead promises to reduce hate through the persuasion of the perpetrator.
While government-led institutions and technology companies are building better systems to beat hate speech, a major part of the responsibility lies with the users who post, share and comment on such content.
Individually, we must hold the platforms accountable for hate speech: report tweets, YouTube videos, Instagram posts, Facebook posts and other hateful content to the platforms. One must also raise awareness of the problem by talking to friends and family about why hate speech is a problem not only for the internet but for society and culture in general.
An important way to curb hate speech is to support its targets: counter harmful messages in public spaces and show solidarity with the victims. Being tolerant and amplifying positive messages of tolerance also help. Last but not least, notifying organisations that fight hate whenever you come across a case, and tracking and countering hate, are important steps towards eliminating the practice completely.