Facebook might be banned in Kenya if it doesn’t eliminate hate speech

The National Cohesion and Integration Commission (NCIC), Kenya’s ethnic cohesion watchdog, has ordered Facebook to curb the spread of hate speech on its platform within seven days or face suspension in the East African nation.

The order follows a study by Global Witness and Foxglove, a legal non-profit, accusing Facebook of failing to detect hate speech in advertisements. The findings come in the lead-up to the country’s general elections.

According to the Global Witness study, Facebook’s parent company Meta failed to remove or block inflammatory content that exacerbated an already tense political climate. The NCIC has now given Meta one week to step up moderation before, during, and after the upcoming elections; if the company does not comply, it will be banned from the country.

“Facebook is breaking our country’s laws,” NCIC commissioner Danvas Makori remarked. “They have allowed themselves to be a vector of hate speech and provocation, misinformation and deception.”

Global Witness and Foxglove have also urged Meta to adopt the “break glass” tactics it used during the 2020 U.S. elections, such as halting political advertisements, to stop disinformation and civil unrest from erupting online.

Facebook’s artificial intelligence algorithms are unable to identify threats of violence.

To test Facebook’s claim that its AI algorithms can identify hate speech, Global Witness uploaded 20 advertisements in English and Swahili calling for violence and beheadings. All but one of the ads were approved. According to the human rights organisation, ads are subject to a more rigorous review and approval process than ordinary posts, and Facebook’s team could also have taken the ads down before they went live.

“The advertisements we uploaded all violate Facebook’s community standards, qualifying as hate speech and ethnic-based calls to violence,” Global Witness said in a statement. “Much of the speech was demeaning, equating certain tribes to animals and asking for rape, killing, and decapitation.”

In light of the results, Global Witness’s Ava Lee said, “Facebook has the power to make or break democracies, yet we’ve seen the company choose money over people time and time again.”

“Despite claiming to have improved its protocols and increased its resources ahead of the Kenyan election, we were dismayed to see it was still approving overt calls for ethnic violence. This is not a one-off. In the past several months we have also seen similar failures in Myanmar and Ethiopia. Facebook’s inaction around the Kenyan election, and other upcoming elections across the globe from Brazil to the US midterms, could have disastrous consequences.”

Global Witness wants Facebook to step up content moderation, among other things.

In response, the social media giant says it is investing in people and technology to combat misinformation and harmful content.

According to a statement, the company has hired content reviewers working in more than 70 languages, including Swahili. It reported removing more than 37,000 pieces of content from Facebook and Instagram for violating its hate speech policies, and another 42,000 for promoting violence and incitement.

Meta said it is also working closely with civic stakeholders, such as election commissions and civil society groups, to examine “how Facebook and Instagram can be a good instrument for civic involvement and the actions they can take to be secure while using our platforms.”

Other social media platforms, such as Twitter and TikTok, have similarly come under fire for not taking a more aggressive role in policing content and preventing the spread of hate speech.

Just last month, a study by the Mozilla Foundation found TikTok to be a source of misinformation in Kenya. Mozilla reached this conclusion after examining more than 130 highly viewed videos containing hate speech, political misinformation, and incitement, in violation of TikTok’s own policies against hate speech.

Mozilla also found that TikTok’s content moderators were unfamiliar with the country’s political context, which allowed misinformation to spread on the platform.

A growing number of lawmakers and citizens alike are calling on social media networks to impose stronger controls in the lead-up to the August 9 elections.