
Published July 28, 2022

In a 3rd test, Facebook still fails to block hate speech

The ads, which the groups submitted both in English and in Swahili, spoke of beheadings, rape and bloodshed

By Barbara Ortutay via The Associated Press

Facebook is letting violent hate speech slip through its controls in Kenya as it has in other countries, according to a new report from the nonprofit groups Global Witness and Foxglove.

It is the third such test of Facebook's ability to detect hateful language — either via artificial intelligence or human moderators — that the groups have run, and that the company has failed.

The ads, which the groups submitted both in English and in Swahili, spoke of beheadings, rape and bloodshed. They compared people to donkeys and goats. Some also included profanity and grammatical errors. The Swahili-language ads easily made it through Facebook's detection systems and were approved for publication.

As for the English ads, some were rejected at first, but only because they contained profanities and mistakes in addition to hate speech. Once the profanities were removed and grammar errors fixed, however, the ads — still calling for killings and containing obvious hate speech — went through without a hitch.

“We were surprised to see that our ads had for the first time been flagged, but they hadn’t been flagged for the much more important reasons that we expected them to be,” said Nienke Palstra, senior campaigner at London-based Global Witness.

The ads were never posted to Facebook. But the fact that they easily could have been shows that despite repeated assurances that it would do better, Facebook parent Meta still appears to regularly fail to detect hate speech and calls for violence on its platform.

Global Witness said it reached out to Meta after its ads were accepted for publication and initially believed it had received no response. On Thursday, however, the group said it had in fact received a reply from Meta earlier in July, but the message was lost in a spam folder. Meta also confirmed Thursday that it had sent a response.

“We’ve taken extensive steps to help us catch hate speech and inflammatory content in Kenya, and we’re intensifying these efforts ahead of the election. We have dedicated teams of Swahili speakers and proactive detection technology to help us remove harmful content quickly and at scale,” Meta said in a statement. “Despite these efforts, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That’s why we have teams closely monitoring the situation and addressing these errors as quickly as possible.”

Each time Global Witness has submitted ads with blatant hate speech to see if Facebook’s systems would catch it, the company has failed to do so. In Myanmar, one of the ads used a slur to refer to people of East Indian or Muslim origin and called for their killing. In Ethiopia, the ads used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia’s three main ethnic groups: the Amhara, the Oromo and the Tigrayans.

Why ads and not regular posts? That's because Meta claims to hold advertisements to an “even stricter” standard than regular, unpaid posts, according to its help center page for paid advertisements.

Meta has consistently refused to say how many content moderators it has in countries where English is not the primary language. This includes moderators in Kenya, Myanmar and other regions where material posted on the company’s platforms has been linked to real-world violence.

Kenya is readying for a national election in August. On July 20, Meta published a detailed blog post on how it is preparing for the country’s election, including establishing an “operations center” and removing harmful content.

“In the six months leading up to April 30, 2022, we took action on more than 37,000 pieces of content for violating our Hate Speech policies on Facebook and Instagram in Kenya. During that same period, we also took action on more than 42,000 pieces of content that violated our Violence & Incitement policies,” wrote Mercy Ndegwa, Meta’s public policy director for East and Horn of Africa.

Global Witness said it resubmitted two of its ads, one in English and one in Swahili, after Meta published its blog post to see if anything had changed. Once again, the ads went through.

“If you’re not catching these 20 ads, this 37,000 number that you are celebrating, that is probably the tip of the iceberg. You have to think that there’s a lot that’s (slipping through) your filter,” Palstra said.

The Global Witness report follows a separate study from June that found that Facebook has failed to catch Islamic State group and al-Shabab extremist content in posts aimed at East Africa. The region remains under threat from violent attacks as Kenya prepares to vote.

Banner image: The Facebook app is shown in the app store on a smartphone in Surfside, Fla., on April 23, 2021. According to a new report from the nonprofit groups Global Witness and Foxglove, Facebook is letting violent hate speech slip through its controls in Kenya as it has in other countries. It is the third such test of Facebook’s ability to detect hateful language, either via artificial intelligence or human moderators, that the groups have run, and that the company has failed. (AP Photo/Wilfredo Lee, File)
