A Muslim advocacy group just sued Facebook for failing to remove hate speech, and it's the latest example of tech's patchwork policies that fail to crack down on Islamophobia (FB)

Mark Zuckerberg and Sheryl Sandberg. Drew Angerer/Getty Images; The Asahi Shimbun/Getty Images


A Muslim advocacy group this week sued Facebook for failing to curtail hate speech, the latest sign of the tech industry's broader struggle to stop Islamophobic speech.

Civil rights group Muslim Advocates filed a suit against Facebook and four company executives in the District of Columbia Superior Court for lying to Congress about moderating hate speech.

Facebook executives have told Congress of their commitment to removing content that violates their policies, including COO Sheryl Sandberg's assertion to the Senate Intelligence Committee during a hearing on Facebook and foreign influence that "when content violates our policies, we will take it down."

Yet Muslim Advocates said it presented Facebook with a list of 26 groups that spread anti-Muslim hate in 2017, and 19 of them are still active.


The suit claims Facebook allowed a man threatening to kill Congresswoman Ilhan Omar to post "violent and racist content for years," and that the company failed to remove a group called "Death to Murdering Islamic Muslim Cult Members" even after Elon University Professor Megan Squire brought it to Facebook's attention.

"We do not allow hate speech on Facebook and regularly work with experts, non-profits, and stakeholders to help make sure Facebook is a safe place for everyone, recognizing anti-Muslim rhetoric can take different forms," a Facebook spokesperson said in a statement to Insider. "We have invested in AI technologies to take down hate speech, and we proactively detect 97 percent of what we remove."

In 2018, Facebook CEO Mark Zuckerberg testified to Congress that the platform can fail to police hate speech because of the limits of its artificial intelligence. Hate speech carries nuance that can be tricky for AI to identify and remove, especially across different languages, Zuckerberg said.

Zuckerberg once again addressed questions about moderation and automation at a March 2021 congressional hearing about misinformation. His testimony about how content moderation needs to take into consideration "nuances," like when advocates make counterarguments against hateful hashtags, seemed at odds with Facebook's admitted reliance on AI to do the job of moderating hate speech.

Peter Romer-Friedman, a principal at Gupta Wessler PLLC who helped file the suit and a former counsel to Sen. Edward M. Kennedy, said Congress cannot adequately oversee corporations that misrepresent facts to lawmakers.

Romer-Friedman said Facebook's failure to remove a group that claimed "Islam is a disease" — a direct violation of the company's hate speech policies, which prohibit "dehumanizing speech including...reference or comparison to filth, bacteria, disease, or feces" — is one example of the company failing to follow through on its promise to Congress to quell hate speech.

"It's become all too common for corporate execs to come to Washington and not tell the truth, and that harms the ability of Congress to understand the problem and fairly regulate businesses that are inherently unsafe," Romer-Friedman told Insider.

How Facebook and other tech firms are failing to address anti-Muslim hate speech

The suit highlights tech firms' ongoing problem responding to anti-Muslim content online.

Rep. Omar, the first member of Congress to wear a hijab, or Muslim headscarf, has called on Twitter to address the death threats she receives. "Yo @Twitter this is unacceptable!" she said in 2019.

The Social Science Research Council analyzed more than 100,000 tweets directed at Muslim candidates running for office in 2018 and found Twitter was responsible "for the spread of images and words from a small number of influential voices to a national and international audience," per The Washington Post.

The spread of anti-Muslim content extends far beyond Facebook and Twitter:

  • TikTok apologized to a 17-year-old user for suspending her account after she posted a video condemning China's mass crackdown on Uighur Muslims. 
  • VICE reported that Muslim prayer apps like Muslim Pro had been selling users' location data to the US military.
  • Instagram banned Laura Loomer, a "proud Islamophobe," years after Uber and Lyft banned her for a series of anti-Muslim tweets following a terror attack in New York.

Sanaa Ansari, a staff attorney with Muslim Advocates, said there has been "clear evidence" of incitement to violence against Muslims, potentially fueled by unchecked hate speech on social media. In 2019, a 16-minute livestream of a gunman attacking two mosques and killing 51 people in New Zealand was uploaded to Facebook and spread quickly to YouTube, Instagram, and Twitter.

"There have been multiple calls to arms to Muslims, there have been organized events by anti-Muslim supremacists and militias who have organized marches, protests at mosques in this country," Ansari told Insider in an interview. "And that's just the tip of the iceberg."
