
https://twitter.com/onekade/status/880071958548828161

What kind of corporate culture would train employees to answer “C” to the question above? No, it’s not Uber, the LAPD, or the Trump administration. It’s Facebook. The social media giant created a complex algorithm to fairly manage the massive amount of content its 2 billion users post each day.

Facebook needed a formula that could quickly differentiate appropriate self-expression from harmful hate speech. The guidelines are used to decide which posts get censored and which users get suspended. But a report from Julia Angwin of ProPublica, which included internal documents like the training question linked above, has revealed some of the serious biases programmed into Facebook’s rules, biases that lead to absurd censorship decisions like the one described in the report’s opening paragraphs:

In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

At first, the difference between how moderators handled Rep. Higgins’ and Delgado’s posts seems as unfair as the difference between how police officers respond to boys who look like Tamir Rice and men who look like Dylann Roof. A rulebook whose correct answer to the “who do we protect” question is symbolized by a photo of the Backstreet Boys makes as little sense in the digital world as it does in the physical one. Until you remember that Facebook was founded in America, where Rice was murdered on sight at age 12 for playing with a toy gun while Roof was peacefully detained and fed Burger King hours after murdering nine Black people in a church.

According to Angwin’s report, Facebook’s “global hate speech algorithm” sorts the groups a post attacks into “protected” and “unprotected” categories. “Content reviewers” check posts to make sure they don’t generally offend a group in the “protected” category, which includes “sex, race, religious affiliation, ethnicity, national origin, sexual orientation, gender identity or serious disability or disease.” Violators are at the mercy of moderators, who can delete posts and suspend accounts at their own discretion.

At the same time, Facebook’s reviewers are trained to allow posts that target groups defined by “social class, occupation, continental origin, political ideology, appearance, religion, age or country.” The operative rule is combinatorial: a protected category mixed with an unprotected one produces an unprotected subset. White men are protected under both the “race” and “sex” categories, so anything deemed generally offensive to white men can be flagged. But although women are protected under “sex,” “drivers” fall into the unprotected “occupation” category, so attacks on “women drivers” are fair game. The same goes for “Black children”: age is not protected, even though race is.
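To make the mechanics concrete, here is a minimal sketch of that subset logic in Python. The two category lists are copied from the report as quoted above; the function name and structure are illustrative assumptions for this article, not Facebook’s actual implementation.

```python
# Illustrative sketch of the subset rule described in the ProPublica report.
# The category lists are copied from the report's quoted slides; the function
# itself is a hypothetical reconstruction, not Facebook's code.

PROTECTED = {"sex", "race", "religious affiliation", "ethnicity",
             "national origin", "sexual orientation", "gender identity",
             "serious disability or disease"}

UNPROTECTED = {"social class", "occupation", "continental origin",
               "political ideology", "appearance", "religion", "age",
               "country"}

def is_protected(categories):
    """A target is protected only if EVERY category describing it is protected.

    Mixing in a single unprotected category (e.g. "occupation" or "age")
    strips protection from the whole subset.
    """
    unknown = set(categories) - PROTECTED - UNPROTECTED
    if unknown:
        raise ValueError(f"unrecognized categories: {unknown}")
    return all(c in PROTECTED for c in categories)

# "White men": race + sex, both protected -> protected.
print(is_protected({"race", "sex"}))        # True

# "Women drivers": sex is protected, occupation is not -> unprotected.
print(is_protected({"sex", "occupation"}))  # False

# "Black children": race is protected, age is not -> unprotected.
print(is_protected({"race", "age"}))        # False
```

Under a rule like this, specificity becomes a loophole: attaching any unprotected attribute to a protected group strips its protection, which is exactly the “be more specific” takeaway below.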

The report’s takeaway for bigots: be more specific. Like Rep. Higgins, who qualified his violent anti-Muslim post with the term “radicalized.” The President’s countless examples of cyberbullying and hate speech show he’s already hip to the game. Trump’s most recent Twitter attack on Morning Joe co-host Mika Brzezinski reeks of the same misogynist energy that inspired his famous “locker room talk” with Billy Bush, but none of it can be directly tied to one of the objective subsets of abuse Facebook has laid out in its policy. How convenient.

A month before the 2016 presidential election, the Wall Street Journal reported that founder and CEO Mark Zuckerberg made an executive decision to leave Trump’s posts about his proposed Muslim ban untouched, despite objections from employees who argued that the comments clearly violated the company’s terms of use.

Facebook’s censorship policies would obviously violate the First Amendment’s protection of free speech if the global corporation were a U.S. state instead of a website. The company’s choice to allow posts that deny the Holocaust contradicts its supposed interest in protecting users from hateful content. But as a growing business rather than a sovereign nation, Facebook isn’t required to treat its users as citizens; they are customers whose interests are prioritized according to the company’s bottom line.

A 1996 law, Section 230 of the Communications Decency Act (itself part of that year’s Telecommunications Act), frees tech companies like Facebook of responsibility for the content their users post. In the years since its passage, Google, YouTube, and Facebook have all used that federally granted immunity to toe the line of digital correctness while leveraging their own growth.

In response to ProPublica, Monika Bickert, Facebook’s head of global policy management, copped a familiar plea, blaming “the reality of having policies that apply to a global community.” She’s right about the difficulties that come with a job “where people around the world are going to have very different ideas about what is OK to share.” But the only acceptable response is working to fix the biased system, not complaining about the realities that make it necessary.

But just like the boys in blue, Facebook appears to be hiding behind its flawed system instead of accepting criticism and seeking genuine solutions. The collateral damage of its biases will seem trivial until it contributes to a mistake as tragic as Tamir Rice’s murder. Maybe then “Black children” will be deemed as worthy of protection as “white men,” in America and online. But probably not.