by John Parker


University of Michigan associate professor Libbey Hemphill recently urged social media platforms to “extend beyond civility” in their hate speech moderation efforts.

In her article “To Truly Target Hate Speech, Moderation Must Extend Beyond Civility,” Hemphill holds up a machine learning program she co-created as a better way to detect hate speech.

“Platforms claim content moderation at scale is too difficult and expensive,” Hemphill writes, “but our team detected white supremacist speech with affordable tools available to most researchers, much less expensive than those available to platforms.”

Hemphill is the associate director of the Center for Social Media Responsibility at the University of Michigan School of Information.

The machine learning technology created by Hemphill and her team detects language on social media sites that may be associated with white supremacy.

“We set out to teach algorithms to distinguish white supremacist speech from general speech on social media,” she explained.

After testing the system on social media platforms, including Stormfront, Reddit, and Twitter, Hemphill concluded that platforms should monitor for broader trends and topics rather than specific terms.

“White supremacists talked frequently about white decline, conspiracy theories about Jews and Jewish power and pro-Trump messaging,” Hemphill said. “The specific topics they discussed changed, but these broader grievances did not. Automated detection systems should look for these themes rather than specific terms.”

According to Hemphill, current social media standards for moderating hate speech are inadequate. She argued that they fail to detect all speech that could be labeled as harassment.

In her words, social media platforms should prioritize “justice and equity” over “politeness,” arguing, “Prioritizing civility online has not only allowed civil but hateful speech to thrive and it normalizes white supremacy.”

“Civility be damned,” she concluded.

She further explained that white supremacists on social media “weaponize civility against marginalized groups.” In her terms, white supremacists refrain from using “profane” language and instead express hate by excluding marginalized groups.

One example she gave was prefixing the word “white” to nouns.

“White supremacists, for example, frequently center their whiteness by appending white to many terms (white children, white women, the white race),” she said.

Hemphill emphasized that moderating online content is important to maintaining safe spaces for users.

“Once white supremacists enter online spaces — as with offline ones — they threaten the safety of already marginalized groups and their ability to participate in public life,” she said.

The article was published shortly after Tesla CEO Elon Musk offered to buy Twitter, a platform known for political discord that quickly became the center of a free speech controversy.

Notably, Twitter removed Donald Trump’s account from its platform and censored information regarding Hunter Biden’s laptop during the 2020 election.

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” Musk said.

The university’s Center for Social Media Responsibility was formed in 2018 by a former Obama staffer.

Former Executive Director and current Michigan Lieutenant Governor Garlin Gilchrist II said at the time that the Center’s responsibility is to “deal with [the] ongoing threat of more difficult-to-understand and potential misinformation.”

– – –

John Parker is a New York Campus Correspondent at
Photo “Associate Professor Libbey Hemphill” by University of Michigan. Photo “University of Michigan” by University of Michigan.
