Real-world events spark online hate against unrelated groups, according to a study

Photograph: Michael Kemp/Alamy

Real-world events such as assassinations and political protests can trigger an increase in online hate speech directed against seemingly unrelated groups. The finding could help online moderators better predict when hateful content is most likely to be posted and what they should look out for, say the researchers.

Previous research had linked offline events to subsequent spikes in hate speech and violent hate crimes, but those studies largely focused on moderated platforms such as Twitter and Facebook, which have policies in place to identify and remove this type of content.

To better understand the drivers of online hate and the relationship between mainstream and less moderated platforms, Prof Yonatan Lupu of George Washington University in Washington DC and his colleagues used a machine learning tool to examine conversations posted between June 2019 and December 2020 by users of 1,150 online hate communities. Some of these communities were on Facebook, Instagram and VKontakte. Others were on the less moderated platforms Gab, Telegram and 4chan.

The study, which was published in PLOS ONE, found that offline events such as elections, assassinations and protests could trigger huge spikes in hate speech activity online.

There was often a direct relationship between an event and the type of hate content it triggered, but not always. The assassination of the Iranian general Qassem Suleimani in early 2020, for example, prompted an increase in Islamophobic and anti-Semitic content in the following days.

The largest spike in hate speech was linked to the murder of George Floyd and the Black Lives Matter protests it triggered. Race-related hate speech increased by 250% after these events, but there was also a more general wave of hate online.

“An interesting thing about this particular event is that the increase [in race-related hate speech] lasted,” Lupu said. “Even as late as 2022, the frequency with which people use racist hate speech about these communities has not returned to what it was before George Floyd was murdered.

“The other interesting thing is that it also seemed to trigger various other forms of hate speech online, where the connection to what’s happening offline isn’t as clear.”

For example, hate speech targeting gender identity and sexual orientation – a topic with no intuitive connection to the murder and protests – increased by 75%. Gender-based and anti-Semitic hate speech also increased, as did content related to nationalism and ethnicity.

The research was unable to prove causation, but its findings suggest a more complex relationship between triggering events and online hate speech than previously assumed.

One factor may be the extent of media coverage of the events in question. “Both the volume and variety of online reactions to offline events depend, in part, on the prominence of those events in other media,” Lupu said.

He suspects, however, that this is not the only factor. “We can’t say for sure, but I think there’s something about the way hate is constructed right now in English-speaking societies, such that racism is at its core. When racism kicks in – if it kicks in hard enough – then it goes spewing in all directions.”

Catriona Scholes, director of insight at anti-extremism tech firm Moonshot, said she has noticed a similar pattern related to anti-Semitic hate speech.

For example, protests against a planned drag storytime event in Columbus, Ohio in December resulted in an increase in anti-LGBTQ+ hatred as well as increased threats and hostility towards the Jewish community.

“There is potential to leverage this type of data to go from being reactive, to being proactive in protecting individuals and communities,” Scholes said.

Lupu said content moderation teams on mainstream platforms should monitor fringe platforms for emerging trends. “What happens on 4chan doesn’t stay on 4chan. If they’re talking about something on 4chan, it’s going to come up on Facebook.” He also suggested that content moderation teams should think about what is happening in the news, and what it could be triggering, to prepare their response.

An especially important question for future research is what other kinds of offline events might be followed by large and indiscriminate cascades of online hate, the researchers said.
