Personalized Warnings on Twitter Could Help to Curb Hate Speech – Researchers

A series of well-worded warnings sent to the right accounts could help reduce the amount of hate speech on Twitter.

This is the conclusion of recent research looking into whether tailored warnings on the platform could reduce hate speech.

The Center for Social Media and Politics at New York University found that personalized alerts warning Twitter users of the consequences of their behavior reduced the number of hateful tweets they posted a week later. According to Mustafa Mikdat Yildirim, the paper’s lead author, further research is needed, but the experiment offers a “possible road forward for platforms wanting to limit the use of hostile language by users.”


The researchers began by identifying accounts at risk of being suspended for violating Twitter’s hate speech policies. They looked for users who had tweeted at least one term from established “hateful language dictionaries” in the previous week and who also followed at least one account that had recently been suspended for using such language.
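To make that screening step concrete, here is a minimal Python sketch of the two-part filter; the dictionary terms, account names, and data structures are hypothetical stand-ins, since the study’s actual data pipeline is not published in this article.

```python
from datetime import datetime, timedelta

# Hypothetical inputs: these terms, handles, and data layouts are illustrative
# assumptions, not the researchers' actual dictionaries or pipeline.
HATEFUL_TERMS = {"example_term_1", "example_term_2"}           # stand-in for hateful-language dictionaries
SUSPENDED_ACCOUNTS = {"suspended_user_a", "suspended_user_b"}  # accounts recently suspended for such language


def is_candidate(user_tweets, user_follows, now=None):
    """Return True if a user matches both screening criteria described above:
    (1) tweeted at least one dictionary term in the past week, and
    (2) follows at least one recently suspended account."""
    now = now or datetime.utcnow()
    week_ago = now - timedelta(days=7)

    used_hateful_term = any(
        tweet["created_at"] >= week_ago
        and any(term in tweet["text"].lower() for term in HATEFUL_TERMS)
        for tweet in user_tweets
    )
    follows_suspended = bool(set(user_follows) & SUSPENDED_ACCOUNTS)
    return used_hateful_term and follows_suspended


# Toy usage: a user who tweeted a dictionary term and follows a suspended account.
tweets = [{"created_at": datetime.utcnow(), "text": "this contains example_term_1"}]
print(is_candidate(tweets, ["suspended_user_a"]))  # True
```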

The researchers then created test accounts with personas such as “hate speech warner” and used them to tweet warnings at these people. They tried several different messages, but all conveyed the same point: that using hate speech could lead to suspension, and that this had already happened to someone the recipient follows.

One sample message published in the report reads: “The user @account you follow was suspended, and I think this was because of harsh language. If you continue to use hate speech, you may be temporarily suspended.” In another variant, the account issuing the warning identifies itself as a professional researcher while also informing the recipient that they risk suspension. “We tried to be as credible and persuasive as possible,” Yildirim tells Engadget.
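As a rough illustration of how such warning variants could be templated and tweeted, here is a hedged Python sketch using the tweepy library; the handles, credentials, and exact wording are illustrative assumptions rather than the researchers’ actual tooling.

```python
import tweepy

# Warning templates loosely paraphrasing the two variants described in the report.
# The researchers' exact wording may differ; these strings are illustrative only.
STANDARD_WARNING = (
    "The user @{suspended} you follow was suspended, and I think this was "
    "because of harsh language. If you continue to use hate speech, you may "
    "be temporarily suspended."
)
POLITE_WARNING = (
    "We recognize your right to free expression, but keep in mind that your "
    "hate speech may hurt others. The user @{suspended} you follow was "
    "recently suspended for such language."
)


def send_warning(client, target_handle, suspended_handle, polite=False):
    """Tweet one of the two warning variants at a target account."""
    template = POLITE_WARNING if polite else STANDARD_WARNING
    text = f"@{target_handle} " + template.format(suspended=suspended_handle)
    return client.create_tweet(text=text)  # Twitter API v2 via tweepy.Client


# Placeholder credentials: posting requires a developer account with write access.
client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)
# send_warning(client, "example_target", "suspended_user_a", polite=True)
```

The gentler template mirrors the “more courteous” phrasing Yildirim describes below, which the study found to be the more effective of the two.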


The researchers found that the warnings worked, at least in the short term. “Our findings demonstrate that a single warning tweet sent by a user with less than 100 followers can reduce the proportion of tweets with hostile language by up to 10%,” the authors write. Interestingly, messages that were “more gently phrased” produced even larger drops, of up to 20%. “We attempted to make our message more courteous by basically beginning our warning by stating, ‘well, we recognize your right to free expression, but keep in mind that your hate speech may injure others,'” Yildirim adds.

Yildirim and his co-authors note in the paper that their test accounts had only about 100 followers each and were not linked to any authoritative body. If the same kind of warnings came from Twitter itself, or from a non-governmental or other established organization, they might be even more effective. “What we found from this experiment is that the real mechanism at work could be that we actually let these folks know that there’s some account, or some entity, watching and monitoring their conduct,” says Yildirim. “The fact that their hate speech was observed by others may have been the most crucial reason in these people’s decision to reduce their use of hate speech.”

