Managing Hate Speech in Gaming Media

Hate speech no longer affects a closed audience - its toxicity is spilling over to the masses, with implications for game companies’ revenue and reputation.

Written by Ina Liukkonen, Linguist and Project Manager at BrandBastion

 



 

Hate speech and gaming

In 2014, a controversial and misogynistic harassment campaign took place, targeting several women in the video game industry, including game developers Zoë Quinn and Brianna Wu. Dubbed #gamergate, the hashtag went viral and the toxicity surrounding the controversy entered the mainstream consciousness for the first time.

Gamergate was seen as a pushback against the increasing influence of feminism on video game culture, and racked up supporters who were involved in doxxing (researching and broadcasting personally identifiable information about an individual), hacking of the targets’ social accounts, and numerous death and rape threats.

Since then, much has been discussed about Gamergate, as well as the broader phenomenon of toxicity in video game communities. Lately, multiple big game companies have made a respectable effort to curb negative behavior within their in-game communication channels.

And not a moment too soon.

 


"Just turn off voice chat like the rest of us."


 

Before we continue, let's clarify a few things: 

No, not all gamers are the problem.

Yes, many communities are perfectly civil and pleasant environments.

Yes, certain aggressive tones of communication can be expected in a competitive environment.

Yes, individuals are ultimately responsible for their own actions.

 

However, it is undeniable that hate speech, doxxing, flaming, trolling, and generally socially unacceptable communication have become so ingrained in many game communities that most users take them as a given, accepting them with an unhealthy dose of apathy. Titles such as League of Legends, Overwatch, Call of Duty, or Dota 2 are almost synonymous with bad behavior and exceedingly unpleasant experiences with fellow players.

 


"Welcome to the Internet."


 

This is a truly troubling situation, not just for the users involved, but also for game companies. By not taking serious action on hate speech, be it in-game communication or social media commentary on their pages, companies are allowing an environment of systematic abuse and discrimination to flourish.

Worse, in the absence of clear standards for acceptable communication, companies can make a significant portion of their community complicit in the toxicity. It takes a lot for an individual to fight a social issue, especially if they can expect their actions to lead to semi-systematic retaliation (Moor et al., 2010; Cote, 2017).

 


"... Cut your wrists"

"Your momma is dead, stop trying to **** her"

"You are mentally ill terrorist"

"Send nudes"

"You could bomb the office, that what you ***** good at"

"***ing ***got"

"Bunch of simple minded brown *****..."


 

These repercussions are not limited to the gaming community alone. In today’s hyper-connected world, the under- and overtones of game communities can be a powerful factor in shaping the general communication space, be it on social media, forums, or any public community on the Internet. Without systematic action against inappropriate content, these communities are essentially accepting or condoning hate speech.

 

Why hate speech is a unique challenge for the game industry

Hate speech may be everywhere, but the gaming community has lexicalized it.

At BrandBastion, we work a lot with harmful comments, including hate speech. In our definition, hate speech is aggressive communication targeted at a person or a group based on their ethnicity or identity. It directly or indirectly conveys discrimination or inferiority, or victimizes the person.

Based on this definition, we have to acknowledge that hate speech is currently an issue within gaming communities and the social media pages associated with them. It happens; it happens frequently; and most concerningly, it happens publicly.

While gaming communities are by no means the only ones that suffer from this culture, what makes them unique is the frequency and lexicalized nature of the hate speech. Additionally, it blends seamlessly with common personal attacks against other gamers. So not only are you an “awful casual” but also a “f’n faggot”, regardless of identity or orientation, who should “get off ranked games”. This behavior is characteristic of toxic game communities, and unfortunately all game communities run the risk of turning toxic. And I can assure you, this toxicity does not stop at the in-game chat window.

 

"As always, the target for this behavior is the 'other'. In the mind of every gamer who has ever engaged in toxic behavior, I am better than the 'other'."
 
In such toxic environments, outrageous and offensive comments are the norm, as is trolling. However, most players, if asked, would not necessarily describe themselves as having extreme political views, nor as being discriminatory. As a matter of fact, most probably would not consider their behavior hateful, because “they do not actually really mean it” (Hardaker, 2010; Craker and March, 2016).
 
 
They only communicate this way to vent, because it is expected, or because it is "the Internet". "Welcome to the Internet", where any and all reactions are scrutinized, and the joke is on the person who gets offended.
 
 
“...most probably would not consider their behavior hateful, because 'they do not actually really mean it'. They only communicate this way to vent, because it is expected, or because it is 'the Internet'.”
 

A smaller portion of the user base likely uses this environment of accepted aggression and its unspoken expectations as an excuse to air their views on ethnic groups, sexual minorities, or political parties. Their motivations differ from those of the first group, but it is very difficult to tell them apart from the majority based on the messages alone.

Yet regardless of their motivations, both groups are equally responsible for perpetuating and enabling hate speech in gaming communities. As always, the target for this behavior is the "other". The "other" is not only a familiar concept in sociology and linguistics; it is an apt placeholder for anything: ethnicity, gender, religion, gamer ranking, weight, or hair color… anything the toxic members of the community decide is appropriate at the time. Most importantly, the "other" is not me or us; the "other" is the opposite of me. And in the mind of every gamer who has ever engaged in toxic behavior, I am better than the "other".

Unfortunately, competitive gaming environments are especially fertile ground for "us vs them" and "me vs them" mindsets, as this is very often an integral part of the game mechanics. Combined with the stress of competitive gaming, a door is opened to abusive communication.
 
 

 
"The problem with hate speech is this: you can't unsee it afterwards."
 

 

As problematic as toxicity in in-game communication is, companies have made good efforts to curb this behavior. Perhaps best known are the feedback-based solutions from Riot Games and Blizzard's community involvement in taking down negative influencers. Feedback, and giving users the opportunity to correct their behavior, is an excellent strategy within the game setting, and there is a lot of good in enabling the community to self-monitor.

 

The challenge of moderating hate speech on social media

 On social media, you can’t just “turn the audio off”.

While Riot’s and Blizzard’s steps are an encouraging sign, it is unclear whether these methods extend far enough beyond the in-game channels to the social media platforms. Based on our experience, hate speech on social media is often motivated by very different factors than abusive communication in-game.

 

“The toxicity manifests as generalized hate speech, and is rarely an emotional knee-jerk reaction to stress factors over one’s performance.”

 

Within the in-game communication channels, much of the toxic communication is spurred by reactions to the game events, gamer performance, and other competitive features. On social media channels, this is not the case. The toxicity manifests as generalized hate speech, and is rarely an emotional knee-jerk reaction to stress factors. After all, there rarely is anything on social media channels that would compare to the in-game situation and create stress over one's performance.

When such hateful comments appear on social media, it is because these users either 1) do not understand they are engaging in hate speech, or 2) if they do, they do not wish to stop. In either case, they have no incentive to modify their behavior. This renders feedback solutions less effective. The problematic comments are either genuine hatred, or negative and hateful “placeholder” communication, filled with offensive memes and catchphrases the users have come across online, posted "just because".

Post a comment about a technical issue? Get greeted with "KYS". Ask for localization in the next patch? Be uniformly met with profane comments about your inferior nation and crude remarks about the people living there. And as mentioned earlier, heaven forbid someone actually engages with these users or publicly complains about the hateful remarks. In most situations, the user will be targeted by multiple hateful and aggressive comments, with varying degrees of personal insults. The user has made the cardinal mistake of getting offended, and others jump at the chance to dish out punishment for this infraction.

Hate speech on social media is also immediately visible to a large group of people, and apart from blocking each individual user on Facebook, Instagram, or Twitter, or unfollowing an entire page, there is not a lot an individual user can do to avoid this. On social media, you can't just "turn the audio off", or choose to play with only people you know. What you see stays with you. Invited or not.

 


"It is like entering a racist perverted sex chat room."


 

On these public channels, the hate speech often comes in two familiar varieties: discrimination based on ethnicity, and discrimination and harassment based on gender. As always, on these channels Indians should be banned, all Muslims are terrorists, and women belong in the kitchen, but only after sending nudes.

Additionally, many announcements of major patches and updates are met with threats of physical violence, ranging from ludicrous to chilling. If this is not hateful enough, there is always weight and religion to target, for users who wish to expand beyond the top two hits.

 

Why hate speech must be dealt with systematically

 Toxicity is a self-reinforcing cycle if it is not stopped systematically.

While the bulk of toxic comments are unlikely to result in actual physical harm, the general picture is worrying at best. There is plenty of sincere commentary, dialogue, and social bonding taking place in the community, but these community-maintaining activities are constantly drowned out by hate speech, personal attacks, and other types of negative communication. Even people who do not engage in or support hate speech and toxic behavior accept the reality that it will inevitably take place.

This in turn undoubtedly affects not only the user experience, but also the acquisition of new players and the retention of old ones. As noted by research on the topic, within the video game market "the competition is fierce and player retention is key". Anything that adversely affects the experience may cause a gamer to "[...] leave for another game operator or even trigger through their social ties departures of large groups of players" (Kuipers et al., 2016).

In toxic environments, the people who do not want to be complicit in this behavior ultimately leave, taking their friends with them, and the community is left with the users who either engage in toxic communication and hate speech or are tolerant of it. This in turn increases the concentration of hate speech within the community. Newcomers are unlikely to have a good experience in this environment, and potential players may be put off altogether and never pick up the game.

Thus a self-reinforcing cycle of toxicity is established. This cycle does not only hurt users as they are exposed to hateful comments, but also game companies’ revenue as gamers leave and new ones do not sign up.

 


"They can’t do much if you don’t report people."


 

What game companies and system moderators can do to prevent hate speech

While we have dedicated a significant portion of this article to discussing game communities and the toxic environments they can turn into on a general level, there is more to be said about how to combat this situation:

  1. Demand accountability and cut the cycle short. Prevent the toxicity from ever reaching that point. The goal is to never provide a negative social environment for anonymous or semi-anonymous users, which might cause them to adapt their communication strategies to spew hatred. An environment where users feel protected by anonymity and a lack of accountability tends to bring out the worst of humanity. The immediate solution is to cut the hateful cycle short and actively monitor, penalize, and remove hate speech and harassing messages. Fortunately, many companies seem to have taken this solution to heart.
  2. Use community feedback tools. As mentioned earlier, in-game tools can be extremely effective in curbing unwanted behavior, as long as they are implemented correctly and become a natural extension of the gamer toolset. However, these tools rely on community efforts and, eventually, a moderator who will look into the issue and make a decision. The process is thus slow, but the approach works well enough in environments where the user has the option to act and the messages are contained within the game session and group. It does not make them any less heinous or awful for the user, but it does limit the scope of potential damage.
  3. Actively manage public communities. The challenge is that messages on Facebook pages and Instagram accounts are visible in perpetuity, unless someone deletes them, and to potentially millions of people. While users can report inappropriate content that violates community standards directly to Facebook, the sheer amount of content on the platform makes this an inefficient solution.

There is a real need for the game companies to engage in active community management and content moderation on these platforms, as they do within their own game channels. While this can be costly and time-consuming, there are tools such as BrandBastion’s combined AI and human moderation solution, which can help manage this process at a reasonable cost.
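To make the idea of combining automated detection with human review concrete, here is a minimal sketch of such a triage flow. Everything in it - the `FLAGGED_TERMS` keyword list, the thresholds, and the `ModerationQueue` class - is an illustrative assumption for this article, not BrandBastion's actual implementation, which relies on trained models and human analysts rather than a keyword lexicon.

```python
# Toy sketch of combined automated + human comment moderation.
# Keyword list and thresholds are illustrative assumptions only.
from dataclasses import dataclass, field

# A real system would use trained classifiers that understand the
# lexicalized slang discussed above; a keyword set stands in here.
FLAGGED_TERMS = {"kys", "terrorist", "nudes"}

@dataclass
class ModerationQueue:
    # Comments the automation is unsure about, held for human moderators.
    pending_review: list = field(default_factory=list)

    def triage(self, comment: str) -> str:
        """Return 'removed', 'review', or 'approved' for a comment."""
        tokens = {t.strip(".,!?").lower() for t in comment.split()}
        hits = tokens & FLAGGED_TERMS
        if len(hits) >= 2:
            return "removed"   # high confidence: remove automatically
        if hits:
            self.pending_review.append(comment)
            return "review"    # ambiguous: escalate to a human
        return "approved"
```

The design point is the middle branch: automation handles the unambiguous cases at scale, while borderline comments go to a human queue instead of being silently deleted or silently published.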

Any in-game toxicity can easily spill over to social media, and there is always the additional risk that even random users may engage in hate speech on game pages. In other words, while in-game communication risks turning toxic and resulting in human misery and gamers leaving, on outside channels this risk is even greater.

There should be a strong incentive for gaming companies to self-regulate and self-monitor their public pages, instead of just leaving it to social media platforms to handle. Unhappy users will ultimately impact a game’s bottom line. As of now, many companies do this, but it should be an industry standard. As a comment originally posted on a PC Gamer article on this topic illustrates, many gamers are tired of the rampant hatred and negative communication, but lack the tools to fight it alone:

 


"This is long overdue but I am happy (very happy) that the first steps are being taken now. Every influential game developer and publisher should join this and promote the games we play as competitive but non-toxic experiences instead of worrying that if they speak out they will become the target of a mass trolling campaign."


 

As someone who deals with thousands of hateful comments on a daily basis as part of my job, I couldn't agree more. And if there is a backlash or a trolling campaign for doing the right thing, well, we have a few million comments' worth of experience in helping with that.

 

  



About Ina Liukkonen

Ina is a linguist and project manager at BrandBastion, specializing in linguistic analysis, social media corpora, and marrying together human intuition and machine definitions. She holds a Master's Degree in English Philology from the University of Helsinki with minors in Cognitive Science, and Leadership and Management. Currently, she is continuing on her journey from natural languages to programming languages, while helping BrandBastion manage engagement and understand continuously evolving language and communication on social media.
