In the global fight against toxic and hateful content online, one front merits attention and reinforcement: multiplayer online games, which have proved fertile ground for hateful and extremist ideologies.
In 2006, the Global Islamic Media Front released “Quest for Bush”, a first-person shooter in which the player took on the role of a jihadi seeking to assassinate George W. Bush. Extremism via online games has come a long way since then. Today we see a spectrum of extremist content and communications across online games. Dedicated games promoting extremist scenarios are still produced by both jihadi and far-right groups. Individuals already sympathetic to extremist ideas may turn to games like ISIS’s “Jihad Simulator” or the far-right title “Jesus Strikes Back: Judgement Day”, in which players carry out active-shooter attacks while role-playing a character based on the terrorist who attacked the mosques in Christchurch.
However, we need to recognise that hate speech and grooming towards extremism can happen through the communication channels of almost ANY popular multiplayer online game. We also need to recognise that vulgar language and derogatory hate speech have become almost commonplace on many gaming platforms, with many players (especially women) experiencing bullying or abuse during gameplay. In some instances, this happens when tensions rise within the group playing and some members resort to derogatory name-calling over race, religion, or gender. Such bullying is reinforced when the perpetrators are praised by similarly bigoted players and the group collectively mobs the victim. In multiplayer game-creation systems, players can even build racist scenarios aimed at hurting the targets of their hate and play out all manner of racist fantasies.
Then there are outwardly violent games that some extremists have gravitated to because they simulate actual combat and warfare. Some, like ARMA 3 and SQUAD, are so sophisticated that they are akin to military simulators in which jihadi sympathisers can rehearse attack scenarios such as vehicle-borne suicide attacks. Terrorist scenarios have been enacted in these games to the point that the footage is used to create extremist propaganda videos. It does not help that both young, impressionable users and those who claim to be actual war veterans gather on such platforms.
Important developments concerning content moderation are under way on both sides of the Atlantic. In the U.S., the Supreme Court is weighing in on the law that shields online platforms from legal liability over user-generated content. In Europe, the Digital Services Act, not without its own challenges, will impose a range of responsibilities on digital platforms in the hope of creating a safer online space.
However, the content-moderation debate and the sights of regulators remain largely trained on social media platforms rather than multiplayer online games, where user-generated content is an afterthought for designers and regulators alike. On some of these gaming platforms, hateful and extremist ideologies are taking root, often in plain sight. In a recent survey conducted by the American non-profit ‘Take This’, people were asked how much they were exposed to hate speech while playing games online. The results speak for themselves: most respondents said they had been exposed to racism, misogyny, white nationalism and other forms of hateful and extremist ideology.
With the ubiquity of smartphones, multiplayer online games have become part of the daily reality of children across the world. As much as they can be a creative outlet and even teach positive traits such as leadership and teamwork, these games and platforms also expose teens to pockets of unsupervised virtual space, where a wide spectrum of hate and extremism is being gamified.
In Singapore, we recently came very close to bearing the physical brunt of gamified extremism when the youngest “Islamic State” sympathisers ever known in the country were arrested. These teenagers had self-radicalised online through ISIS-themed gaming platforms and interactions with other ISIS sympathisers there. One of them created ISIS propaganda videos set to ISIS nasheeds (religious songs) using Roblox game footage, building virtual settings that simulated the Marawi siege by ISIS-aligned groups in the Philippines and other ISIS battles in Syria.
The knee-jerk response when such cases come to light is to blame the platforms hosting these games. That approach can be short-sighted: several platforms are already enhancing online safety, while others find dubious ways to avoid responsibility. Either way, although platforms do bear a degree of accountability, passing the buck to them will not necessarily put an end to hate speech and extremism in gaming. Instead, we need a holistic approach to the global threat of hate and extremism being gamified online.
As a first step, policymakers and regulators across the world need to better understand the unique characteristics of online games and, in particular, the subculture of gamers. Most gaming platforms have live chat functions that let young players connect, enabling them to form groups of like-minded individuals that become echo chambers of hateful and extremist beliefs. According to a study by the Anti-Defamation League, an American NGO, online gaming may also normalise extreme views: the more you play in hateful and extremist environments, the more likely you are to shrug off such behaviour. This is further reinforced when individuals are ‘rewarded’ with popularity or notoriety for negative behaviour in these games. Additionally, many multiplayer platforms are poorly regulated, and encrypted communications take place in private chatrooms, which makes hateful and extremist content particularly hard to track.
Second, online games can serve as a springboard for luring children onto other encrypted platforms, where they can fall prey to even more extreme players and their views. At an impressionable age, they are susceptible to recruitment by senior or more experienced players into actual far-right or jihadi extremist groups.
Finally, former victims of online extremist grooming, and bystanders who witness such behaviour on gaming platforms, should also be part of the policy-making equation. We need to create avenues where they can come forward to share their experiences, raise awareness or simply call out extremist behaviour without fear. The best defence against hateful and extremist content in online games is to encourage, empower and build the capacity of the legitimate gaming and e-sports community to counter it wherever it occurs. That community can draw the line at extremism through collective ‘mobbing’ actions akin to ‘social firewalls’ that chastise, report or kick out anyone engaging in hateful or extremist behaviour. One way to do this is to reinforce the message that gamers themselves are not being targeted, but that the actions of a few hateful players can hurt the entire gaming community. Identifying influential voices in the gaming community is central to this effort.
The complex space of online gaming creates numerous challenges for policymakers worldwide. Even so, there are ongoing initiatives designed to address online hate and extremism at the tactical, strategic and grassroots levels. Each of these, or ideally a combination of them, can serve as a model for policymakers elsewhere.
At the tactical level, Danish police have created a special online unit called ‘Politiets Online Patrulje’ (Police Online Patrol), which plays online games together with young players to prevent inappropriate behaviour and crime, and to facilitate timely intervention in the event of any offences.
On a strategic level, the Secure Communities Forum, a global initiative established by the United Arab Emirates’ Ministry of Interior, gathers a broad range of stakeholders, from leading NGOs and non-profits to regulatory bodies, law enforcement and international gaming associations, to develop a response model to online extremism. Drawing on its leadership in digital innovation, the UAE is working to provide a universal framework for addressing online hate and extremism in a holistic manner.
In the U.S., where online content moderation is front and centre in public debate, there are valuable conversations about action at the grassroots level. For instance, researchers at the University of New Hampshire’s Prevention Innovations Research Center and the Tiltfactor Laboratory at Dartmouth College stress the value of “bystander” engagement, which can be taught via video games. The online gaming community itself can likewise encourage “bystander” reporting as an efficient prevention and reporting mechanism.
As online hate knows no borders, there is a strong need for more awareness and cooperation to address the global threat of hate and extremism in online gaming. Policymakers worldwide should take note of various approaches to create meaningful ‘win-win’ partnerships with both the gaming platforms and gaming communities. Only through such partnerships can we advance holistic innovative solutions to the issue of gamification of hate and extremism in the online realm.