The secret ingredient in AI-powered community moderation? Humans.
by News
on 28th Nov 2019

Tackling toxicity in online communities is a demanding, nuanced and sometimes thankless task. As communities around single games grow into the tens of millions - or hundreds of millions in some cases - a tiny minority of problematic users can still amount to a sizeable group, and cause real harm to a significant portion of the audience.
The challenge in meeting, managing and countering toxicity is two-fold.
Firstly, as a human craft, game moderation can be remarkably demanding on community management staff. Yet there remains a need for human moderation. Problematic content and toxic behaviour are both nuanced entities. While online abuse can have a very real impact on its victims, there are also cases where playful interactions between real friends register as problematic. Language is a complex beast, and swearing is all about context. A community management team can and should use keyword tracking technology to pick up on examples of unpleasant or potentially harmful language, but perhaps only as a general guide: just as foul language can sometimes simply signal a playful competitive spirit between close friends, not all online toxicity involves foul language or obviously derogatory keywords.
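To illustrate why keyword tracking can only ever be a first pass, the sketch below shows a hypothetical filter that flags tracked terms but routes every hit to a human review queue rather than acting on it automatically. The term list, matching logic and queue structure are assumptions made for illustration, not details of any specific moderation product.

```python
# Minimal sketch of keyword tracking used only as a first-pass signal.
# FLAGGED_TERMS, the matching rules and the review queue are hypothetical.

FLAGGED_TERMS = {"slur_example", "threat_example"}  # placeholder terms

def first_pass_flag(message: str) -> bool:
    """Return True if the message contains any tracked term."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & FLAGGED_TERMS)

def route_message(message: str, sender: str, review_queue: list) -> None:
    """Keyword hits are queued for a human rather than punished automatically,
    because context (banter between friends vs. abuse) needs a person to judge."""
    if first_pass_flag(message):
        review_queue.append({"sender": sender, "message": message})

# Usage: anything the keyword filter catches still lands in front of a moderator.
queue: list = []
route_message("gg ez, you absolute threat_example", "player_42", queue)
print(queue)
```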
Unfortunately, the very need for a human observer to understand context means community managers themselves are left facing deeply unpleasant content, and enduring aggression from problem users.

A game of Counter-Strike: Global Offensive running on the FaceIt esports tournament platform, which can now deploy an AI community management tool
The second part of the challenge is about scale. A modest game with a few hundred users at launch might only need one human moderator, who can get somewhere close to knowing users by name. But what if a game has hundreds of thousands of users, or hundreds of millions? It’s a problem that - famously - even social media giants haven’t solved. Moderating tens of millions of users could take tens of thousands of staff, and that is not something available to your typical game company.
On paper, automation can solve the challenges of scale and protecting human moderators from abuse and exposure to disturbing content, all while granting users safe gaming spaces. But are AI-powered moderation tools up to the job of applying a nuanced human reading to the myriad different types of interaction that occur across gaming communities?
Moderating esports
Late last week esports tournament platform FaceIt used its stage time at a Google Cloud Customer Innovations series event to provide a significant update on its Minerva AI, a community management tool powered by machine learning. As reported by GamesIndustry.biz, in the tech’s first month running in Counter-Strike: Global Offensive it recognised 4% of chat messages sent within the game’s matches as toxic in nature, a figure that affected a striking 40% of all matches played. Issuing 90,000 warnings and 20,000 user bans, Minerva AI drove a 20% dip in toxic messages.
Now the technology is looking to autonomously moderate words spoken (rather than written) by players, and it is hoped it will equally grow to be capable of identifying in-game player behaviour patterns that point to toxicity.
But what about the human element? FaceIt revealed at the Google gathering that it is to harness the potential of the crowd, asking its players to collectively review decisions made by Minerva AI and so help it refine its understanding of the context around human interactions. The approach also sees the wider player community - and not just a moderation team - have a say in where the line for problematic behaviour is drawn.
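FaceIt has not detailed how that review loop is built, but the general pattern - sample automated verdicts, collect player votes, and feed disagreements back as training data - might look something like the sketch below. The sampling rate, vote handling and data structures here are all assumptions for illustration, not published details of Minerva.

```python
# Hypothetical illustration of crowd-reviewing automated moderation decisions.
import random
from dataclasses import dataclass, field

@dataclass
class Decision:
    message: str
    ai_verdict: str                      # e.g. "toxic" or "clean"
    player_votes: list = field(default_factory=list)

def sample_for_review(decisions: list, rate: float = 0.05) -> list:
    """Send a small random sample of AI verdicts to player reviewers."""
    return [d for d in decisions if random.random() < rate]

def crowd_verdict(decision: Decision) -> str:
    """Majority vote among player reviewers; ties default to the AI verdict."""
    if not decision.player_votes:
        return decision.ai_verdict
    toxic = sum(v == "toxic" for v in decision.player_votes)
    clean = len(decision.player_votes) - toxic
    if toxic == clean:
        return decision.ai_verdict
    return "toxic" if toxic > clean else "clean"

def training_labels(reviewed: list) -> list:
    """Disagreements between the crowd and the AI become fresh training data."""
    return [(d.message, crowd_verdict(d)) for d in reviewed
            if crowd_verdict(d) != d.ai_verdict]

# Usage: a verdict the crowd overturns is returned as a corrected label.
decisions = [Decision("nice shot", "toxic", ["clean", "clean"]),
             Decision("uninstall the game", "toxic", ["toxic"])]
print(training_labels(decisions))  # -> [('nice shot', 'clean')]
```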
That reflects a key trend in how AI comes to serve humanity. Many have argued that ‘artificial intelligence’ should be considered as ‘augmenting intelligence’. AI might best serve – at least in the immediate future – not as a replacement for human intelligence, but as an accompaniment to it; an augmentation of our abilities.
Like many other comparable technologies, Minerva AI makes that a two-way process. By employing its player community to hone the moderation tool’s abilities, it arguably lets human thinking augment the platform’s capabilities.
A moderation Ally
Elsewhere, UK technology outfit SpiritAI is continuing to develop technologies like Ally, which uses a combination of AI, machine learning, natural language processing (NLP) and natural language understanding (NLU) to understand player communication at a conversational level. The idea is that it can understand and adapt to context. The technology can issue automated warnings, rule reminders and even bans; but it is designed as a processing and filtering tool that can push potentially problematic content to human moderators, helping them to both ‘intervene and disrupt the behavioural patterns’ that emerge within a given game.
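SpiritAI has not published Ally’s internals, but the filtering role described above - act automatically on clear-cut cases and escalate ambiguous ones to a person - can be sketched as a simple confidence-threshold triage. The thresholds and the stand-in scorer below are assumptions for illustration, not the vendor’s actual system.

```python
# A sketch of the triage pattern: automated action only at high confidence,
# with ambiguous cases pushed to a human moderator.
from typing import Callable

def triage(message: str,
           score_toxicity: Callable[[str], float],
           warn_threshold: float = 0.9,
           review_threshold: float = 0.5) -> str:
    """Route a message based on a toxicity score in [0, 1]."""
    score = score_toxicity(message)
    if score >= warn_threshold:
        return "auto_warn"          # clear-cut cases handled automatically
    if score >= review_threshold:
        return "escalate_to_human"  # nuanced cases go to a moderator
    return "allow"

# Usage with a stand-in scorer; a real deployment would call an NLP model.
fake_scorer = lambda msg: 0.72
print(triage("example message", fake_scorer))  # -> "escalate_to_human"
```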
And that is likely the way forward for AI as a moderation tool. AI is coming, but it won’t be taking the jobs of community moderators any time soon; indeed, AI needs us as much as we need AI. It is one of the key solutions available in tackling the problem that bigger communities mean far more content to comb through when zeroing in on toxicity. But it will only work in collaboration with human moderators, player input and existing moderation technology.