Despite arriving too late to prevent what some might call a full-blown crisis, Overwatch's anti-toxicity efforts seem to be bearing fruit. The next step in the battle, says game director Jeff Kaplan, might be having computers, not humans, find the bad actors.
“We’ve been experimenting with machine learning,” Kaplan told me during an interview at Blizzard’s offices yesterday. “We’ve been trying to teach our games what toxic language is, which is kinda fun. The thinking there is you don’t have to wait for a report to determine that something’s toxic. Our goal is to get it so you don’t have to wait for a report to happen.”
Even without robots leading the charge, Blizzard is making progress. In a video released in January, Kaplan said that thanks to changes to the game's reporting and punishment systems, abusive chat went down by 17 per cent. He claimed that people use the report feature 20 per cent more often, too. So basically, Overwatch's reporting system is functional now. But that doesn't necessarily mitigate the problems shared by reporting systems as a whole. They're slow. People don't always have faith in them. They can be abused. The next step for the Overwatch team, then, is to start catching bad behaviour before it's ever reported. Enter machine learning. Already, Kaplan says, the company is teaching its AI non-English languages such as Korean, and the long-term hope is that it can learn what toxic gameplay looks like.
“That’s the next step,” said Kaplan. “Like, do you know when the Mei ice wall went up in the spawn room that somebody was being a jerk?”
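To make the idea concrete: Kaplan's description of teaching a system what toxic language is, rather than waiting for reports, is essentially text classification. Below is a minimal sketch of that idea using a toy Naive Bayes classifier over word counts. Everything here — the chat lines, labels, and threshold — is invented for illustration; Blizzard has not described its actual model or data.

```python
# Toy sketch of learning "what toxic language is" from labelled examples,
# instead of waiting for a player report. All data and labels are invented.
import math
from collections import Counter

# Hypothetical labelled chat lines: True = toxic, False = fine.
training = [
    ("nice shot well played", False),
    ("gg everyone good game", False),
    ("thanks for the heals", False),
    ("you are garbage uninstall", True),
    ("worthless team you are all trash", True),
    ("go throw somewhere else idiot", True),
]

counts = {True: Counter(), False: Counter()}
totals = {True: 0, False: 0}
for line, toxic in training:
    words = line.split()
    counts[toxic].update(words)
    totals[toxic] += len(words)

vocab = set(counts[True]) | set(counts[False])

def toxicity_log_odds(line):
    """Log-odds that a line is toxic, with add-one smoothing."""
    score = 0.0
    for word in line.split():
        p_toxic = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_fine = (counts[False][word] + 1) / (totals[False] + len(vocab))
        score += math.log(p_toxic / p_fine)
    return score

def flag(line):
    """Flag a line for review when the toxic log-odds is positive."""
    return toxicity_log_odds(line) > 0
```

The hard part, as the article notes, isn't this mechanism — it's context: the same words can be banter between friends or abuse between strangers, which word counts alone can't see.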
Of course, machines are just as fallible as the humans who built them, albeit in different ways. Machine learning can have trouble determining context, especially if you’re just teaching it words and phrases. Profanity-laden trash talk, for example, might cause you to see red and spit fire if it’s coming from a complete stranger, but laugh like an idiot if it’s coming from one of your best friends.
Kaplan said that, at this point, the team has mostly tuned its AI to go after the most egregious examples of toxicity – people who, by Blizzard’s metrics, doubtless deserve punishment. It also isn’t going through DMs between friends or anything like that, and it can be tuned to flag first and only punish on repeat offences.
“With anything involving reporting and player punishments, you want to start with the most extreme cases and then find your way to ease up after that,” he said.
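The "flag first, punish only on repeat offences" tuning Kaplan describes can be sketched as a simple escalation ladder. The action names and thresholds below are made up for illustration; they are not Blizzard's actual policy.

```python
# Sketch of an escalation policy: a machine-flagged first offence only
# queues the player for human review; repeats escalate. Action names and
# the shape of the ladder are hypothetical.
from collections import defaultdict

# Hypothetical escalation ladder, mildest first.
ACTIONS = ["flag_for_review", "warn", "silence"]

offence_counts = defaultdict(int)

def handle_detection(player_id):
    """Record a machine-flagged offence and return the escalated action."""
    offence_counts[player_id] += 1
    step = min(offence_counts[player_id], len(ACTIONS)) - 1
    return ACTIONS[step]
```

Starting with the mildest response keeps a fallible classifier from punishing anyone on a single possibly-wrong detection, which matches Kaplan's point about easing up from the most extreme cases.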
The Overwatch team – which is now part of a Blizzard-wide anti-toxicity initiative as well as a collaboration involving companies such as Riot and Twitch called The Fair Play Alliance – is also examining the other side of the coin: How do you reinforce good behaviour?
“We can start looking toward the future and talking about things like, what’s the positive version of reporting?” said Kaplan. “Reporting is saying ‘Hey, Adrian was really bad and I want to punish him for that,’ but what’s the version where I can say, ‘That Adrian guy was an awesome teammate and I’m so glad I had him’?”
“We’re punishing the bad people, so how do we get people to start thinking about being better citizens within our ecosystem?”
Comments
9 responses to “Blizzard Is Trying To Teach Computers To Spot Overwatch Jerks”
You know what would be nice? A decent matchmaking system that doesn’t keep putting me up against people who snap 180 degrees and headshot you in under a damn second.
’Cause automation is flawless, right? Like YouTube and its automatic systems that result in billions of false reports and claims each day.
That was my first thought, too. I’ll be interested to see what they come up with to reinforce good behaviour but I don’t think the automated system to spot bad behaviour will be very effective.
Overwatch’s reporting system has always been functional.
The problem is that it’s an automated system.
Nothing will happen to your average toxic player/thrower/troll/smurf until they receive x number of reports, at which point they’ll be silenced or suspended automatically.
They can then appeal the silence/suspension via a support ticket (unless you’re on console, because Blizzard’s GMs claim they cannot overturn silences/suspensions on console; that it’s up to Microsoft/Sony to do that – something Microsoft & Sony deny).
So long as the reporting system remains automated, it will continue to be exploited & abused.
I still remember when they finally (after a year!) added the in-game reporting system to consoles.
I patched the game that morning before work but then didn’t play til after work.
When I launched the game for the first time since the update, I was greeted with a message informing me that, due to a large number of reports made against me for abusive chat, I was being silenced for a day.
Pretty impressive that I amassed that many reports when I wasn’t even online!
Blizzard Support was beyond useless in this situation. They kept referring me to the Code of Conduct and informing me they cannot overturn silences on console, completely ignoring the fact that the reports were clearly false as I wasn’t even online at the time they were made.
Wish they spent more time making maps instead of on what is surely an impossible task.
The people that make maps are not the same ones that do the programming.
Not these days, anyway. (Brother used to talk a lot with a bloke who helped code Daggerfall, and he told him plenty of stories about people getting pulled onto different projects – mapping, in his instance – just to get the job done.)
This is the birth of the Omnic Crisis