Fighting Online Abuse Isn’t About Trolls, It’s About Community

If you’ve spent much time online — playing games, talking on social media, using message boards — chances are you’ve gotten some abuse. Someone’s called you a fag, or a dumb bitch, or suggested they might find out where you live and skullf**k you to death.

When that happens, we generally take it as a cost of doing business online. “It’s the Internet,” we say, licking our wounds and trying to focus on other things. “That’s how it goes.”

But that’s not really OK. It’s not a good enough answer, and on some level, we all know it. The Internet is where so many of us live, play, work and socialise. It’s an essential part of our lives, as “real” to many of us as any school or office or city street. Is it really OK that we’ve somehow passively decided, as a community, that this sort of behaviour is par for the course?

In a new feature at Wired, writer Laura Hudson (disclosure: we’re dating) has shared the results of months of research and interviews about online harassment. In it, she focuses not on harrowing stories of victims (believe me, there are plenty), but rather on possible solutions. She argues that long-term solutions won’t be found just by banning trolls or removing the few most toxic members of a community. Rather, they depend on a community deciding what its values and norms are, clearly communicating those norms, and giving community members the tools to help enforce them.

Imagine how it’d be if you walked up to someone at your job and screamed in their face that they were a faggot jew and you were going to track them down at home and kill them. It wouldn’t be OK, right? People would say something, there’d be an outcry as soon as you started yelling, you’d get called into an office, reprimanded and possibly fired. That’s because your office community has a set of cultural norms in place that clearly state that it’s not OK to behave that way. Most real-life communities have these sorts of norms. A lot of internet communities, however, have no such established norms, and many users have come to assume that anything goes. So it has been, and so it shall forever be. Welcome to the Internet.

Laura and I have spent many hours over the last several months discussing the article as she worked on it — her interviews, new perspectives, upended assumptions, and so forth. It’s been instructive for me, and I’ve come away with a greatly refined perspective on the issue. Like her, I’m convinced that the best way for communities to improve their level of discourse and curb abuse and harassment is for the people in charge of those communities to establish better, more consistent social norms, and to put in place tools that let community members help enforce them.

League of Legends may be famous for having a toxic community, but the game’s developers at Riot are actually doing some groundbreaking work to address the problem. Among their more successful endeavours is the Tribunal System, which allows a jury of players to vote on offending behaviour and mete out punishments, including bans. Riot has also assembled a team to analyse player behaviour, and brought together staff members with degrees in psychology and neuroscience to help better understand the dynamics at play in League games and in the community at large. It’s working. From Wired:

This process led them to a surprising insight — one that “shaped our entire approach to this problem,” says Jeffrey Lin, Riot’s lead designer of social systems, who spoke about the process at last year’s Game Developers Conference. “If we remove all toxic players from the game, do we solve the player behaviour problem? We don’t.” That is, if you think most online abuse is hurled by a small group of maladapted trolls, you’re wrong. Riot found that persistently negative players were only responsible for roughly 13 per cent of the game’s bad behaviour. The other 87 per cent was coming from players whose presence, most of the time, seemed to be generally inoffensive or even positive. These gamers were lashing out only occasionally, in isolated incidents — but their outbursts often snowballed through the community. Banning the worst trolls wouldn’t be enough to clean up League of Legends, Riot’s player behaviour team realised. Nothing less than community-wide reforms could succeed.

Some of the reforms Riot came up with were small but remarkably effective. Originally, for example, it was a default in the game that opposing teams could chat with each other during play, but this often spiralled into abusive taunting. So in one of its earliest experiments, Riot turned off that chat function but allowed players to turn it on if they wanted. The impact was immediate. A week before the change, players reported that more than 80 per cent of chat between opponents was negative. But a week after switching the default, negative chat had decreased by more than 30 per cent while positive chat increased nearly 35 per cent. The takeaway? Creating a simple hurdle to abusive behaviour makes it much less prevalent.
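The design pattern here is simple enough to sketch in code. Below is a minimal illustration, in Python, of the default-off, opt-in toggle the reform describes. Every name in it (ChatSettings, can_send_cross_team) is hypothetical, since Riot hasn’t published its implementation; the shape of the idea is what matters.

```python
# A minimal sketch of the default-off, opt-in chat pattern described above.
# All names here are hypothetical; Riot has not published its implementation.
from dataclasses import dataclass


@dataclass
class ChatSettings:
    """Per-player chat preferences."""
    # The reform: cross-team chat starts disabled, and each player
    # must deliberately switch it on.
    cross_team_chat: bool = False


def can_send_cross_team(sender: ChatSettings, recipient: ChatSettings) -> bool:
    """Cross-team messages flow only if both parties have opted in."""
    return sender.cross_team_chat and recipient.cross_team_chat


# With default settings, two opponents cannot taunt each other...
attacker, target = ChatSettings(), ChatSettings()
assert can_send_cross_team(attacker, target) is False

# ...but nothing stops players who actually want to talk.
attacker.cross_team_chat = True
target.cross_team_chat = True
assert can_send_cross_team(attacker, target) is True
```

The choice worth noticing is that the hurdle is social, not technical: nothing prevents two players from opting in, but abuse now requires a deliberate act instead of being the ambient default.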

The team also found that it’s important to enforce the rules in ways that people understand. When Riot’s team started its research, it noticed that the recidivism rate was disturbingly high; in fact, based on the number of reports per day, some banned players were actually getting worse after their bans than they were before. At the time, players were informed of their suspension via emails that didn’t explain why the punishment had been meted out. So Riot decided to try a new system that specifically cited the offence. This led to a very different result: now when banned players returned to the game, their bad behaviour dropped measurably.
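To make the contrast concrete, here’s a rough sketch of what an offence-citing notice might look like, as opposed to an unexplained suspension email. This is purely illustrative; Riot hasn’t published its notification templates, and every name and message below is invented.

```python
# A rough sketch of a suspension notice that cites the specific offence,
# per the reform described above. The format and names are invented;
# Riot's actual notification system is not public.
from datetime import date


def format_ban_notice(player: str, days: int, offence: str, evidence: str) -> str:
    """Build a suspension message that explains exactly which rule was
    broken and quotes the behaviour that triggered the report."""
    return (
        f"Hi {player},\n\n"
        f"Your account has been suspended for {days} days as of {date.today()}.\n"
        f"Reason: {offence}\n"
        f"From your chat log: \"{evidence}\"\n\n"
        "Repeated violations lead to longer suspensions."
    )


print(format_ban_notice(
    player="SummonerX",
    days=3,
    offence="Verbal abuse of teammates (harassment policy)",
    evidence="you are all garbage, uninstall",
))
```

The point isn’t the wording but the feedback loop: a player who knows exactly which behaviour earned the ban can actually change it.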

Riot’s approaches are fascinating and, as the company is demonstrating by trying various solutions out on its massive userbase, a lot of them actually work. Lin’s GDC talk was great; you can watch a video of the whole thing here, and I really recommend it.

Games, with their closed communities and well-funded community management teams, have a terrific opportunity to blaze trails that (hopefully) might be followed in some ways by larger social networks like Twitter and Facebook, where harassment is still rampant and many users, particularly women, are besieged so constantly and unpredictably that they opt to forgo the service completely. Obviously no two online communities are alike, and things that work for League of Legends might not work, or might even be destructive, if implemented someplace like Twitter. But the overall philosophy remains: if services like Twitter and Facebook want to get serious about reducing abuse and harassment on their networks, they need to invest in solutions as heavily as Riot does.

Here at Kotaku, we’ve had plenty of our own challenges over the years. I’ve worked here almost three years, and during the first year or two I got the sense that readers didn’t always understand Kotaku’s commenting and community policies. Sometimes we’d ban readers who were abusive or spewed hate-filled language, but other times the space below a post would be festooned with awful garbage and anonymous hate-speech with nary a moderator in sight.

That said, our Editor in Chief Stephen Totilo’s post last year, “A Note About ‘Brutal’ Comments and a Kotaku For Everyone”, was actually very much in the spirit of community norm-enforcement that Riot and others advocate.

Rather than just tell Kotaku staffers to continue to unfollow and block abusive commenters, Stephen laid out what Kotaku’s community norms should be, and who this site is for:

We still want readers to feel free to agree or disagree with our articles and say so on the site. We still encourage wit, smart argument and bold opinions. We still welcome debate. We still, as before, will diminish or even block the visibility of comments by those who simply attack Kotaku writers or readers.

Today I am also committing to expanding our discussion moderation to push back against any tide of comments that fail the test of being things that we believe you’d say to the face of the people you’re commenting about. We imagine that any of our more than five million readers per month might disagree with something on our site, and we are confident that any of those five million can find a way to say so while getting over what is still a low bar.

In other words, this is a community, so act like it. Of course, that one post didn’t solve everything — it’s still on us writers to moderate conversation, get rid of spam and abuse, and promote the best discussions. And unlike League of Legends, Kinja doesn’t give users a built-in, Tribunal-like way to police one another, short of responding to and shooting holes in lousy comments — which you guys do often and admirably. Readers can, however, report abusive comments to feedback@kotaku.com, and we encourage you to do so. But I do think that in the wake of Stephen’s article, discourse on our site improved significantly, and that these days it’s better than it’s ever been. We’ve miles to go, of course, but every other internet community beyond a certain size has miles to go with us.

In order to make online spaces safer and more welcoming, community members themselves do need to get involved. But first, the people who own and run those communities need to decide what kind of an environment they want, to clearly articulate what that looks like, and to give users the tools to help make it happen.

An earlier version of Laura’s article started out with a metaphor that I really liked. It went something like this: When we talk about abuse and harassment on the internet, we talk about it the same way we talk about natural disasters. We throw our hands up and say, hey, what can you do? We can’t stop internet abuse any more than we can stop the rain from falling. “And on the Internet,” she wrote, “it’s always raining.”

It doesn’t have to be that way. Yes, there will always be jerks out there. Somewhere, it will always be raining. But we don’t have to just suck it up and weather the storm; together we can build shelter.

