Earlier this week Kotaku UK ran an article from Charleyy Hodson about the sexual harassment she recently faced on Twitch while livestreaming in the site's IRL category. Everyone seems to agree there's a problem, but it's not so easy to find consensus on how platforms like Twitch should be tackling it. Given the focus of many online platforms on creating automated mod tools, we thought we'd run an experiment.
How easy is it for a human to track down Twitch accounts that either focus on, or regularly engage in, verifiable instances of targeted harassment on the platform?
As it turns out, incredibly easy. And that's without any access to Twitch's support accounts or report logs.
We spent 90 minutes looking for harassment and were able to identify and document 25 accounts on Twitch that regularly engage in harassment of other users, either via in-stream chat or by re-streaming people's livestreams and commenting on them. Every one of these accounts is still active at the time of publishing, though we've passed our findings on to Twitch support.
How did we find so many so quickly?
Depressingly, most of these harassment-focused accounts were found by looking at Twitter users who had tweeted at Twitch's official support account about their harassment. These users received no public response, and their harassers remain on Twitch. People were giving Twitch the exact information we used, and within minutes of investigating each case we'd found clear proof of what they were flagging.
Twitch archives the chat on livestreams. If you know when harassment took place, you can find the accounts behind it from the chat. From there, you can check each user's page, and further evidence of their behaviour is right in front of you, particularly for users who livestream their harassment of others.
In 90 minutes we found users threatening to rape women, users mocking disabilities, users throwing around homophobic and transphobic slurs, users spamming sexually explicit comments at streamers, users threatening to doxx streamers mid-stream, and users hounding streamers' followers to quit those streamers' channels.
Of the 25 accounts we found, seven had livestreamed their harassment of other Twitch users on the IRL section of the site. All seven had streamed harassment since the start of August, and most of their channels featured between five and ten such videos in the IRL section. It's all still available for the public to watch.
People often discuss human moderation as if it's some impossible task, as if the sheer volume of traffic that a service like Twitch handles would overwhelm even an army of moderators. Is that really true? It's not Kotaku UK's job to moderate Twitch, but in 90 minutes we've seemingly done more to look into information sent to the company's support account over the last few weeks than anyone at Twitch itself.
We contacted Twitch earlier today for comment, before publication of the Charleyy Hodson story, and have subsequently followed up with calls. So far the best we've received is a brief holding statement promising a comment at a later time.
Twitch does have a report function on its site. It does seem to respond, sometimes, to these reports — though in the case of Ms. Hodson's harasser, the account was suspended and then, bizarrely, reinstated. This all suggests that Twitch needs to start taking harassment in general, and the livestreaming of it in particular, much more seriously.
Responding to users in a timely manner would be a start. We found these harassers easily and without any access to Twitch's own support tools, and weeks later their accounts are still active in every case. Which suggests, frankly, that dealing with this problem is not a priority for the streaming giant.
This post originally appeared on Kotaku UK, bringing you original reporting, game culture and humour from the British Isles.