Many Twitch streamers have YouTube channels, and vice versa. They also have Twitter pages, Instagrams, TikToks, Snapchats, and basically every other kind of account under the sun. For years, Twitch has taken activity on other platforms into account when deciding whether to suspend or ban streamers, albeit inconsistently — and typically only when an infraction had also happened on Twitch or to another Twitch streamer. Now it’s trying to turn off-platform activity into a cornerstone of its moderation approach, partnering with a third-party law firm to investigate off-platform threats, sexual assault, and other forms of harassment and violence.
Twitch announced its new approach to off-platform misconduct in a blog post today, saying that “we have taken action historically against serious, clear misconduct that took place off service, but until now, we didn’t have an approach that scaled.” That’s where the law firm, which will assist Twitch’s internal team, comes in.
“This partner is an experienced investigations law firm that is dedicated to conducting independent workplace and campus investigations including those related to sexual discrimination or assault,” the company wrote, noting that it has tools that allow it to verify what has and has not occurred on Twitch, but verification on other platforms is much harder to pin down. “This partnership will allow us to more thoroughly investigate and respond to reports of off-service misconduct. We’ve also increased the size of our internal law enforcement response team which is extensively trained to manage sensitive, confidential investigations and partner with law enforcement.”
Twitch also gave examples of the sorts of behaviours that might get streamers unceremoniously ejected, even if they don’t occur on Twitch. These include deadly violence and violent extremism, threats of mass violence, terrorist activities or recruiting, leadership or membership in a known hate group, carrying out or acting as an accomplice to non-consensual sexual activities and/or sexual assault, sexual exploitation of children (including grooming), threatening violence at events, and threats against Twitch staff.
Twitch went on to admit that its list does not include all forms of harassment and abuse. In an email to Kotaku, a Twitch spokesperson added that “we will only act in cases where we have verifiable evidence, and at this time, we are unable to investigate cases that fall outside the behaviours listed in the policy.” Evidence, in this case, might mean “links, screenshots, video of off-Twitch behaviour, interviews, police filings or interactions” that have been verified by Twitch’s law enforcement response team or its third-party investigators.
As with other moderation decisions made by internal Twitch teams, only streamers and others directly involved in investigations will be privy to their outcomes. “We understand that this may be frustrating, but we’ve enlisted the support of these third-party investigators to protect the rights, confidentiality, and privacy of all those involved, and to preserve the integrity of these investigations,” the company wrote, also noting that it’s created an email for reports of off-service misconduct — [email protected] — where all information users send will be treated as confidential.
It remains to be seen how this will all work out in an area of moderation where Twitch streamers and viewers have historically clamoured for more transparency, not less. That said, this does appear to close a loophole in Twitch’s rules. Last September, for example, Twitch allowed LeafyIsHere, a creator who’d been banned from YouTube after harassment campaigns against streamers like Pokimane, to migrate over to its platform. It then let him stream for a few weeks, despite the obvious risks, until he committed an offence on Twitch as well. Only then did it ban him. The problem with that approach is that it gives people who’ve already done harm elsewhere chances to do more. Now, theoretically, Twitch has the tools to prevent similar situations from unfolding.
That said, there are reasons to be sceptical. For one, Twitch has not provided much in the way of specifics about the law firm it has partnered with, and the firm’s particular areas of specialisation — workplace and campus investigations — are systems in which it is notoriously difficult for victims to get justice. There are also red flags in how Twitch has recently chosen to apply its preexisting off-platform rules. Last month, for example, the platform finally banned longtime streamer Ali “Gross Gore” Larsen for inappropriate off-platform behaviour stemming from a “revenge porn” incident, but only after years of second chances following other documented instances of off-platform sexual harassment and assault. It also seems unlikely that this new policy will protect streamers from off-platform mass harassment campaigns like the one Critical Bard endured earlier this year after committing the crime of (gasp) calmly explaining the history that has made Black Lives Matter a necessary movement.
If nothing else, Twitch knows it still has work to do.
“Taking action against misconduct that occurs entirely off our service is a novel approach for both Twitch and the industry at large, but it’s one we believe — and hear from you — is crucial to get right,” the company wrote. “Part of that means being clear with you about the limitations of our policy. At this time, we’re not able to investigate behaviours that occur entirely off Twitch that fall outside these categories. This is an iterative, ongoing process, and as always, our end goal is to build a safer Twitch for everyone.”