As popular as it is, League of Legends has always had an issue with its community, which, to put it politely, is often described as "rude". And while developer Riot is trying some novel approaches to mending this the nice way, that carrot is now being joined by a new, heavier stick.
Riot is currently testing a new combination of human review and machine learning to identify and punish players exhibiting "extreme cases of toxicity" (listed examples include intentional feeding, racism, death threats and homophobia). The system is being tested to see how many false positives it returns, but if it works, we can hopefully expect more bans for the people dragging regular players' experiences into a negative space.
Players caught engaging in this kind of behaviour could be banned for anywhere from two weeks to forever.
In another break from current policy, Riot will also be tackling players who complain about their bans. If a player caught by this new system attacking another player disputes their guilt, Riot will publish chat transcripts so as to be "fully transparent".
As Riot explains: "Some players have also asked why we've taken such an aggressive stance when we've been focused on reform; well, the key here is that for most players, reform approaches are quite effective. But, for a number of players, reform attempts have been very unsuccessful, which forces us to remove some of these players from League entirely."
Go get 'em, Riot.