You might want to think twice before cheating in front of, or being a jerk to, fellow Counter-Strike: Global Offensive players now. For all you know, they might be an “investigator” capable of banning you.
According to the Counter-Strike blog, Valve is debuting something called “Overwatch”. Although it’s in beta right now, the idea is to let the community regulate itself by “allowing qualified and experienced members of the community… to review reports of disruptive behaviour, determine if those reports are valid, and apply temporary bans if appropriate.” People with that power are called investigators, and they can watch a suspect’s replays to determine their fate. It sounds like multiple people will review the same suspect, and their judgments must collectively agree that an offence has occurred.
Watch this video by WatchSpade if you’d like to see the program in action:
How investigators are picked is unclear, although Valve notes that it looks at competitive wins, account age, hours played, Skill Group, and low report counts. I’m guessing that means skilled players with a good reputation are more likely to be picked. Since the program is still at an early stage, Valve says “the system will issue fewer cases and the results will be reviewed and analysed before any bans go into effect.”
It will be interesting, though, to see whether a community-run regulation system results in a better playing environment in Counter-Strike than a purely developer-run one.
[via PC Gamer]
Comments
13 responses to “Valve Has Crowdsourced Banning Cheaters In Counter-Strike”
Honestly not sure I like this idea. Too easy to abuse the system.
I don’t think they’d ban or punish people on the strength of the community judgement alone. More likely they’d review cases that have been flagged by the community and then decide the best course of action from there.
Indeed, just saying I’m not sure I like it, not condemning it outright. I can see the benefits, taking workloads off Valve employees etc.
It’s not really any different to how they handle DotA2 bad players. And that seems to work just fine.
I think it’s a pretty good idea. Putting the onus back onto the community to solve these issues will improve the community as a whole.
That said, I am getting a mad police state vibe from the whole ‘not knowing if another player is an investigator’ thing.
Isn’t this just like an admin in a dedicated server? (like in 1.6, Source, etc).
I think it’s generally a good idea, so long as the investigators are screened properly. I reckon most long-serving Counter-Strike community members would take the role seriously. If it’s also required to be a unanimous decision amongst a panel of moderators, well … I reckon there’s a good chance of it being a success.
This could do wonders if this ran in DayZ
I think the small number of abuses will be outweighed by the HUGE number of proper cases handled swiftly and powerfully
This won’t change much when there’s time involved to review and they’re hacking with disposable accounts. It just adds an extra layer of security so I don’t mind, just a bit redundant to be working on this from my view.
The better approach would be to preach religion to kids and tell them if they hack in CS they won’t get internet access in heaven. Use religious scare tactics for the next generation and build a society in fear of losing connection in the after life.
Gosh I’m a Super Moderator on a Minecraft Server and I do this an awful lot.
I think this is a great idea. I don’t know about today but back in the day pubs were always filled with spawn tkers and spawn flashers but never anyone around to ban em.
It seems sort of like the League of Legends method. During games you can report players for a variety of things (usually chat log related), and after a cycle (not sure how long) they scoop up the top 10% of reported players based on number of reports. They then have a tribunal which anyone can register for; registrants get fed chat logs and comment information for a game with no actual player information, and can then vote to punish or not.
But after all that, Riot still has the final decision on what to do with them. It is a good system that basically cuts down the amount of reports they have to look at, the majority of which will be punishable, and they can still be flexible.
We did something very similar to this when I was the head gameop at Wireplay and OzGamers back in around 1999-ish. We wrote a custom system called Tarantula that allowed anyone to upload demos from Half-Life based games (TFC, CS, Ricochet, etc) of suspected cheats and the system then looked for a consensus of opinions from reviewers. If the criteria was met (minimum number of reviews, then a minimum percentage threshold for consensus) it would automatically ban their Steam ID across the entire network.
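The consensus logic described above (a minimum number of reviews, then a percentage threshold before an automatic ban) can be sketched in a few lines. This is an illustrative guess at the shape of such a system; the constants and the function name are assumptions, not Tarantula’s actual criteria:

```python
MIN_REVIEWS = 5            # assumed minimum number of reviews per demo
CONSENSUS_THRESHOLD = 0.8  # assumed fraction of "guilty" verdicts needed

def should_ban(votes):
    """votes is a list of booleans: True means the reviewer judged the demo a cheat."""
    if len(votes) < MIN_REVIEWS:
        return False  # not enough reviews yet to act
    guilty_fraction = sum(votes) / len(votes)
    return guilty_fraction >= CONSENSUS_THRESHOLD

# 4 of 5 reviewers agree: 0.8 >= 0.8, so the ban fires
print(should_ban([True, True, True, True, False]))  # True
```

The appeal of a threshold like this is that no single reviewer can ban anyone on their own, which is the same safeguard the Overwatch system’s “judgments must collectively agree” rule provides.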
But if you’re a player with good stats, you’re more likely to be a cheater lol, or a red-haired kid with adhd.