Former StarCraft Pros Still Don’t Think Google’s AI Could Beat Them

Lee Sedol didn’t think it would be a challenge. 5-0. 4-1, at a stretch. Those were the expectations of the South Korean Go professional heading into his five-game, US$1 million exhibition match against Google’s AlphaGo, the DeepMind-powered agent of board game destruction.

The score currently stands at 3-1, and not in the South Korean’s favour; it took a fairly gargantuan comeback just to get it there. The 18-time world Go champion even said he wouldn’t trade the win for anything in the world. But do you think that has stopped StarCraft players from being cocky about their potential man vs machine match? Of course not.

Only days after recently retired StarCraft legend Lee “Flash” Young Ho posited that StarCraft might be a step too far for Google’s creation, another former professional has come out swinging in much stronger terms.

Lim “BoxeR” Yo-Hwan, one of the most enduring legends of StarCraft and esports since he started playing professionally in 2001, has told Yonhap News that he would gladly accept a chance to face off against AlphaGo in the future. Because there’s no way AlphaGo could win.

“It would be a mistake to think artificial intelligence could beat humans in StarCraft,” an English translation of the article provided by a former TeamLiquid editor in chief reads.

“StarCraft is a game where situational strategy is far more important than in [Go], so it’s an area where AI cannot catch up.” Lim, whose popularity grew to the point where a DVD compilation of his greatest games was released in South Korea, even went as far as to say that AlphaGo couldn’t reach the level of StarCraft pros.

It seems more bluster than anything, given the public’s current stance on AlphaGo’s capabilities. On the English version of their website, Yonhap News complained in an editorial about the AI’s raw computational power and suggested that the number of CPUs and GPUs it can access in the cloud should be reduced.

“No matter how much hype there may be, it wouldn’t bury the fact that the ongoing battle pitting Korea’s world go champion Lee Se-dol against Google’s AlphaGo is not a man-vs.-machine fight but a lopsided match pitting one person against an army of super-smart people armed with unfathomable computing power provided by the global multinational, Google,” the news agency argued.

Many StarCraft observers have already suggested that AlphaGo be given a cap on its maximum actions per minute, as people have pointed out — particularly in Brood War — the immense advantage an AI would have in being able to micromanage units without ever missing an opportunity to keep its economy rolling.

That doesn’t seem to have stopped Lim, or Lee before him, from beating their chests publicly. That’s part of their job as former pros and serving ambassadors of the game. On top of that, they do have a point. But it surely won’t be too long before AlphaGo brings its scorched-earth treatment of opponents to the world of StarCraft, and its capacity to learn should undoubtedly have the RTS pros concerned.

[Yonhap News, TeamLiquid]


  • The debate is really whether or not the AI can beat a human while playing by human rules. We already know you can make an AI that performs a million actions a minute and pulls off demigod micro without slipping up on macro mechanics, but that’s really not what this should be about. The AI would need to be restricted to the APM of a normal human (professional) player, so it would all come down to decision-making and strategy: whether the AI can process what it sees when it scouts you, how it reacts to what you do, etc.

    To that end, it may be a while before the AI can beat a human player in Starcraft. If the AI doesn’t play by human rules though, the entire experiment would be pretty pointless and would just prove a computer can click things faster than a human can.

    • I think in order to beat the AI you’d have to out-think it. You obviously couldn’t react or click faster than it. You’d need to consider the AI’s strategy and take advantage of its lack of ‘creativity’. Easier said than done. The AI programmers, on the other hand, would need to make sure that the AI could take advantage of almost perfect micro to split up its attacks into more than what a human player could do, or cope with. Instead of facing a couple of army groups, each unit would be its own army, attacking, manoeuvring and retreating independently. That’s the real advantage of an AI, as you say: it can click faster.

  • Not buying this for a second. Even with restricted APM, Deep Mind would absolutely thrash Starcraft players.

    • That, or all the players would be in mysterious accidents on the way to the game… All caused by some form of internet accessible technology ‘malfunctioning’ momentarily.

      Deep Mind gotta protect that rep.

    • Why do you say that? Starcraft isn’t just a case of bigger army > smaller army, it involves planning, strategy and bluffing. You need to scout out what your opponent is doing and decide how best to counter that while also hiding what you’re planning from their scouts (or better yet, pretending to go one way while planning another). But then your opponent could’ve been bluffing too, so you need to be prepared for that.

      It’s like chess crossed with a bit of poker, because you’re always showing each other what your next move is going to be, unless that’s just a lie. AIs don’t generally work as well as humans with that kind of logic, because to them it just reads as RNG or probabilities rather than reading the person.

      • The neural networks developed for these types of AI have a memory, so they aren’t making decisions context free. It’s not out of the question that they could identify opponent strategies and play accordingly.

      • What @jamesh said. AIs are pretty good at opponent modelling. Arguably better than humans, since they’ll never forget that one trick you pulled three years ago in a situation like this, and are able to work out that *this many* units, from you, is a real attack, but *that many* is just a feint. AIs are also fine with incomplete information, bluffing, and planning. You say Starcraft is like chess with a bit of poker – well, AIs are already far better than humans at the former, and not far behind on the latter.

        • Yeah, fair enough: AlphaGo and similar AI are probably at that level already. I was speaking more generally; I wouldn’t have called it a foregone conclusion that they would ‘absolutely thrash Starcraft players’ with restricted (human-level) APM, given the nature of the game, but it’s definitely a possibility.

          • I probably come from my own set of biases – as an ex-AI researcher and Go fanatic, I was convinced it was going to be another decade before computers worked out how to play Go. I was still convinced until half-way through Wednesday’s game. If computers can play Go (which requires behaviours that look a lot like intuition and creativity) I’m willing to believe they can play anything.

  • We all know the real reason why we can’t let it happen. If the AI learns all the tricks of Starcraft, we’ll have no one left to protect us when the AI uprising begins.

    Seriously though: it took many years before AI graduated from chess to Go. It will be a while before it graduates to the multilayered complexity of Starcraft. I am pretty sure Demis Hassabis has already acknowledged that.

  • AlphaGo in particular probably wouldn’t do particularly well at Starcraft, since it only knows about Go.

    I wouldn’t be surprised if an AI that could beat top Starcraft players is achievable today though, even if limited to human reaction time.

    • I think the whole point in playing against AI is that they shouldn’t be restricted to human ‘clicking speed’. The weakness of the AI is that they have no ‘intuition’ and are less adept at situational bluffing. A human would really have to out-strategise an AI opponent to make up for the slower reaction and clicking speed. I’d agree with your assessment that a Deep Blue for Starcraft shouldn’t be particularly difficult to construct. Starcraft strategy, whilst broader than chess strategy due to the types of units and terrain, is not that much deeper. The only depth is in the economy I think. It’d be fun to get some AI programmers and some retired Starcraft pros together to try and create the ultimate Starcraft AI.

      • Why would you expect the AI not to have intuition or be able to bluff? In both cases, you’re essentially talking about recognising patterns and acting on that information. That’s easily within the realm of current research.

        The reason for restricting the AI to human reaction time is to level the playing field so that if the AI wins it is due to better strategy rather than by being able to manipulate units in ways a human couldn’t. The idea is for the AI to act as a replacement for a human player rather than as a replacement for the game’s current enemy computer AI.

        • Good points. I saw it rather as a man vs machine head-to-head, with the player using human advantages and machine using machine advantages. Knowing that the AI works on patterns and probabilities, the human might mask true strategy behind a superficial ‘fool the AI’ strategy, that looks enough like an established pattern to trigger known responses in the AI. I find the concept of trying to ‘outsmart’ an AI fascinating.

  • AIs are YEARS off being able to beat the best SC players. Really not even close at the moment. The game is too complex.
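The APM cap debated in several of the comments above is straightforward to enforce in software. A minimal sketch in Python using a sliding 60-second window (the `APMLimiter` class and its interface are purely illustrative, not anything published by DeepMind):

```python
import time
from collections import deque


class APMLimiter:
    """Caps an agent's actions per minute over a sliding 60-second window."""

    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.timestamps = deque()  # times (seconds) of recent allowed actions

    def try_act(self, now=None):
        """Return True if an action is allowed at time `now`, else False."""
        now = time.monotonic() if now is None else now
        # Discard actions that have fallen out of the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True
        return False  # over the cap; the agent must wait


# Example: a cap of 2 APM for demonstration purposes.
limiter = APMLimiter(max_apm=2)
print(limiter.try_act(now=0.0))   # True  - first action
print(limiter.try_act(now=1.0))   # True  - second action
print(limiter.try_act(now=2.0))   # False - cap reached
print(limiter.try_act(now=61.0))  # True  - earliest action has expired
```

A real harness would also need to address the other constraint the commenters raise — reaction time — which a window cap alone does not model; the AI could still burst its whole budget in one second.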
