StarCraft Could Be The Next Game AI Destroys Us At

My experience with StarCraft was probably the stupidest possible: I really liked the books. Sure, I played the games, but mostly I played single-player because I was not very good, and enjoyed the story. But for some reason, it was Sarah Kerrigan’s tale told through the novelisation that resonated with me the most.

Image: Mike Prosser/Flickr/Screenshot

Anyway, I was, and still am, very dumb. Others actually enjoy playing the complex real-time strategy game (against human players) that pits alien races against one another. But in the future, players may have to go up against AI opponents, too. That’s because Facebook is now releasing an enormous AI-training data set, consisting of over 65,000 StarCraft replays broken into 1.5 billion frames and totalling 365GB of data, the largest set of StarCraft replays yet by a factor of 10. Meanwhile, Google’s DeepMind and Blizzard are releasing tools to train AI on their own large StarCraft 2 data set.

Facebook researchers published a paper on their data set Monday on the arXiv preprint server, explaining that StarCraft is a complex game to learn, but one for which expert playthroughs exist (since there are human experts). There’s a lot of data, and a lot of scenarios, to train a neural network on. AI could potentially use this data to learn to classify different gameplay strategies, improve gameplay without a reward signal, predict how games will unfold, or learn to play given only a demonstration and no instructions.

The researchers don’t specify whether the data is from StarCraft, StarCraft: Brood War or StarCraft 2. But seemingly coincidentally today, Google’s DeepMind and the game’s creator Blizzard released tools to train artificial intelligence on StarCraft 2, as reported by The Verge, after announcing a partnership last year. Oriol Vinyals, a Google DeepMind researcher, explained to them that StarCraft‘s “fog of war”, which hides parts of the map you haven’t explored, requires the computer to remember the enemy’s locations and keep scouting, just as human players must.

I’ve reached out to Facebook for comment, given that Google and Blizzard released their tools and data set a day later, and to ask whether and how the two data sets are related. I had not heard back at time of writing.

It might not be long before an AI kicks your arse at the online multiplayer games you love. But Byun Hyun Woo, the world champion StarCraft 2 player, was sceptical, as reported by MIT Tech Review earlier this year. He told them: “I don’t think AI can beat [a professional player], at least not in my lifetime.” But AI has already proved human players wrong at other games, such as Go.

And on top of that, researchers at the IT University of Copenhagen, Denmark, are already using lessons learned from AlphaGo to train AI on 630,000 moves from 2000 StarCraft games, reports New Scientist.

Anyway, you should read the books because they’re very good.

[arXiv, The Verge, Tech Review]


  • I think the main issue is whether the AI is going to be limited in some way with respect to its own APM (actions per minute), because I think it will eventually learn ‘how’ to play the game, and if it is allowed to make near-infinite moves at the same time, then yes, it will start to dominate human players.

    Part of the difficulty of SC will be that human players only have two hands and one brain. If a computer can discern what type of army it needs to fight you and win or neutralise you in a specific location, while simultaneously taxing a human opponent’s ability to multitask perfectly in different locations, the human will just never keep up.

    In any event, I suspect its dominance over SC is much further away than Go because of the extreme number of variables that real-time maneuvering and the fog of war produce.

      • Don’t just believe any article comment. None of these journalists actually looks into the details.

        The StarCraft API allows the AI to flat-out cheat. For example, there’s a SC2 AI that micros 100 zerglings to win against 20 siege tanks, but it pulls the siege tank’s target from the engine before the shot even fires, letting it move units away to avoid the splash. Or the SC1 AI that can calculate how many mutalisks are needed to overwhelm defensive emplacements, but it relies on the Brood War API leaking fog-of-war information about their exact location and turret range, allowing it to build up the required number, then circle around and attack from the best spot, all without losing a single muta to scouting.

    • Actually, the default StarCraft AI could already make simultaneous moves from day one, when the game was released in 1998 (and of course it is unrestricted by APM). But it’s not hard for even noobs to overcome.

      Whether the AI can combine that with absolutely perfect micro and strategy is another story.

      • Yes, it -can-, but it doesn’t. Go play the default AI and watch it not micro units at all as just one example of how it doesn’t make use of its ‘potential’ APM.

        My point is, if the AI can figure out HOW to micro, macro, conduct drops at one location while defending another, scout etc AND retains this unlimited APM, then a human has no chance.

        • Actually, it does. The easiest way to check is to use the editor and change the campaign AI to a custom AI. You’ll see them sending units from multiple bases at the SAME time. While it doesn’t micro units in combat, that’s only because it was not programmed to.

          • I don’t know why you’re making this point. You’re arguing semantics. Of course it moves multiple units at the exact same time. It doesn’t do so in any meaningful way. So if that was your point, sure. But that’s completely irrelevant to the point I’m making and made in my first post.

          • It’s because you focus on APM as if that’s the primary factor. I see far too many players just mindlessly clicking for no reason, purely to get their APM up.

            For evaluating AI, it’s also completely meaningless.

          • It’s meaningful, and a primary factor, when it comes to competition against a human opponent: competitive StarCraft tells us that the best players execute something in the vicinity of 300-350 APM in games. These guys are not mindlessly clicking to boost their APM. They are using it effectively.

            So, for the third time, if a computer learns the best way to play the game and then can effectively employ everything it learnt by doing everything beneficial at exactly the same time then it will out-multitask the human opponent with ease. e.g. perfect muta stack micro of 4 control groups of muta+1 overlord simultaneously destroying your base, your expansion mineral line and the edges of your army. No human can do this, and no human could keep up with the multitasking required to defend against it being done perfectly in so many locations.

  • Yes, I definitely believe it’s way harder than anything A.I has come across before. I also agree that A.I should be limited in the number of clicks it can make per second, so it competes on an overall tactical level rather than a micro-management one. All the major decisions the A.I has to make to compete at a higher level are why it’s so attractive for A.I research. And so much data already exists for this game. With so much data, we are unsure how many extra neural net layers are needed beyond the basic ones. If you were to play 65,000 games and remember them all, there could already be a winning strategy for every type of match and every counter. We can’t really comprehend scales like that easily, or what they might entail. We also don’t know if a lot of pro players create winning strategies just from experiencing and playing StarCraft, with less imagination needed. Pro players achieve this faster, but A.I learns in completely different ways to humans. Ultimately, I don’t think anyone really knows how hard, impossible or easy it will be for an A.I to finally beat a world champion on fair terms.

    • Yes, but AIs fundamentally don’t work by APM. They work by script, giving direct commands to the engine and replicating that behaviour for every unit.

      For example, a zergling siege-tank splash-avoidance AI would be coded like this:
      Event: unit starts attacking (or something like it, I don’t remember what it’s called)
      Conditions: attacking unit is siege tank (sieged)
      And in the actions you put:
      “pick each unit in region (convert circle to region; point; targeted unit, 2) and do: order unit to move targeting (point; targeted unit) offset by 2 towards angle between position of (targeted unit) and (picked unit) degrees.”
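That trigger logic can also be sketched in ordinary code. Below is a minimal Python sketch of the same splash-dodge idea (all names and numbers are hypothetical, not the real StarCraft or SC2 API): when a sieged tank locks onto a target point, every friendly unit inside the splash radius steps directly away from that point.

```python
import math
from dataclasses import dataclass

SPLASH_RADIUS = 2.0   # hypothetical splash radius, in map units
DODGE_DISTANCE = 2.0  # how far a unit steps out of the blast

@dataclass
class Unit:
    x: float
    y: float

def dodge_splash(units, target_x, target_y):
    """On 'tank starts attacking (target_x, target_y)': move every unit
    inside the splash radius directly away from the target point."""
    for u in units:
        dx, dy = u.x - target_x, u.y - target_y
        dist = math.hypot(dx, dy)
        if dist > SPLASH_RADIUS:
            continue  # outside the blast, leave it alone
        if dist == 0.0:
            dx, dy, dist = 1.0, 0.0, 1.0  # standing on the target: pick any direction
        u.x += dx / dist * DODGE_DISTANCE  # step along the target->unit direction
        u.y += dy / dist * DODGE_DISTANCE

lings = [Unit(1.0, 0.0), Unit(5.0, 5.0)]
dodge_splash(lings, 0.0, 0.0)
# the first ling steps out to (3.0, 0.0); the second was out of range and stays put
```

A scripted bot only wins this exchange because it reads the tank’s target from the engine before the shot lands, which is exactly the kind of cheating described above.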

      • New A.I techniques don’t work like this anymore. These new neural net algorithms are memory based. You program in all the inputs and outputs they can use, then it works everything else out by itself, based on either trial and error or searching vast amounts of data. There is no scripting anymore for this type of A.I.
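A toy illustration of that kind of trial-and-error learning (a simple multi-armed bandit over made-up opening win rates, far simpler than the neural nets the article describes): the agent is never told which opening is best, only whether it won each game.

```python
import random

random.seed(0)

# Made-up true win probabilities for three openings. The agent never sees
# these numbers -- it only observes win/loss outcomes, one game at a time.
OPENINGS = {"rush": 0.30, "expand": 0.55, "tech": 0.45}

counts = {name: 0 for name in OPENINGS}
wins = {name: 0 for name in OPENINGS}

def win_rate(name):
    # Optimistic estimate for untried openings, so each gets explored at least once.
    return wins[name] / counts[name] if counts[name] else 1.0

def pick_opening(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known opening, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(OPENINGS))
    return max(OPENINGS, key=win_rate)

for _ in range(5000):  # 5000 simulated games of pure trial and error
    opening = pick_opening()
    counts[opening] += 1
    if random.random() < OPENINGS[opening]:  # the environment decides, not a script
        wins[opening] += 1

best = max(OPENINGS, key=win_rate)
# over a run this long the learned favourite is almost always "expand"
```

Nothing in the loop encodes a strategy; preferences emerge entirely from observed outcomes, which is the contrast with the hand-scripted trigger above.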

  • I would sure love to have a good AI to play against. Most games have really stupid AI, or rely on cheats to buff the AI’s difficulty just to compete with humans.

    Last time I looked at the StarCraft AI project (in 2015), I remember the problem was that the StarCraft API actually gave the AI the ability to see through the fog of war. I don’t mind that too much, as long as it results in the AI playing better. However, it’s unfair to compare it against humans when one side can cheat and the other can’t.
