World’s Best Go Player Describes Google’s AI As A God

This week, humanity’s last line of Go defence against machine learning sat down to see whether puny humans could hold their own.

Surprise: we can’t.

Ke Jie, the 19-year-old professional who currently holds the title of best Go player on the planet, kicked off his three-match exhibition against Google’s AlphaGo AI earlier this week. And the signs weren’t good: earlier in the year AlphaGo, playing under the online moniker “Master”, flogged Ke Jie, along with just about every other top professional, in a series of fast-paced online games.


In an indication of just how big a mark AlphaGo has left on the world of professional Go, Ke tried deploying some of the moves AlphaGo showed off during its online rampage in January. It’s certainly a far cry from the big talk from Ke last year, when he believed Lee Sedol would wallop AlphaGo 5-0.

AlphaGo’s victory over Ke, however, was a closely fought affair. The two were evenly matched in the opening exchanges, until AlphaGo (playing as white) opted to take Ke’s (playing black) corner on the 42nd move. And Demis Hassabis, a co-founder of AlphaGo’s creator DeepMind, told reporters that AlphaGo didn’t really have a statistical lead until after the 50th move.

But once it took hold, humanity was buggered. AlphaGo slowly extended its advantage until Ke opted to resign, just over four hours after the match began. Another DeepMind employee added during the post-match press conference that AlphaGo was configured primarily to win, rather than to maximise the size of its victory, and it played moves to minimise any risk of a comeback.

You can go through the match below via the recorded livestream, or move by move here.

Ke told reporters afterwards that AlphaGo was “like a god of a Go player”, compared to last year when the AI was “still quite humanlike”. To make matters even more depressing, Stephanie Yin, a 1-dan professional Go player, added that Google’s AI played moves that professionals couldn’t anticipate. “Some of the moves that AlphaGo played are not based off our human studies. Those moves completely overthrow the basic knowledge of Go.”

Ke will have two more chances later this week to restore humanity’s reputation against machine learning. The second match kicks off around 12.30PM on Thursday AEST, with the third and final match starting at the same time on Saturday.


  • Yeah, but AlphaGo is only a badass at playing Go. That’s all it does. Ke Jie can kick its arse at everything else. So can you and I. Humanity wins.

  • Just as with anything where there is a finite number of possible moves, the AI can calculate the probability of winning based on any given move being made. Instead of thinking several moves ahead, the AI can analyse virtually the whole game in advance. Making moves is merely cutting down on the number of calculations required for possible future moves. The AI can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.

    • And that’s why something like StarCraft sounds like a much better candidate for an AI system to flex its muscles. We know there are bots out there that can do some pretty nice micro work in SC, but to have a system that looks at the interface and has to play like a person would be a really great test, given how complex the interactions and thought processes are in that game.

      • A graphical interface is for humans; AI doesn’t need one, it’s just code. StarCraft is not that complex. There are only so many viable options for each race.

        • And that’s why I would say it would be a good challenge. StarCraft is horribly complex. Forcing it to play like a human is a good test, as you can’t see everything like in chess or Go. The AI would need to learn to play and predict based on normal visual cues from scouting, and deal with feints and misdirection. Using normal input systems like a mouse and keyboard would be another challenge, although emulating them to avoid direct physical control would be another huge feat.

          There is no one winning formula in Starcraft, it’s all rock paper scissors hidden by the fog of war.

    • Actually, the whole reason this is such a big deal is that simulating the entire game is impossible due to the number of possible outcomes. The oft-quoted line is that there are around 10^170 legal Go board positions, against roughly 10^80 atoms in the observable universe.
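For scale, here's a quick back-of-the-envelope sketch. The ~250 branching factor and ~150-move game length are rough ballpark figures, and note the oft-quoted 10^170 number counts legal board positions; the number of distinct games is vastly larger still:

```python
# Rough game-tree size for Go: with roughly 250 legal moves per turn over a
# roughly 150-move game, the number of possible games dwarfs even the ~10^170
# legal board positions (and the ~10^80 atoms in the observable universe).
branching, depth = 250, 150
games = branching ** depth
print(len(str(games)) - 1)  # exponent of the estimate: 359, i.e. ~10^359
```

No search algorithm can enumerate a space that size, which is why brute-force approaches that work for noughts and crosses (and, with heavy pruning, chess) were never an option for Go.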

      • You’d still agree that the AI can’t ‘intuit’ so the whole thing is just a calculation of probability based on given moves, possible next moves and historical patterns. It’s basically the same thing that humans do but faster, with more depth, and with an unfailing memory.

        • It kind of can. It doesn’t (can’t) calculate explicit probabilities based on a future move tree like a chess algorithm would; instead its network is trained to recognise board layouts as “good” or “bad” based on experience with past games. This is actually much more like a human player’s intuition, which is quite unlike past game AI.
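As a loose illustration of that "recognise good board layouts" idea, here is a toy value-network-style evaluation in Python. The layer sizes and random weights are invented for the sketch; AlphaGo's real evaluator is a deep convolutional network whose weights are trained on millions of positions:

```python
import numpy as np

# Toy sketch of a "value network": map a whole board to a single score in
# (-1, 1) instead of searching an explicit move tree. Untrained random
# weights here -- purely to show the shape of the computation.
rng = np.random.default_rng(0)

# A 19x19 Go board: +1 = black stone, -1 = white stone, 0 = empty.
board = np.zeros((19, 19))
board[3, 3] = 1     # a black corner stone
board[15, 15] = -1  # a white corner stone

# board -> hidden features -> scalar value
W1 = rng.normal(0, 0.1, (64, 19 * 19))
W2 = rng.normal(0, 0.1, (64,))

hidden = np.tanh(W1 @ board.ravel())
value = np.tanh(W2 @ hidden)  # negative favours white, positive favours black

print(float(value))
```

Training replaces the random weights with ones that make `value` track the actual chance of winning from a position, which is what makes the network's judgement feel like intuition rather than calculation.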

    • It’s really not that simple – the decision space for Go is just too vast. AI players don’t look much further ahead than human players do – in AlphaGo’s case I believe it’s been trained to recognise and play toward certain game states that give it the best chance of winning.

      • I think they said it was playing not to win but to ‘not lose’. Just like you can in noughts and crosses.
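For noughts and crosses that "not lose" guarantee can actually be proven by brute force: minimax over the full game tree shows perfect play from both sides always ends in a draw. A minimal sketch (this exhaustive approach is exactly what's impossible for Go):

```python
# Minimax for noughts and crosses. With perfect play from both sides the
# game is a guaranteed draw -- the algorithm literally plays "not to lose".
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for i, j, k in LINES:
        if board[i] is not None and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Best achievable outcome for X: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(cell is not None for cell in board):
        return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + [player] + board[i + 1:], nxt)
              for i in range(9) if board[i] is None]
    return max(scores) if player == 'X' else min(scores)

print(minimax([None] * 9, 'X'))  # 0 -> perfect play always ends in a draw
```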
