Most people are familiar with the story and background behind Deep Blue, and AIs have been mastering quizzes, board games and computer games for decades. But one game has always proven trickier than the rest: the ancient board game of Go.
Go’s a fascinating game, and the astronomical number of possible board positions makes things tricky for a computer. But Google, of all companies, has finally conquered that last barrier by slapping the reigning three-time European Go champion around the board.
It should be stressed that Fan Hui isn’t just the current European Go champion, but the three-time reigning champion. According to Google’s Research blog, Hui has devoted his life to the ancient Chinese conundrum since he was 12.
Didn’t stop AlphaGo — Google’s plain name for their bot — from flogging Hui 5-0, mind you.
Flush with success, Google’s pitting AlphaGo against an even stronger foe: Lee Sedol. Sedol’s been considered the best Go player in the world over the last decade, so it’ll be thrilling to see how well AlphaGo fares against what is clearly a substantial step up in opposition.
“We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI,” Google bragged. “However, the most significant aspect of all this for us is that AlphaGo isn’t just an ‘expert’ system built with hand-crafted rules, but instead uses general machine learning techniques to allow it to improve itself, just by watching and playing games.”
“While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems. Because the methods we have used are general purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis.”
Personally, it’d be cool if Google took AlphaGo and released it as a free mobile app. But then I watched the entirety of Hikaru No Go — it’s an anime about, you guessed it, Go — and its specials three times over, so maybe that’s just me.
Comments
3 responses to “Google Created An AI Smart Enough To Beat A Professional Go Player”
I keep telling Westerners that Go is an incredibly complex game, and that in terms of possibilities, it makes chess look kind of easy. With 361 individual positions on the board, and each stone (or piece) affecting every other stone and position on the board, it is ridiculously complex at the topmost level. You can play a move which has implications 100 moves down the track.
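The commenter’s point about scale checks out with a back-of-the-envelope calculation: each of the 361 points can be empty, black, or white, which gives 3^361 board arrangements as a loose upper bound (most are illegal, but the legal count is still around 10^170, versus a chess state space usually estimated at roughly 10^47). A quick Python sketch:

```python
import math

BOARD_POINTS = 19 * 19   # 361 intersections on a full-size Go board
CHESS_SQUARES = 8 * 8    # 64 squares on a chessboard

# Loose upper bound on Go board arrangements: every point is empty,
# black, or white. Most arrangements are illegal, but the true count
# of legal positions is still on the order of 10^170.
go_upper_bound = 3 ** BOARD_POINTS

# Count decimal digits via logarithms rather than printing the full integer.
digits = math.floor(BOARD_POINTS * math.log10(3)) + 1
print(f"3^361 has {digits} digits")   # → 3^361 has 173 digits
```

A 173-digit number of arrangements is the reason brute-force enumeration, which worked for chess, was never on the table for Go.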
Another thing Westerners struggle with in Go is that it is not about capturing stones. It is about controlling more territory than your opponent. A player can have captured more stones and still lose if they control less area. Chess is all about killing the King: it does not matter what else happens, if you force your opponent into a position where their king has to die (checkmate) before yours, you win.
In Go, every piece is theoretically the same. The position of each stone in relation to each other stone and board position changes them, but in a very subtle way that is hard to assign a specific numeric value to.
It is just very hard to make a good AI for. I’m impressed they have got this far.
One last thing that a lot of Westerners do not know about Go. It is huge in the East. Professional Go players can earn over $1 million per year. South Korea has a television channel that shows Go stuff 24 hours a day. The professional institutions in charge of this are huge. It’s also several centuries older than chess, and as I alluded to by the salary, you can make a full time career out of it in the right country with enough skill.
I love it, even though I am only a lowly amateur. Hikaru no Go, the manga / anime mentioned at the end of the article, has more about Go as a game explained from an entertainment point of view. 🙂
What’d be most interesting is seeing what hardware they have behind this program. Chess programs built around logic calls to massive move databases were the foundation of successful chess engines, and they largely boil down to throwing computing power at the problem. If the Goog has tackled Go not only by implementing machine learning, but by doing it with a hardware-independent approach, that’d be revolutionary.
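For a feel of how Go engines searched before the deep-learning era, here’s a toy sketch of Monte Carlo playouts — the idea behind pre-AlphaGo engines, and the “Monte Carlo” half of the Monte Carlo tree search that AlphaGo pairs with its neural networks. Tic-tac-toe stands in for Go here, and everything (board encoding, playout count) is illustrative, not AlphaGo’s actual method:

```python
import random

# Tic-tac-toe board: a sequence of 9 cells holding 'X', 'O', or None.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def playout(board, player):
    """Play uniformly random moves to the end; return the winner or None."""
    board = list(board)
    while True:
        w = winner(board)
        if w or not legal_moves(board):
            return w
        board[random.choice(legal_moves(board))] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, playouts=200):
    """Pick the move whose random playouts win most often for `player`."""
    opponent = 'O' if player == 'X' else 'X'
    scores = {}
    for move in legal_moves(board):
        nxt = list(board)
        nxt[move] = player
        wins = sum(playout(nxt, opponent) == player for _ in range(playouts))
        scores[move] = wins
    return max(scores, key=scores.get)
```

With the board `('X','X',None,'O','O',None,None,None,None)` and X to move, `best_move` picks cell 2 to complete the top row, since every playout from an already-won position counts as a win. The appeal of this family of methods is exactly the commenter’s point: evaluation comes from simulation and statistics rather than hand-built position databases, so it ports across hardware — though the strength you get still scales with how many playouts you can afford.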
This, however, is probably why you’ll never see a mobile app. Chess programs broke the glass ceiling of computing power at the turn of the millennium and now run on mobile. A Go board has far more points than a chessboard has squares, and the game is vastly more complex, so they’ve probably run the program on a serious rig.
Just pointing out that I don’t think glass ceiling means what you think it does 😛
Also, I would think a learning AI would be considerably more efficient and processor-friendly than a database-crunching giant like Deep Blue. I won’t bet on it, but I wouldn’t be surprised if Google’s Go algorithm worked fine on modern mobile phones.