|World Champion Go Player Challenges AlphaGo|
|Written by Sue Gee|
|Tuesday, 07 June 2016|
Ke Jie, currently ranked as the best human Go player in the world, is hoping to prove himself the best Go player overall by taking on Google DeepMind's AlphaGo.
This news, in which the proposed match is described as the "ultimate man-machine war", comes from the Xinhua News Agency, the official press agency of the People's Republic of China, reporting a statement made by Yang Jun'an, an executive member of the International Go Federation. The Chinese press release states that no date or location has been fixed, but that the match is expected to take place within the year. This may, however, be premature since Demis Hassabis, head of the DeepMind project, has already posted a tweet that casts doubt on it happening:
The idea that an official decision will be announced on Twitter seems bizarre, but it does seem to be the method of communication that Hassabis prefers.
AlphaGo has already beaten world-class Go players, but when it triumphed over Lee Sedol in March 2016, see AlphaGo Beats Lee Sedol Final Score 4-1, Lee was only ranked number 4.
Ke, who is currently ranked number 1, became a Go pro aged 10 and is still only 18 years old. He was initially dismissive of AlphaGo's abilities. Posts on his Weibo account after Lee lost the first game of the five-game series read:
“Even if AlphaGo can defeat Lee Se-dol, it can’t beat me."
“I don’t want to compete with AlphaGo because judging from its matches with Lee, AlphaGo is weaker than me. I don’t want AlphaGo to copy my style."
After AlphaGo had taken three games in a row from Lee, Ke changed his attitude:
“AlphaGo was perfect and made no mistake. If the conditions are the same, it is highly likely that I can lose.”
AlphaGo's win was indeed a game changer, not just for Go but for AI in general, see Why AlphaGo Changes Everything. It left us wondering how the DeepMind project had managed to make such rapid progress when it was universally accepted that Go was the most complex of games, one not amenable to the brute-force search used by chess-playing programs.
Although the number of legal positions in Go was known to be astronomically large, the exact count for the 19 by 19 board was only recently computed as being:
2081681993819799846 9947863334486277028 6522453884530548425 6394568209274196127 3801537852564845169 8519643907259916015 6281285460898883144 2712971531931755773 6620397247064840935
According to Peter Norvig, Demis Hassabis, head of Google's DeepMind team, has pointed out that:
There are more possible Go positions than there are atoms in the universe.
If you find this difficult to believe, Norvig goes on to tell us:
A reduced 13 × 13 Go board also has about as many positions as the number of atoms in the universe; the full 19 × 19 board has 10^90 times more. So saying that there are more Go positions than the number of atoms in the universe is a bigger understatement than saying the national debt is more than a penny.
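Norvig's comparison is easy to check with Python's arbitrary-precision integers. The sketch below uses the exact count quoted above and the usual order-of-magnitude estimate of 10^80 atoms in the observable universe (an assumption for illustration, not a figure from the article):

```python
# The exact count of legal 19x19 Go positions, as quoted above
# (digits grouped exactly as in the article).
legal_positions = int(
    "2081681993819799846" "9947863334486277028" "6522453884530548425"
    "6394568209274196127" "3801537852564845169" "8519643907259916015"
    "6281285460898883144" "2712971531931755773" "6620397247064840935"
)

# Common order-of-magnitude estimate for atoms in the observable universe.
atoms_in_universe = 10 ** 80

# The count has 171 digits, i.e. roughly 2.1 x 10^170.
print(len(str(legal_positions)))

# Dividing by the atom estimate leaves a 91-digit number, i.e. ~10^90,
# matching Norvig's "10^90 times more" for the full board.
print(len(str(legal_positions // atoms_in_universe)))
```

Running this confirms the quoted claim: the position count exceeds the atom estimate by a factor of roughly 10^90.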
Recently we reported on Google's secret weapon that has enabled it to create a machine capable of learning how to play Go like a human player while at the same time surpassing human capabilities. We already knew that AlphaGo used Google's TensorFlow machine learning framework; what we hadn't known about was its use of Tensor Processing Units (TPUs), custom Application-Specific Integrated Circuits (ASICs) tailored to TensorFlow. TPUs accelerate machine learning, overcoming the bottleneck of the huge computing power it takes to train and use large deep neural networks.
The fact that there's a picture of a Go board on the side of this TPU rack gives the clue that AlphaGo's success might just be based on both hardware and software.
Picture Credit: Goban1
|Last Updated ( Tuesday, 11 April 2017 )|