Google has challenged China’s top Go player to a series of games against its artificial intelligence technology.
It said the software would play a best-of-three match against Ke Jie, alongside other games against human players, in the eastern Chinese city of Wuzhen from 23-27 April.
Last year, the Google program recorded a 4-1 victory against one of South Korea’s top Go players.
One expert said that result had come as a surprise.
“A lot of AI researchers have been working on Go because it’s the most challenging board game we have,” said Calum Chace, author of Surviving AI.
“The conventional wisdom was that machines would ultimately triumph but it would take 10 years or so.
“The win was a big wake-up call for a lot of people, including many outside the AI community.”
Google’s AlphaGo software was developed by British computer company DeepMind, which was bought by the US search firm in 2014.
Its defeat of Lee Se-dol in March 2016 is seen as a landmark moment, comparable to IBM’s Deep Blue beating Garry Kasparov at chess in 1997.
Several of the moves AlphaGo made defied conventional wisdom but ended up paying off.
However, many Go aficionados did not recognise Mr Lee as the world’s top player at the time of the contest.
So, the new competition against 19-year-old Mr Ke – who is the current number one according to a popular but unofficial player-ranking system – has the potential to bring additional prestige to Google.
“We’ve been hard at work improving AlphaGo to become even more creative, and since playing Lee Se-dol, the program has continued to learn through self-play training,” a spokeswoman for DeepMind told the BBC.
“We intend to publish more scientific papers in the future, which will include further details of AlphaGo’s progress.”
Google added that Mr Lee would also be invited, but said it was not sure whether he would attend.
In addition to the games against Mr Ke, AlphaGo will also:
- play games involving one Chinese pro facing off against another, each of whom will have an AlphaGo-powered virtual teammate
- challenge a five-person team containing some of China’s top players, who will work together to try to beat the AI
Over the past year, DeepMind’s technology has also been used to find ways to reduce energy bills at Google’s data centres as well as to try to improve care in British hospitals.
A fresh wave of positive publicity could help Google find further uses for its tech.
“If it loses this match, a lot of people will be delighted to claim that Google and DeepMind have overpromised and that this is the kind of hype we always get with AI,” commented Mr Chace.
“But I wouldn’t have thought Google is taking a huge risk.
“[In recent months] it has been playing a lot of very, very good AlphaGo players online without disclosing it was an AI playing, and has won all of the games.
“Unless this Ke Jie is some magnitude better than Lee Se-dol, I would think they are confident of winning.”
What is Go?
Go is thought to have originated in China several thousand years ago.
Using black-and-white stones on a grid, players gain the upper hand by surrounding their opponent’s pieces with their own.
The rules are simpler than those of chess, but a player typically has a choice of 200 moves, compared with about 20 in chess – there are more possible positions in Go than atoms in the universe, according to DeepMind’s team.
That means a computer cannot win simply via brute force – searching through the consequences of millions of moves in seconds.
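The gap between the two games can be illustrated with simple arithmetic, using the rough branching factors quoted above (about 200 legal moves per turn in Go versus about 20 in chess) — the figures here are illustrative, not DeepMind's own calculations:

```python
# Rough game-tree sizes after n moves, using the approximate branching
# factors quoted in the article: ~200 legal moves per turn in Go, ~20 in chess.
GO_BRANCHING, CHESS_BRANCHING = 200, 20

for depth in (2, 4, 6):
    go_positions = GO_BRANCHING ** depth
    chess_positions = CHESS_BRANCHING ** depth
    # The ratio grows by a factor of 10 with every extra move played.
    print(f"after {depth} moves: Go ~{go_positions:.1e}, "
          f"chess ~{chess_positions:.1e}, "
          f"ratio {go_positions // chess_positions}x")
```

Even six moves in, the Go tree is a million times larger than the chess tree, which is why exhaustively searching ahead stops being a viable strategy.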
It can be very difficult to determine who is winning, and many of the top human players rely on instinct.
To prepare for its victory over Lee Se-dol, DeepMind trained its software on 30 million expert moves and then set the machine to play against itself millions of times to get a sense of what strategies worked.
The result was that some of the innovative moves AlphaGo made in its landmark match were described as being “beautiful” and highly unusual by observers.
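AlphaGo's actual system combined deep neural networks with Monte Carlo tree search, which is far beyond a short snippet. But the core self-play idea described above — a program playing against itself and nudging its move preferences towards whatever led to wins — can be sketched with a toy example. Everything here (the game of Nim, the value table, the learning rate) is an illustrative stand-in, not DeepMind's method:

```python
import random

random.seed(0)

HEAP = 10          # starting stones; whoever takes the last stone wins
MOVES = (1, 2, 3)  # legal moves: remove 1-3 stones

# value[(stones, move)] -> estimated win rate for the player making that move
value = {(s, m): 0.5 for s in range(1, HEAP + 1) for m in MOVES if m <= s}

def pick_move(stones, explore=0.1):
    """Mostly greedy on learned values, with a little random exploration."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda m: value[(stones, m)])

def self_play_game():
    """Two copies of the same policy play one game; return moves and winner."""
    stones, player, history = HEAP, 0, []
    while True:
        move = pick_move(stones)
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            return history, player  # took the last stone: this player wins
        player = 1 - player

def train(games=20000, lr=0.05):
    """Self-play loop: nudge each move's value towards the game's outcome."""
    for _ in range(games):
        history, winner = self_play_game()
        for player, stones, move in history:
            result = 1.0 if player == winner else 0.0
            value[(stones, move)] += lr * (result - value[(stones, move)])

train()
# From 10 stones the winning move is to take 2, leaving a multiple of 4.
best = max(MOVES, key=lambda m: value[(10, m)])
print("learned opening move:", best)
```

After enough self-play games, the table steers the program towards the mathematically correct opening without anyone ever telling it the rule — the same feedback principle, scaled up enormously, that let AlphaGo refine its play beyond its initial training on human games.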