AlphaGo creator remains confident
Hassabis met with the JoongAng Ilbo at the company’s headquarters in London, where he expressed his strong faith in artificial intelligence and its potential to contribute to humanity.
“We’ve been training AlphaGo the way we trained it for the Fan Hui game,” said Hassabis. “But we’ve been using more time and processing, trying to make the neural network stronger and more powerful. Actually, the way we train AlphaGo is based on millions and millions of positions, both those of human experts and also the system’s self-play. Although Lee Se-dol’s games are of high quality, there are only a few to look at. So we don’t make specific use of Lee Se-dol’s own particular positions to tailor the system. We just try to make the strongest possible program.”
When told that Go professionals are betting Lee Se-dol will win the match, Hassabis agreed at first, but quickly added that this holds only if AlphaGo is judged by its October match against Fan Hui.
But Hassabis admitted that even AlphaGo hasn’t always been perfect. AlphaGo has played 500 matches against two other top Go-playing programs, Crazy Stone and Zen, and lost only one of them.
“AlphaGo is based on Monte-Carlo search, and Monte-Carlo search introduces randomness in its simulations. It is a search where it tries out different moves at different times and sees how they perform, and it will make its decision as a result of these random moves. So as it searches, there is a small chance it will make a mistake. It’s possible that over many, many simulations it will actually draw the wrong conclusion just because of random chance,” Hassabis said. “So over the course of 500 games it’s likely that just once it would make one very bad mistake. But as we search deeper and make the neural network even stronger, it becomes less likely that AlphaGo will make a mistake.”
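The effect Hassabis describes can be illustrated with a toy simulation. The two "moves" and their true win rates below are invented for illustration; they stand in for candidate moves whose strength a Monte-Carlo search must estimate from random playouts. With few playouts, sampling noise occasionally makes the search pick the worse move; with many, that becomes rare.

```python
import random

# Hypothetical setup: move "A" is genuinely better (60% wins) than "B" (55%).
TRUE_WIN_RATE = {"A": 0.60, "B": 0.55}

def rollout(move):
    """One random playout after `move`: returns 1 for a win, 0 for a loss."""
    return 1 if random.random() < TRUE_WIN_RATE[move] else 0

def monte_carlo_choice(n_simulations):
    """Pick the move whose random playouts scored best."""
    scores = {m: sum(rollout(m) for _ in range(n_simulations))
              for m in TRUE_WIN_RATE}
    return max(scores, key=scores.get)

def error_rate(n_simulations, trials=2000):
    """How often sampling noise makes the search pick the worse move."""
    wrong = sum(monte_carlo_choice(n_simulations) != "A" for _ in range(trials))
    return wrong / trials
```

With a handful of playouts per move the error rate is substantial; with hundreds it shrinks toward zero, mirroring Hassabis’s point that deeper search makes a blunder ever less likely, though never impossible.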
Hassabis said DeepMind chose Go because it’s the “most challenging amongst all classic board games.”
“Chess is mostly about calculation. Computers showed 20 years ago that they could be better than the best human,” Hassabis said. “Go is more of an intuitive game and you need your intuition to play. Computers traditionally have been less good at intuitive tasks. Go hasn’t been cracked so far by a computer and that’s the other reason why we chose this challenge. In Go, you need to combine pattern recognition with a plan.”
Hassabis said he agrees that artificial intelligence will continue to develop.
“If we look at the rate of progress and what we’ve seen in computers with AlphaGo coming on the scene and project that forward, I think it would be hard to say it will be very long until computers in general become stronger than humans,” said Hassabis. “Not just AlphaGo, but looking at the success in deep learning and other areas. Machine learning and artificial intelligence research progress is very rapid at the moment. It seems to be only a matter of time now until we’ll see a program that’s stronger than humans.”
However, he doesn’t agree that artificial intelligence will pose a threat to mankind.
“I think technology itself is neutral, and how humans decide to use it will determine whether it becomes good or bad,” Hassabis said. “And I think AI is no different. It is a tool that we as humanity and society can use for either very good things or very bad things. We want to use it for positive things, science and health, and maybe things like robots that can care for people, rather than for military applications.”
He believes that the practical applications of AlphaGo are limitless.
“It uses deep learning. It uses reinforcement learning. It uses Monte-Carlo tree search. All three are very general-purpose algorithms,” Hassabis said. “So we hope to extend AlphaGo and some of the components that we have built for AlphaGo in order to use them for all kinds of real-world applications in science, health care and also in robotics.”
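How these pieces fit together can be sketched on a toy game. Everything below is illustrative only and is not AlphaGo’s actual architecture: the trained policy and value networks are replaced with trivial hand-written stand-ins (a uniform prior and a random rollout), wired into a simplified PUCT-style Monte-Carlo tree search over a last-stone-wins taking game. All function names here (`policy_prior`, `rollout_value`, `mcts_move`) are made up for this sketch.

```python
import math
import random

def legal_moves(stones):
    """Take 1 or 2 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2) if m <= stones]

def policy_prior(stones):
    """Stand-in for the policy network: a uniform prior over legal moves."""
    moves = legal_moves(stones)
    return {m: 1 / len(moves) for m in moves}

def rollout_value(stones):
    """Stand-in for the value network: a random playout (Monte-Carlo rollout).
    Returns 1 if the player to move from this position wins, else 0."""
    turn = 0  # 0 = the player to move now
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        turn = 1 - turn
    return 1 if turn == 1 else 0

def mcts_move(stones, n_sims=200, c=1.4):
    """One level of PUCT-style search: the prior biases which moves get
    explored, rollouts estimate their value, visit counts pick the answer."""
    moves = legal_moves(stones)
    prior = policy_prior(stones)
    visits = {m: 0 for m in moves}
    wins = {m: 0 for m in moves}
    for _ in range(n_sims):
        total = sum(visits.values()) + 1
        def score(m):
            q = wins[m] / visits[m] if visits[m] else 0.5
            return q + c * prior[m] * math.sqrt(total) / (1 + visits[m])
        m = max(moves, key=score)
        # Opponent moves next, so our win is 1 minus their rollout value.
        wins[m] += 1 - rollout_value(stones - m)
        visits[m] += 1
    return max(moves, key=lambda m: visits[m])
```

In this game, leaving the opponent a multiple of three stones is winning, so from four stones the search learns to take one; swapping in learned networks for the two stand-ins is, at a very high level, the kind of generality Hassabis describes.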
For the immediate future, the CEO of DeepMind, who has Ph.D.s in neuroscience and computer science, is hoping to once again tap into the human mind for inspiration.
“For future work we will look at adding memories, like the episodic memories humans have, to our systems,” Hassabis said. “We think a lot about how humans learn and get inspiration from that for our algorithms.”
BY SOHN HAE-YONG [firstname.lastname@example.org]