Machine 2, human 0 in Go series


Lee Se-dol, left, a Go champion competing against Google DeepMind’s computer program AlphaGo, and Demis Hassabis, CEO of Google DeepMind, speak to the press after the second match in a series of five at the Four Seasons Hotel in central Seoul on Thursday. [NEWSIS]

Google DeepMind’s computer algorithm AlphaGo once again beat world Go champion Lee Se-dol on Thursday, winning the second game of a five-game match pitting a human against one of the most advanced artificial intelligence systems yet built.

“I am almost speechless,” Lee said at a press conference after the high-profile match. “I was completely defeated. I had zero moments that made me think I was ahead of AlphaGo. I found nothing particularly strange about the system this time, though we could spot some problems in the first round. AlphaGo played a perfect game.”

Yoo Chang-hyuk, a 9-dan professional player and commentator, also said he was “surprised” to see Google DeepMind’s self-evolving program play “so impeccably” on Thursday.

After losing the first two games, Lee admitted it would not be easy for him to win the third game, scheduled for tomorrow.

“I will do my best,” he said.

Should the man-made system beat the 33-year-old board game master a third time, it will win the match. But the “match of the century” will go on through all five games, because a key goal of the Google DeepMind Challenge Match is to gather data and test how much AlphaGo has improved in the five months since it beat European champion Fan Hui five games to none.

AlphaGo was given black stones to start the second match at 1 p.m. Thursday, which took place at the Four Seasons Hotel in central Seoul. The atmosphere was tense for almost four and a half hours.

Unlike the previous day, the players used the entire four-hour play time and the match went into overtime. Lee exhausted his two-hour limit before AlphaGo did, a reversal from the day before, when AlphaGo was the one spending more time on its moves. Lee placed his stones with the utmost prudence, making extremely safe choices with as few provocations or errors as possible. AlphaGo followed a similar tactic.

The Korean master of Go, or baduk, began using byoyomi, Japanese-style overtime periods, at 4:41 p.m., and AlphaGo followed suit at 5:18 p.m. Each player was given three one-minute byoyomi periods; once in overtime, a player must make each move within one minute. The match ended at 5:26 p.m. when Lee resigned, as he had a day earlier, seeing no way to beat the computer system after 211 moves.
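The time control described above can be modeled as a small clock. This is a minimal sketch, assuming the standard Japanese byoyomi convention that a period resets whenever a move fits inside it and is used up for good when overrun; the names `ByoyomiClock` and `spend` are illustrative, not from the source:

```python
from dataclasses import dataclass

@dataclass
class ByoyomiClock:
    """One player's clock under the match's time control as described:
    two hours of main time, then three one-minute byoyomi periods."""
    main_seconds: float = 2 * 60 * 60
    periods: int = 3
    period_seconds: float = 60.0

    def spend(self, move_seconds: float) -> bool:
        """Charge one move's thinking time. Returns False on a loss by time."""
        if self.main_seconds > 0:
            self.main_seconds -= move_seconds
            if self.main_seconds >= 0:
                return True
            # The move spilled past main time into overtime.
            move_seconds = -self.main_seconds
            self.main_seconds = 0.0
        # A period survives (and resets) if the move fits inside it;
        # overrunning a period consumes it permanently.
        while move_seconds > self.period_seconds:
            self.periods -= 1
            move_seconds -= self.period_seconds
            if self.periods <= 0:
                return False  # exceeded the final period
        return True
```

Under this model, Lee entering byoyomi at 4:41 p.m. simply means his `main_seconds` hit zero first; from then on, every move had to land within a one-minute period.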

The fact that artificial intelligence defeated one of the best Go players on Wednesday sent a shock wave across the Go community and among the merely curious around the world. Demis Hassabis, founder and CEO of the Google-acquired DeepMind, compared AlphaGo’s victory to “landing on the moon.”

Its victories in the first two matches show AlphaGo has evolved more quickly than expected. An article in the journal Nature in January described AlphaGo as having a skill level of 4 to 5 dan when it was playing against European Go champion Fan Hui last August. The algorithm has leapfrogged to the ranks of 9 dan in just five months, according to Kim Dae-shik, a professor of brain engineering at the Korea Advanced Institute of Science and Technology.

The next challenge for AlphaGo will be to beat the best player of StarCraft, a highly complicated military science-fiction strategy game from Blizzard Entertainment in the United States, according to Jeff Dean, a Google senior fellow in the systems infrastructure group.

DeepMind chose Go, a 3,000-year-old game first played in China, because it is regarded as much harder than chess. There are an estimated 10 to the power of 700 possible ways the game can be played, which scientists say is more than the number of atoms in the universe. That is why the five-day match between Lee and AlphaGo has been dubbed a “grand challenge” for AI.
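For scale, the article's figure can be checked directly against the atom count. The 10^80 estimate for atoms in the observable universe is a commonly cited figure, not from the article:

```python
# The article's estimate of possible Go games, as an exact integer.
go_games = 10 ** 700

# Assumption: the commonly cited ~10^80 atoms in the observable universe.
atoms = 10 ** 80

print(go_games > atoms)  # True: the game space dwarfs the atom count
# Even assigning one game to every atom leaves 10^620 games per atom.
print(go_games // atoms == 10 ** 620)  # True
```

This combinatorial explosion is exactly why exhaustive search, which works for simpler games, is hopeless for Go.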

AlphaGo, a general-purpose self-learning algorithm, works differently from traditional AI methods, which build a search tree over all possible positions. The DeepMind algorithm combines an advanced tree search with deep neural networks of two different types.

One type, called the “policy network,” chooses the next move to play, while the second type, the “value network,” predicts the winner of the game. The networks process their input, a description of the Go board, through 12 network layers containing millions of neuron-like connections.
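The division of labor between the two networks can be sketched in toy form. Everything below is illustrative: the board is shrunk to nine points, and `policy_network` and `value_network` are stubs standing in for the real 12-layer networks; only the shape of the interaction, with priors proposing moves and values scoring the resulting positions, reflects the article's description:

```python
BOARD_POINTS = 9  # assumption: a toy 3x3 board stands in for the 19x19 grid

def legal_moves(board):
    """Moves are unoccupied points; `board` is a frozenset of taken points."""
    return [p for p in range(BOARD_POINTS) if p not in board]

def policy_network(board):
    """Stub 'policy network': a prior over legal moves (uniform here;
    the real network learns to predict strong moves)."""
    moves = legal_moves(board)
    return {m: 1.0 / len(moves) for m in moves}

def value_network(board):
    """Stub 'value network': a deterministic toy score in [-1, 1]
    (the real network estimates who will win from this position)."""
    return ((hash(tuple(sorted(board))) % 201) - 100) / 100.0

def choose_move(board, prior_weight=0.5):
    """One-ply search in the AlphaGo spirit: the policy network proposes and
    weights candidate moves, the value network scores each resulting position."""
    priors = policy_network(board)
    best_move, best_score = None, float("-inf")
    for move, prior in priors.items():
        child = board | {move}
        # The opponent is to move in `child`, so negate its value for us.
        score = -value_network(child) + prior_weight * prior
        if score > best_score:
            best_move, best_score = move, score
    return best_move

move = choose_move(frozenset())
print(move in range(BOARD_POINTS))  # True: a legal move on the toy board
```

In the full system this one-ply loop is replaced by a deep tree search that revisits promising branches many times, but the roles are the same: the policy network narrows the choices and the value network cuts the search short by judging positions without playing them out.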

The next game is scheduled for 1 p.m. tomorrow, and the final two will take place at the same hour on Sunday and Tuesday.

BY SEO JI-EUN [[email protected]]