
Source: NEWS from College of Natural Sciences

First the robots successfully challenged the chess masters, then the Jeopardy champions. Now comes a match-up for a new generation.

AlphaGo, a program that plays what many consider the most difficult of board games, Go, has just won the first of five matches against the world's top human player. The series is scheduled to continue through March 12. Developed by Google's DeepMind subsidiary, AlphaGo has already beaten the European Go champion. A few days before the latest competition, we asked Risto Miikkulainen, an artificial intelligence researcher at the University of Texas at Austin, for his thoughts on this historic contest.

Q: So is this just the next inevitable step up from computers that can beat humans at games like chess?

It's very different from playing chess. Those other systems were based on old-fashioned artificial intelligence, where you search all the possible moves to find the best ones. Go was always a huge challenge because its search space is too big to evaluate in a reasonable amount of time. There are more possible board positions than atoms in the universe. So AlphaGo is completely different. Like humans, it uses pattern recognition.

Q: Do I hear a bit of admiration in the way you talk about this advance?

It's a huge step forward. Until very recently, we thought mastering Go would be decades away. That Google could do it so soon is remarkable and a bit surprising.

Q: What kinds of insights were needed to make such a robust program?

Google brought together two ideas in artificial intelligence that have actually been around since the 1990s, called Monte Carlo tree search (MCTS) and deep learning. MCTS is a way of taking a really big search space and narrowing it down to a manageable search. Instead of playing out every possible scenario from start to finish, it tests a random sample of moves and plays them out to maybe 20 moves in the future. Deep learning is similar to the way a human baby learns. Instead of being pre-programmed with a bunch of rules and facts, the program observed many, many games played by human experts and, over time, discovered useful patterns. If I were working on this problem, I would have used similar techniques.
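To make the sampling idea concrete, here is a minimal, hypothetical sketch of Monte Carlo rollouts in Python. This is not AlphaGo's implementation: the `GameState` interface (with `legal_moves`, `play`, and `score` methods) is assumed purely for illustration, and real MCTS additionally builds a search tree with a selection policy, while AlphaGo also uses deep neural networks to propose moves and evaluate positions.

```python
import random

def rollout_value(state, move, depth=20, n_rollouts=50):
    """Estimate a move's value by playing random games to a limited depth.

    Assumes a hypothetical GameState with legal_moves(), play(move) -> new
    state, and score() -> numeric evaluation for the player to move.
    """
    total = 0.0
    for _ in range(n_rollouts):
        s = state.play(move)
        for _ in range(depth):          # cut each playout off ~20 moves ahead
            moves = s.legal_moves()
            if not moves:               # game over
                break
            s = s.play(random.choice(moves))
        total += s.score()
    return total / n_rollouts

def choose_move(state):
    """Pick the move whose random playouts score best on average."""
    return max(state.legal_moves(), key=lambda m: rollout_value(state, m))
```

The point of the sketch is only that a handful of bounded random playouts per candidate move replaces an exhaustive search of the game tree, which is what makes a game the size of Go tractable at all.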

Q: Do you think AlphaGo will win the series of five matches?

Whether it wins or loses now doesn't matter; it will ultimately win. If it loses, the DeepMind team will bring in new techniques to make it better. So whether it's next week or next year, it will eventually beat the best humans.

Q: Have you placed bets on the competition?

AlphaGo beat the European Go champ resoundingly, but the world champion is much stronger. If the match against the European champ had been close, my money would be on the world champion. But it wasn't.

I think the most interesting result would be if the matches are very close. It would force people to ask what the program is doing well and what is missing. I think we would make the most progress that way. If it's a lopsided victory, people might think it's all over and there's no need to keep refining it.

Q: Besides playing games, what are some real-world applications of this kind of artificial intelligence software?

Really any kind of problem that involves a very big search space where a pattern-based approach might work. It could help in medical diagnostics. It could help with scheduling and logistics. It might also have applications in image, speech or face recognition.

Learn more about Risto Miikkulainen's research:

Video: Artificial Evolution, The Hook

Computer Scientists Find Mass Extinctions Can Accelerate Evolution, CNS News

Artificially Intelligent Game Bots Pass the Turing Test on Turing's Centenary, CNS News
