For the last few decades, computer scientists have devoted substantial resources to improving the performance of artificial intelligence (AI) on classic games, including chess, checkers, backgammon, and Scrabble. Expert-level play by AI has been achieved largely via algorithms that search the tree of possible moves and countermoves and choose the line of play that optimizes the final score. This brute-force strategy is computationally expensive, however, and while it is feasible for games with a relatively limited number of possible moves, it is difficult to extend to increasingly complex games. In the ancient Chinese game Go, the number of possible board configurations grows explosively as the game progresses, making it vastly more complex than chess, in which the number of possible configurations shrinks as pieces leave the board. Thus, although the IBM supercomputer Deep Blue defeated reigning chess world champion Garry Kasparov 10 years ago, human Go experts had consistently outperformed AI. Until now.
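The brute-force strategy described above is, at its core, exhaustive minimax search: explore every line of play to its end, then back scores up the tree, with one player maximizing and the other minimizing. The following is a minimal sketch on a hypothetical hand-built game tree (the names `TREE` and `LEAF_SCORES` are invented for illustration and do not come from any real chess or Go engine); its cost grows as roughly b^d for branching factor b and depth d, which is why the approach breaks down for Go.

```python
def minimax(state, maximizing, children, score):
    """Return the best score reachable from `state` by searching every move sequence."""
    moves = children(state)
    if not moves:                        # terminal position: just score it
        return score(state)
    results = [minimax(s, not maximizing, children, score) for s in moves]
    return max(results) if maximizing else min(results)

# Hypothetical two-ply game tree: each state maps to its successors;
# leaves carry the final score from the maximizing player's viewpoint.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("root", True, lambda s: TREE.get(s, []), LEAF_SCORES.get)
print(best)  # 3: the maximizer plays "a", since "b" lets the minimizer reach 2
```

Because every leaf is visited, the search is exact but exponential; practical chess engines prune this tree aggressively, and even that is not enough for the far larger trees of Go.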