Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett's Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman's equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work for artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research.