Modern chess engines significantly outperform human players and are essential for evaluating positions and move quality. These engines assign a numerical evaluation $E$ to each position, indicating an advantage for either white or black, but similar evaluations can mask very different levels of move complexity. While some move sequences are straightforward, others demand near-perfect play, limiting the practical value of these evaluations for most players. To quantify this problem, we use entropy to measure the complexity of the principal variation (the sequence of best moves). Variations consisting of forced moves have low entropy, while those with multiple viable alternatives have high entropy. Our results show that, except for experts, most human players struggle with high-entropy variations, especially when $|E| < 100$ centipawns, which accounts for about $2/3$ of positions. This underscores the need for AI-generated evaluations to also convey the complexity of the underlying move sequences, since those sequences often exceed typical human cognitive capabilities, which reduces the practical utility of the evaluations.
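As a point of reference (the abstract does not spell out the exact construction), one natural formalization is the Shannon entropy of the move distribution at a position along the principal variation, where the probabilities $p_i$ over the $k$ candidate moves would have to be derived from the engine's evaluations:
$$
H = -\sum_{i=1}^{k} p_i \log_2 p_i ,
$$
so that a forced move ($p_1 \approx 1$) gives $H \approx 0$, while several equally viable alternatives ($p_i \approx 1/k$) give the maximum $H = \log_2 k$. How the $p_i$ are obtained and how per-position entropies are aggregated over the variation are assumptions of this illustration, not details stated in the abstract.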