March 14, 2026
Figuring out why AIs get flummoxed by some games
When winning depends on intuiting a mathematical function, AIs come up short.

TL;DR
- Google DeepMind's game-playing AIs, successful at games like chess and Go, struggle with impartial games such as Nim.
- In impartial games, both players share the same pieces and available moves, and any position can be represented as a Nim-style configuration of piles.
- Winning at Nim relies on evaluating a parity function, the bitwise XOR of the pile sizes, to determine the optimal move, which is a form of symbolic reasoning.
- The training method behind AlphaGo, repeated self-play combined with learned pattern association, proves ineffective for learning Nim's parity function.
- On larger Nim boards, the AI's play stops improving, and its evaluations of candidate moves become indistinguishable from random guessing.
- This failure highlights that AI excels at learning through association but falters when symbolic reasoning is required.
- Similar issues, though less pronounced, may affect chess and Go AIs, as indicated by 'wrong' moves identified by evaluators.
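The "parity function" the summary refers to is the standard Nim-sum: XOR the pile sizes, and the player to move wins exactly when the result is nonzero. A minimal sketch of that strategy (the function names `nim_sum` and `winning_move` are illustrative, not from the article):

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """Bitwise XOR of all pile sizes; nonzero means the player to move can force a win."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) for a move that leaves nim-sum 0, or None if losing."""
    s = nim_sum(piles)
    if s == 0:
        return None  # every legal move hands the opponent a winning position
    for i, p in enumerate(piles):
        target = p ^ s
        if target < p:  # legal only if it removes at least one object
            return i, target
    return None

# Example: the classic 1-2-3-4-5 "pyramid" board.
piles = [1, 2, 3, 4, 5]
print(nim_sum(piles))       # 1^2^3^4^5 = 1, so the first player can win
print(winning_move(piles))  # one winning option: reduce a pile so the XOR becomes 0
```

The table-lookup simplicity of this rule is the point: it is a single symbolic computation over the whole position, exactly the kind of global function that pattern-matching on board features fails to approximate as boards grow.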