This note clarifies some confusions (and perhaps raises more) around model-based reinforcement learning and its theoretical understanding in the context of deep RL. The main topics of discussion are (1) how to reconcile model-based RL's bad empirical reputation for error compounding with its superior theoretical properties, and (2) the limitations of empirically popular losses. For the latter, concrete counterexamples are constructed for the "MuZero loss" to show that it not only fails in stochastic environments, but also suffers exponential sample complexity in deterministic environments, even when the data provides sufficient coverage.
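As a rough reminder of what "MuZero loss" refers to (a sketch of the training objective from the original MuZero paper, not necessarily the exact abstraction analyzed in this note): the learned latent dynamics model is unrolled for $K$ steps from a state $s_t$ and trained only to match its predicted rewards, values, and policies against targets, with no next-state reconstruction term,

$$
\ell_t(\theta) \;=\; \sum_{k=0}^{K} \Big[\, \ell^{r}\!\big(u_{t+k},\, r_t^{k}\big) \;+\; \ell^{v}\!\big(z_{t+k},\, v_t^{k}\big) \;+\; \ell^{p}\!\big(\pi_{t+k},\, p_t^{k}\big) \Big] \;+\; c\,\|\theta\|^{2},
$$

where $r_t^{k}, v_t^{k}, p_t^{k}$ are the reward, value, and policy predicted after unrolling the model $k$ steps, and $u_{t+k}, z_{t+k}, \pi_{t+k}$ are the observed reward, the bootstrapped value target, and the search policy, respectively. Because the model is supervised only through value/reward predictions rather than through next-state prediction, it is this kind of loss whose failure modes the counterexamples target.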