We provide a new perspective on why reinforcement learning (RL) struggles with robustness and generalization. We show, by examples, that locally optimal policies may contain unstable control for some dynamic parameters, and that overfitting to such instabilities can degrade robustness and generalization. Contraction analysis of neural control reveals that there exist boundaries between stable and unstable control with respect to the input gradients of control networks. When these stability boundaries are ignored, a learning agent may label actions that destabilize the system under some dynamic parameters as high-value actions, as long as they improve the expected return. Because such instabilities affect only a small fraction of conditions, they may go unnoticed in empirical studies, posing a hidden risk for real-world applications. These instabilities manifest themselves through overfitting, leading to failures in robustness and generalization. We propose stability constraints and terminal constraints to address this issue, demonstrated with a proximal policy optimization (PPO) example.
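The sketch below is an illustrative, assumed implementation (not the authors' code) of how a stability constraint tied to the input gradients of the control network could be attached to a PPO-style objective. The names `stability_penalty`, `grad_bound`, and `stab_coef` are hypothetical, and the penalty form (a squared hinge on the input-gradient norm) is one plausible choice; the actual bound would come from the contraction analysis of the system dynamics, and terminal constraints are omitted here.

```python
# Minimal sketch: PPO clipped surrogate plus an assumed input-gradient
# stability penalty on the policy (control) network.
import torch
import torch.nn as nn


class GaussianPolicy(nn.Module):
    """Small MLP policy; outputs the mean action for a given state."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def stability_penalty(policy: GaussianPolicy,
                      states: torch.Tensor,
                      grad_bound: float = 1.0) -> torch.Tensor:
    """Penalize input-gradient norms of the control network above grad_bound.

    `grad_bound` is an assumed stand-in for the stability boundary suggested
    by contraction analysis; the true bound depends on the system dynamics.
    """
    states = states.clone().requires_grad_(True)
    actions = policy(states)
    # Gradient of the summed actions w.r.t. the input states (Jacobian proxy).
    grads = torch.autograd.grad(actions.sum(), states, create_graph=True)[0]
    grad_norm = grads.norm(dim=-1)
    # Squared hinge: only gradients beyond the assumed bound are penalized.
    return torch.relu(grad_norm - grad_bound).pow(2).mean()


def ppo_loss(policy: GaussianPolicy,
             states: torch.Tensor,
             actions: torch.Tensor,
             old_log_probs: torch.Tensor,
             advantages: torch.Tensor,
             clip_eps: float = 0.2,
             stab_coef: float = 0.1) -> torch.Tensor:
    """Clipped PPO surrogate augmented with the assumed stability penalty."""
    mean = policy(states)
    dist = torch.distributions.Normal(mean, policy.log_std.exp())
    log_probs = dist.log_prob(actions).sum(dim=-1)
    ratio = (log_probs - old_log_probs).exp()
    surrogate = torch.min(
        ratio * advantages,
        ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantages,
    )
    return -surrogate.mean() + stab_coef * stability_penalty(policy, states)
```

One design note on this sketch: penalizing only gradient norms that exceed the bound (rather than all gradients) keeps the policy expressive inside the stable region while discouraging the high-gain behavior associated with crossing the stability boundary.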