In many RL applications, once training ends, it is vital to detect any deterioration in the agent's performance as soon as possible. Furthermore, this often has to be done without modifying the policy and under minimal assumptions regarding the environment. In this paper, we address this problem by focusing directly on the rewards and testing for degradation. We consider an episodic framework, where the rewards within each episode are not independent, nor identically distributed, nor Markov. We present this problem as a multivariate mean-shift detection problem with possibly partial observations. We define the mean-shift in a way corresponding to deterioration of a temporal signal (such as the rewards), and derive a test for this problem with optimal statistical power. Empirically, on deteriorated rewards in control problems (generated using various environment modifications), the test is demonstrated to be more powerful than standard tests, often by orders of magnitude. We also suggest a novel Bootstrap mechanism for False Alarm Rate control (BFAR), applicable to episodic (non-i.i.d.) signals and allowing our test to run sequentially in an online manner. Our method does not rely on a learned model of the environment, is entirely external to the agent, and in fact can be applied to detect changes or drifts in any episodic signal.
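To make the setup concrete, the following is a minimal sketch (not the authors' exact formulation) of the two ingredients described above: a weighted mean-shift statistic computed over episodic rewards, and a bootstrap threshold obtained by resampling whole reference episodes so that within-episode dependence is preserved. All function names, the inverse-covariance weighting, and the parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: episodes are arrays of shape (T,), i.e. the reward
# at each of T within-episode time steps. Reference statistics come from
# episodes collected with the validated policy before deployment.

def degradation_statistic(episodes, ref_mean, ref_cov):
    """Weighted mean-shift statistic: large values suggest the per-step
    mean rewards have dropped below the reference (an assumed form)."""
    diff = ref_mean - episodes.mean(axis=0)           # positive where rewards fell
    w = np.linalg.solve(ref_cov, np.ones_like(diff))  # down-weight noisy, correlated steps
    return float(w @ diff)

def bootstrap_threshold(ref_episodes, ref_mean, ref_cov, n_new,
                        alpha=0.05, n_boot=2000, seed=None):
    """Resample entire reference episodes (keeping the non-i.i.d. structure
    within each episode) to estimate the (1 - alpha) quantile of the
    statistic under 'no change'."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(ref_episodes), size=n_new)
        stats.append(degradation_statistic(ref_episodes[idx], ref_mean, ref_cov))
    return float(np.quantile(stats, 1 - alpha))

# Usage with synthetic data: 500 reference episodes of length T=20,
# and 50 new episodes whose rewards have drifted downward.
ref = np.random.default_rng(0).normal(1.0, 0.5, size=(500, 20))
new = np.random.default_rng(1).normal(0.8, 0.5, size=(50, 20))
mu0 = ref.mean(axis=0)
cov0 = np.cov(ref, rowvar=False) + 1e-6 * np.eye(ref.shape[1])  # regularized
thr = bootstrap_threshold(ref, mu0, cov0, n_new=len(new), seed=2)
print("degradation detected:", degradation_statistic(new, mu0, cov0) > thr)
```

Resampling whole episodes, rather than individual rewards, is what lets the threshold remain valid when rewards within an episode are dependent; the test itself never touches the policy or the environment model.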