Using a model of the environment and a value function, an agent can construct many estimates of a state's value by unrolling the model to different depths and bootstrapping with its value function. Our key insight is that one can treat this set of value estimates as a type of ensemble, which we call an \emph{implicit value ensemble} (IVE). Consequently, the discrepancy between these estimates can be used as a proxy for the agent's epistemic uncertainty; we term this signal \emph{model-value inconsistency}, or \emph{self-inconsistency} for short. Unlike prior work, which estimates uncertainty by training an ensemble of many models and/or value functions, this approach requires only the single model and value function that are already being learned in most model-based reinforcement learning algorithms. We provide empirical evidence, in both tabular settings and function-approximation settings from pixels, that self-inconsistency is useful (i) as a signal for exploration, (ii) for acting safely under distribution shifts, and (iii) for robustifying value-based planning with a learned model.
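As a minimal sketch of the construction (the notation and the particular disagreement measure below are illustrative choices, not fixed by the text above): given a learned model $\hat{m} = (\hat{p}, \hat{r})$, a learned value function $\hat{v}$, and a policy $\pi$, the $k$-step estimate of a state's value unrolls the model for $k$ steps and bootstraps with $\hat{v}$,
\begin{equation*}
\hat{v}^{(k)}(s) \;=\; \mathbb{E}_{\hat{m},\,\pi}\!\left[\,\sum_{t=0}^{k-1} \gamma^{t}\,\hat{r}(s_t, a_t) \;+\; \gamma^{k}\,\hat{v}(s_k) \;\middle|\; s_0 = s\right], \qquad k = 0, 1, \dots, K,
\end{equation*}
with $\hat{v}^{(0)}(s) = \hat{v}(s)$ recovering the plain bootstrap. The implicit value ensemble at $s$ is then the set $\{\hat{v}^{(0)}(s), \dots, \hat{v}^{(K)}(s)\}$, and self-inconsistency can be summarised, for example, by its standard deviation,
\begin{equation*}
u(s) \;=\; \operatorname{std}\!\left(\hat{v}^{(0)}(s), \dots, \hat{v}^{(K)}(s)\right),
\end{equation*}
which vanishes when $\hat{v}$ is a fixed point of the Bellman operator induced by $\hat{m}$ and $\pi$, and grows in states where the model and value function disagree.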