Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior -- i.e. trajectories of observations and actions made by an expert maximizing some unknown reward function -- is essential for introspecting and auditing policies in different institutions. In this paper, we propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes: Given the current history of observations, what would happen if we took a particular action? To learn these cost-benefit tradeoffs associated with the expert's actions, we integrate counterfactual reasoning into batch inverse reinforcement learning. This offers a principled way of defining reward functions and explaining expert behavior, and also satisfies the constraints of real-world decision-making -- where active experimentation is often impossible (e.g. in healthcare). Additionally, by estimating the effects of different actions, counterfactuals readily tackle the off-policy nature of policy evaluation in the batch setting, and can naturally accommodate settings where the expert policies depend on histories of observations rather than just current states. Through illustrative experiments in both real and simulated medical environments, we highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
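The core idea above can be illustrated with a minimal toy sketch: fit a counterfactual outcome model from a batch of demonstrations ("what would happen if we took action a?"), define the reward as a weighted preference over those predicted outcomes, and recover the weight by maximizing the likelihood of the expert's observed choices. This is not the paper's method, just a hedged illustration under strong simplifying assumptions: a 1-D state, two actions, a linear outcome model, and a Boltzmann-rational expert whose true reward weight (`w_true`) we invent for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1-D "severity" state, two actions (0 = no treatment, 1 = treat).
# Hypothetical true dynamics: treatment lowers severity by 0.8 on average.
def step(s, a):
    return s - 0.8 * a + 0.1 * rng.standard_normal()

# 1) Generate a batch of expert demonstrations. The expert is
#    Boltzmann-rational with an invented reward weight w_true = -1
#    (it prefers low severity).
w_true = -1.0
def expert_action(s):
    q = np.array([w_true * s, w_true * (s - 0.8)])  # reward of each outcome
    p = np.exp(q - q.max()); p /= p.sum()
    return rng.choice(2, p=p)

batch = []
for _ in range(500):
    s = rng.uniform(0, 3)
    a = expert_action(s)
    batch.append((s, a, step(s, a)))

# 2) Fit a counterfactual outcome model from the batch alone (no active
#    experimentation): linear regression of next state on (state, action).
X = np.array([[s, a, 1.0] for s, a, _ in batch])
y = np.array([s2 for _, _, s2 in batch])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_outcome(s, a):
    # "What if" estimate: expected next state under action a.
    return coef[0] * s + coef[1] * a + coef[2]

# 3) Batch IRL step: choose the reward weight w that maximizes the
#    likelihood of the expert's actions under a softmax over the
#    counterfactual-outcome rewards.
def nll(w):
    total = 0.0
    for s, a, _ in batch:
        q = np.array([w * predict_outcome(s, 0), w * predict_outcome(s, 1)])
        q -= q.max()
        total -= q[a] - np.log(np.exp(q).sum())
    return total

ws = np.linspace(-3, 3, 121)
w_hat = ws[np.argmin([nll(w) for w in ws])]
print(f"recovered reward weight: {w_hat:.2f} (true: {w_true})")
```

The recovered weight lands near the expert's true preference, showing how the off-policy problem is handled: action values for *untaken* actions come from the fitted outcome model rather than from on-policy rollouts. The real method operates on histories of observations rather than a scalar state, but the structure of the estimator is the same.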