This paper introduces a new, principled approach to offline policy optimisation in contextual bandits. For two well-established risk estimators, we propose novel generalisation bounds that make it possible to confidently improve upon the logging policy offline. Unlike previous work, our approach requires no hyperparameter tuning on held-out sets and enables deployment without prior A/B testing. This is achieved by analysing the problem through the PAC-Bayesian lens: rather than relying on a traditional policy parametrisation (e.g. softmax), we interpret policies as mixtures of deterministic strategies. Through extensive experiments, we demonstrate the tightness of our bounds and the effectiveness of our approach in practical scenarios.