Off-policy evaluation (OPE) attempts to predict the performance of counterfactual policies using log data from a different policy. We extend its applicability by developing an OPE method for a class of both full-support and deficient-support logging policies in contextual-bandit settings. This class includes deterministic bandit algorithms (such as Upper Confidence Bound) as well as deterministic decision-making based on supervised and unsupervised learning. We prove that our method's prediction converges in probability to the true performance of a counterfactual policy as the sample size increases. We validate our method with experiments on partly and entirely deterministic logging policies. Finally, we apply it to evaluate coupon targeting policies employed by a major online platform and show how to improve upon the existing policy.
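To make the OPE setting concrete, below is a minimal sketch of the standard inverse-propensity-weighted (IPW) estimator for a full-support logging policy, the baseline that the paper's method extends to deficient-support (e.g., deterministic) logging cases. The synthetic data, variable names (pi_b, pi_e, etc.), and reward model are all illustrative assumptions, not the paper's method or data.

```python
# Sketch of standard IPW off-policy evaluation under a full-support
# logging policy. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, n_actions = 10_000, 3

# Logged data: contexts, actions drawn from a uniform (hence
# full-support) logging policy pi_b, and observed rewards.
contexts = rng.normal(size=(n, 2))
pi_b = np.full((n, n_actions), 1.0 / n_actions)   # pi_b(a|x) = 1/3
actions = rng.integers(0, n_actions, size=n)      # sampled from pi_b

# Reward is 1 when the logged action matches a context-dependent
# "best" action (a toy reward model for illustration).
best_action = (contexts[:, 0] > 0).astype(int)
rewards = (actions == best_action).astype(float)

# Counterfactual (evaluation) policy pi_e: deterministic in context,
# always choosing best_action; its true value is therefore 1.0.
pi_e = np.zeros((n, n_actions))
pi_e[np.arange(n), best_action] = 1.0

# IPW value estimate: mean of pi_e(a|x) / pi_b(a|x) * r over the log.
# This requires pi_b(a|x) > 0 wherever pi_e(a|x) > 0; a deterministic
# logging policy violates that support condition, which is exactly
# the gap the paper's method addresses.
weights = pi_e[np.arange(n), actions] / pi_b[np.arange(n), actions]
v_hat = np.mean(weights * rewards)
print(f"IPW value estimate: {v_hat:.3f}")  # close to 1.0
```

Running this prints an estimate near the true value of 1.0; replacing pi_b with a deterministic policy would make the importance weights undefined off its support, illustrating why deficient-support logging requires the extension developed in the paper.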