We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions, often termed slates. The problem is common in recommender systems and user-interface optimization, and it is particularly challenging because of the combinatorially large action space. Swaminathan et al. (2017) proposed the pseudoinverse (PI) estimator under the assumption that the conditional mean rewards are additive in the slate's actions. Using control variates, we consider a large class of unbiased estimators that includes as special cases the PI estimator and (asymptotically) its self-normalized variant. By optimizing over this class, we obtain new estimators with risk improvement guarantees over both the PI and the self-normalized PI estimators. Experiments with real-world recommender data as well as synthetic data validate these improvements in practice.
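For intuition, the following is a minimal, simplified sketch of the pseudoinverse (PI) estimator referenced above, based on Swaminathan et al. (2017). The function names are illustrative, contexts are ignored, and the matrix Gamma is estimated from the logged slates rather than computed from the known logging policy; these are simplifying assumptions, not the authors' method.

```python
import numpy as np

def slate_indicator(slate, num_slots, num_actions):
    """One-hot indicator vector of length K*A for a slate (one action per slot)."""
    v = np.zeros(num_slots * num_actions)
    for k, a in enumerate(slate):
        v[k * num_actions + a] = 1.0
    return v

def pi_estimate(logged, target_marginals, num_slots, num_actions):
    """Simplified pseudoinverse (PI) off-policy value estimate.

    logged: list of (slate, reward) pairs collected under the logging policy mu.
    target_marginals: the expected slate-indicator vector under the target
        policy pi (length K*A), assumed here to be constant across contexts.
    """
    indicators = np.array([slate_indicator(s, num_slots, num_actions)
                           for s, _ in logged])
    rewards = np.array([r for _, r in logged])
    # Gamma = E_mu[1_s 1_s^T]; here estimated empirically from the logged slates
    # (the original estimator computes it from the known logging policy).
    gamma = indicators.T @ indicators / len(logged)
    gamma_pinv = np.linalg.pinv(gamma)
    # PI estimate: average over samples of r_i * theta_pi^T Gamma^+ 1_{s_i}
    weights = indicators @ gamma_pinv @ target_marginals
    return float(np.mean(rewards * weights))
```

Under the additive-reward assumption, this weighting scheme avoids the exponentially small importance weights that a full slate-level propensity ratio would incur.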