We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions, often termed slates. The problem is common to recommender systems and user-interface optimization, and it is particularly challenging because of the combinatorially sized action space. Swaminathan et al. (2017) have proposed the pseudoinverse (PI) estimator under the assumption that the conditional mean rewards are additive in actions. Using control variates, we consider a large class of unbiased estimators that includes as special cases the PI estimator and (asymptotically) its self-normalized variant. By optimizing over this class, we obtain new estimators with risk improvement guarantees over both the PI and self-normalized PI estimators. Experiments with real-world recommender data as well as synthetic data validate these improvements in practice.
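For concreteness, a hedged sketch of the pseudoinverse (PI) estimator referenced above, written in notation commonly used for Swaminathan et al. (2017); the symbols $q_\pi$, $\Gamma_{\mu,x}$, and the slot-level indicator $\mathbb{1}_s$ are assumptions of this sketch and are not defined in the abstract itself. Given logged triples $(x_i, s_i, r_i)$, $i = 1, \dots, n$, collected under a logging policy $\mu$, and a target policy $\pi$ to be evaluated,
$$
\hat{V}_{\mathrm{PI}} \;=\; \frac{1}{n} \sum_{i=1}^{n} r_i \, q_\pi(x_i)^\top \Gamma_{\mu, x_i}^{\dagger} \, \mathbb{1}_{s_i},
\qquad
q_\pi(x) = \mathbb{E}_{s \sim \pi(\cdot \mid x)}\!\left[\mathbb{1}_s\right],
\qquad
\Gamma_{\mu, x} = \mathbb{E}_{s \sim \mu(\cdot \mid x)}\!\left[\mathbb{1}_s \mathbb{1}_s^\top\right],
$$
where $\mathbb{1}_s$ stacks the per-slot one-hot encodings of the actions in the slate $s$ and $\dagger$ denotes the Moore-Penrose pseudoinverse. Under the additivity assumption on the conditional mean rewards, this estimator is unbiased for the value of $\pi$. The control-variate class mentioned above can be read as augmenting such unbiased estimators with zero-mean correction terms whose coefficients are then optimized to reduce variance.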