We study the problem of model selection in batch policy optimization: given a fixed, partial-feedback dataset and $M$ model classes, learn a policy whose performance is competitive with the policy derived from the best model class. We formalize the problem in the contextual bandit setting with linear model classes by identifying three sources of error that any model selection algorithm should optimally trade off in order to be competitive: (1) approximation error, (2) statistical complexity, and (3) coverage. The first two sources are common in model selection for supervised learning, where optimally trading off these properties is well studied. In contrast, the third source is unique to batch policy optimization and is due to the dataset shift inherent to the setting. We first show that no batch policy optimization algorithm can achieve a guarantee addressing all three simultaneously, revealing a stark contrast between the difficulties of batch policy optimization and the positive results available in supervised learning. Despite this negative result, we show that relaxing any one of the three error sources enables the design of algorithms achieving near-oracle inequalities for the remaining two. We conclude with experiments demonstrating the efficacy of these algorithms.
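As an illustrative schematic only (the notation below is assumed for this sketch and is not taken from the paper), the ideal guarantee addressing all three error sources would take roughly the following oracle-inequality form, where for each model class $m \in [M]$, $\epsilon_m$ denotes its approximation error, $d_m$ its statistical complexity, $C_m$ a coverage factor of the batch data with respect to that class, and $n$ the dataset size:

\[
  V(\pi^\star) - V(\hat{\pi})
  \;\lesssim\;
  \min_{m \in [M]}
  \left\{
    \underbrace{\epsilon_m}_{\text{approximation}}
    \;+\;
    \underbrace{C_m \sqrt{\tfrac{d_m}{n}}}_{\text{complexity}\,\times\,\text{coverage}}
  \right\}.
\]

The paper's negative result can be read as saying that no algorithm can achieve a bound of this form for all three terms simultaneously, while relaxing any one term makes a near-oracle inequality over the remaining two attainable.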