This paper studies offline policy learning, which aims to utilize observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption: the propensities of exploring all actions for all individual characteristics must be bounded away from zero in the offline dataset; put differently, the performance of existing methods depends on the worst-case propensity in the offline dataset. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs), instead of point estimates, of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound on the suboptimality of our algorithm, which depends only on (i) the overlap for the optimal policy and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are bounded away from zero over time, while those for suboptimal actions are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein inequality to unbounded and non-i.i.d. data.
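To make the mechanism concrete, below is a minimal Python sketch of pessimistic policy selection over a finite policy class: each candidate policy's value is estimated by inverse propensity weighting, an empirical-Bernstein-style penalty is subtracted, and the policy with the largest lower confidence bound is returned. All names here (e.g., `lcb_policy_learning`) and the exact penalty constants are illustrative assumptions, not the paper's precise construction; note the penalty is driven by the inverse propensities actually hit by the candidate policy, not the worst-case propensity over all actions.

```python
import numpy as np

def lcb_policy_learning(X, A, R, propensities, policy_class, delta=0.05):
    """Select the policy maximizing an LCB of its IPW value estimate.

    X: (n, d) contexts; A: (n,) logged actions; R: (n,) observed rewards;
    propensities: (n,) behavior probabilities of the logged actions
    (assumed known, as in the paper's setting);
    policy_class: list of functions mapping a context array to actions.
    """
    n = len(R)
    best_policy, best_lcb = None, -np.inf
    for policy in policy_class:
        # IPW terms: unbiased for the policy value given known propensities.
        match = (policy(X) == A).astype(float)
        terms = match * R / propensities
        value_hat = terms.mean()
        # Empirical-Bernstein-style penalty (illustrative constants):
        # a variance term plus a range term controlled by the largest
        # inverse propensity among samples this policy actually matches.
        var_hat = terms.var(ddof=1)
        max_weight = np.max(match / propensities) if match.any() else 0.0
        penalty = (np.sqrt(2.0 * var_hat * np.log(1.0 / delta) / n)
                   + 3.0 * max_weight * np.log(1.0 / delta) / n)
        lcb = value_hat - penalty
        if lcb > best_lcb:
            best_policy, best_lcb = policy, lcb
    return best_policy, best_lcb
```

Because the penalty depends only on the weights a policy itself incurs, a policy that avoids rarely-explored actions pays a small penalty even when other actions have vanishing propensities, which is how the sketch reflects the paper's overlap-for-the-optimal-policy guarantee.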