We study sequential decision-making with known rewards and unknown constraints, motivated by settings in which the constraints represent expensive-to-evaluate human preferences, such as safe and comfortable driving behavior. We formalize the challenge of interactively learning about these constraints as a novel linear bandit problem, which we call constrained linear best-arm identification. To solve this problem, we propose the Adaptive Constraint Learning (ACOL) algorithm. We provide an instance-dependent lower bound for constrained linear best-arm identification and show that ACOL's sample complexity matches the lower bound in the worst case. In the average case, ACOL's sample complexity bound is still significantly tighter than the bounds of simpler approaches. In synthetic experiments, ACOL performs on par with an oracle solution and outperforms a range of baselines. As an application, we consider learning constraints to represent human preferences in a driving simulation. ACOL is significantly more sample-efficient than the alternatives for this application. Further, we find that learning preferences as constraints is more robust to changes in the driving scenario than encoding the preferences directly in the reward function.
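For concreteness, the following is a minimal sketch of the kind of optimization problem the abstract describes; the specific notation ($\mathcal{X}$, $\theta$, $\phi$, $\tau$, $\delta$) and the sub-Gaussian noise model are illustrative assumptions on our part, not the paper's own formulation.

```latex
% Hypothetical formalization of constrained linear best-arm identification
% (notation assumed for illustration, not taken from the paper).
% Arms x lie in a finite set X in R^d; the linear reward <theta, x> is known,
% while the constraint parameter phi is unknown and must be learned from
% noisy, expensive queries.
\[
  x^\star \in \arg\max_{x \in \mathcal{X}} \, \langle \theta, x \rangle
  \quad \text{subject to} \quad \langle \phi, x \rangle \le \tau ,
\]
\[
  y_t = \langle \phi, x_t \rangle + \eta_t ,
  \qquad \eta_t \ \text{sub-Gaussian noise on the $t$-th constraint query} .
\]
% The learner adaptively chooses measurement arms x_1, x_2, ... and stops once
% it can identify x^* with probability at least 1 - delta, using as few
% constraint queries as possible.
```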