Imitation learning (IL) is a general learning paradigm for tackling sequential decision-making problems. Interactive imitation learning, in which the learner can interactively query the expert for demonstrations, has been shown to achieve provably better sample efficiency guarantees than its offline counterpart or reinforcement learning. In this work, we study classification-based online imitation learning (abbrev. $\textbf{COIL}$) and the fundamental feasibility of designing oracle-efficient regret-minimization algorithms in this setting, with a focus on the general nonrealizable case. We make the following contributions: (1) we show that in the $\textbf{COIL}$ problem, no proper online learning algorithm can guarantee sublinear regret in general; (2) we propose $\textbf{Logger}$, an improper online learning algorithmic framework that reduces $\textbf{COIL}$ to online linear optimization by utilizing a new definition of mixed policy classes; (3) we design two oracle-efficient algorithms within the $\textbf{Logger}$ framework that enjoy different sample and interaction round complexity tradeoffs, and conduct finite-sample analyses to show their improvements over naive behavior cloning; (4) we show that, under standard complexity-theoretic assumptions, efficient dynamic regret minimization is infeasible in the $\textbf{Logger}$ framework. Our work puts classification-based online imitation learning, an important IL setup, on a firmer foundation.
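To make the reduction in contribution (2) concrete, the sketch below illustrates, under assumptions not stated in the abstract, how regret minimization over a mixture of policies can be cast as online linear optimization: a mixture over a finite base policy class is updated by an online linear optimizer (here, exponential weights) against per-round 0-1 classification losses on expert-labeled states. All quantities in the sketch (the finite base policy class, the uniformly sampled visited states, the Hedge learning rate) are hypothetical illustrations and this is not the $\textbf{Logger}$ algorithm itself.

```python
# Minimal sketch (not the paper's Logger framework): online linear optimization
# over a mixture of policies against per-round classification losses.
# Assumptions: a finite base policy class, 0-1 loss w.r.t. expert action labels
# on visited states, and exponential weights (Hedge) as the online optimizer.
import numpy as np

rng = np.random.default_rng(0)

n_policies, n_states, n_actions, T = 5, 20, 4, 50
# Hypothetical deterministic base policies: policy k maps each state to an action.
base_policies = rng.integers(n_actions, size=(n_policies, n_states))
expert_policy = rng.integers(n_actions, size=n_states)  # hypothetical expert labels

eta = np.sqrt(np.log(n_policies) / T)        # Hedge learning rate
weights = np.ones(n_policies) / n_policies   # mixture over base policies

total_loss = 0.0
for t in range(T):
    # States visited this round (in interactive IL these would come from rolling
    # out the current mixed policy; sampled uniformly here for brevity).
    visited = rng.integers(n_states, size=10)
    # Linear loss vector: each base policy's average 0-1 classification loss
    # against the expert's action labels on the visited states.
    loss_vec = (base_policies[:, visited] != expert_policy[visited]).mean(axis=1)
    total_loss += weights @ loss_vec          # loss incurred by the current mixture
    weights *= np.exp(-eta * loss_vec)        # exponential-weights update
    weights /= weights.sum()

best_fixed = (base_policies != expert_policy).mean(axis=1).min()
print(f"avg mixture loss: {total_loss / T:.3f}, best fixed policy loss: {best_fixed:.3f}")
```

Because the mixture's loss is linear in its weights, standard online-linear-optimization guarantees (e.g., Hedge's $O(\sqrt{T \log |\Pi|})$ regret for a finite class $\Pi$) apply to the mixed policy even when no single base policy fits the expert well; this is only meant to illustrate why moving to an improper, mixed policy class makes the reduction possible.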