Modern end-to-end learning systems can learn to explicitly infer control from perception. However, it is difficult to guarantee stability and robustness for these systems since they are often exposed to unstructured, high-dimensional, and complex observation spaces (e.g., autonomous driving from a stream of pixel inputs). We propose to leverage control Lyapunov functions (CLFs) to equip end-to-end vision-based policies with stability properties and introduce stability attention in CLFs (att-CLFs) to tackle environmental changes and improve learning flexibility. We also present an uncertainty propagation technique that is tightly integrated into att-CLFs. We demonstrate the effectiveness of att-CLFs via comparison with classical CLFs, model predictive control, and vanilla end-to-end learning in a photo-realistic simulator and on a real full-scale autonomous vehicle.
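As background, the following is the classical CLF definition assumed by this line of work; it is standard material and not drawn from the abstract itself. For control-affine dynamics $\dot{x} = f(x) + g(x)u$, a continuously differentiable, positive-definite function $V$ is a control Lyapunov function if for all $x \neq 0$
\[
\inf_{u \in U} \left[ L_f V(x) + L_g V(x)\,u \right] < 0,
\]
where $L_f V = \nabla V^{\top} f$ and $L_g V = \nabla V^{\top} g$ are Lie derivatives. A policy enforcing $\dot{V}(x, u) \le -\lambda V(x)$ for some $\lambda > 0$ yields $V(x(t)) \le V(x(0))\,e^{-\lambda t}$ and hence, under standard quadratic bounds on $V$, exponential stability of the equilibrium, which is the stability property the abstract refers to.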