Gradient-based approaches in reinforcement learning (RL) have achieved tremendous success in learning policies for autonomous vehicles. While the performance of these approaches warrants real-world adoption, such policies lack interpretability, limiting deployability in the safety-critical and legally regulated domain of autonomous driving (AD). AD requires interpretable and verifiable control policies that maintain high performance. We propose Interpretable Continuous Control Trees (ICCTs), a tree-based model that can be optimized via modern, gradient-based RL approaches to produce high-performing, interpretable policies. The key to our approach is a procedure that allows direct optimization in a sparse, decision-tree-like representation. We validate ICCTs against baselines across six domains, showing that ICCTs are capable of learning interpretable policy representations that achieve parity with or outperform baselines by up to 33% in AD scenarios, while achieving a 300x-600x reduction in the number of policy parameters relative to deep-learning baselines. Furthermore, we demonstrate the interpretability and utility of ICCTs through a 14-car physical robot demonstration.
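To make the core idea concrete, below is a minimal, illustrative sketch (not the paper's exact ICCT procedure) of a decision node that routes inputs with a crisp, interpretable threshold test at inference time while remaining differentiable for gradient-based RL training, via a straight-through estimator. All class, parameter, and variable names here are assumptions introduced for illustration.

```python
# Minimal sketch: a sparse, decision-tree-like node that is crisp in the
# forward pass (one feature, one threshold, hard 0/1 routing) but passes
# soft gradients in the backward pass (straight-through estimator).
import torch
import torch.nn as nn


class SparseDecisionNode(nn.Module):
    """Splits on a single learned feature against a learned threshold."""

    def __init__(self, num_features: int):
        super().__init__()
        self.feature_logits = nn.Parameter(torch.zeros(num_features))  # which feature to split on
        self.threshold = nn.Parameter(torch.zeros(1))                  # split threshold
        self.steepness = nn.Parameter(torch.ones(1))                   # soft-gate sharpness

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sparse feature selection: soft attention over features, hardened to one-hot,
        # with gradients flowing through the soft version (straight-through).
        soft_select = torch.softmax(self.feature_logits, dim=-1)
        hard_select = torch.zeros_like(soft_select)
        hard_select[soft_select.argmax()] = 1.0
        select = hard_select + (soft_select - soft_select.detach())

        feature_value = x @ select  # the single chosen feature, per sample
        soft_gate = torch.sigmoid(self.steepness * (feature_value - self.threshold))

        # Crisp 0/1 routing decision forward, soft gradient backward.
        hard_gate = (soft_gate > 0.5).float()
        return hard_gate + (soft_gate - soft_gate.detach())


# Usage: route a batch of observations; gradients still reach the node's parameters.
node = SparseDecisionNode(num_features=4)
obs = torch.randn(8, 4)
go_left = node(obs)        # crisp 0/1 decisions, shape (8,)
go_left.sum().backward()   # straight-through gradients update feature choice and threshold
```

A full tree would compose such nodes with leaf controllers; the sketch only shows how a crisp, human-readable split can nevertheless be trained end-to-end with gradient-based RL.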