Human-AI shared control allows humans to interact and collaborate with AI to accomplish control tasks in complex environments. Previous reinforcement learning (RL) methods adopt goal-conditioned designs to obtain human-controllable policies, at the cost of redesigning the reward function and the training paradigm. Inspired by the neuroscience approach of investigating the motor cortex in primates, we develop a simple yet effective frequency-based approach called \textit{Policy Dissection} to align the intermediate representation of a learned neural controller with the kinematic attributes of the agent's behavior. Without modifying the neural controller or retraining the model, the proposed approach can convert a given RL-trained policy into a human-interactive policy. We evaluate the proposed approach on RL tasks in autonomous driving and locomotion. The experiments show that the human-AI shared control enabled by Policy Dissection substantially improves performance and safety in unseen traffic scenes in the driving task. With a human in the loop, the locomotion robots also exhibit versatile, controllable motion skills even though they are trained only to move forward. Our results suggest a promising direction for implementing human-AI shared autonomy by interpreting the learned representations of autonomous agents. Demo video and code will be made available at https://metadriverse.github.io/policydissect.
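To make the frequency-based alignment concrete, below is a minimal sketch of one plausible reading of the idea: record hidden-unit activations during a rollout, compare the dominant frequency of each unit's activation trace with that of a kinematic attribute (e.g., yaw rate), and rank units by frequency agreement. The function names, the simple argmax-of-spectrum criterion, and the constant-stimulation scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def match_units_to_attribute(activations, attribute, dt=0.02):
    """Rank hidden units by how closely the dominant frequency of their
    activation trace matches that of a kinematic attribute.

    activations: (T, H) array of hidden-unit activations from one rollout
    attribute:   (T,) array of a kinematic signal (e.g., yaw rate)
    dt:          simulation timestep in seconds (assumed value)
    Returns unit indices sorted by frequency discrepancy, best match first.
    """
    freqs = np.fft.rfftfreq(len(attribute), d=dt)

    def dominant_freq(signal):
        # Remove the mean so the DC component does not dominate the spectrum.
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        return freqs[np.argmax(spectrum)]

    target = dominant_freq(attribute)
    diffs = [abs(dominant_freq(activations[:, h]) - target)
             for h in range(activations.shape[1])]
    return np.argsort(diffs)

def stimulate(hidden, unit, value):
    """Evoke a behavior at inference time by overriding the activation of a
    matched unit with a fixed value, leaving the rest of the policy intact."""
    hidden = hidden.copy()
    hidden[unit] = value
    return hidden
```

Under this reading, a human operator triggers a motion primitive (say, turning left) by clamping the matched unit's activation via `stimulate` while the frozen policy keeps producing low-level actions, which is consistent with the claim that no retraining or architectural change is required.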