A dynamic autonomy allocation framework automatically shifts how much control lies with the human versus the robotics autonomy, for example based on factors such as environmental safety or user preference. To investigate which factors should drive dynamic autonomy allocation, we perform a human subjects study to collect ground-truth data on shifts between levels of autonomy during shared-control robot operation. We analyze information streams from the human, from the interaction between the human and the robot, and from the environment, and train machine learning methods -- both classical and deep learning -- on these data. Our analysis of the human-robot team's information streams suggests that features capturing the interaction between the human and the robotics autonomy are the most informative in predicting when to shift autonomy levels; even adding data from the environment does little to improve this predictive power. The features learned by deep networks, in comparison to the hand-engineered features, prove variable in their ability to represent shift-relevant information. This work demonstrates the classification power of human-only and human-robot interaction information streams for the design of shared-control frameworks, and provides insight into the comparative utility of the various data streams and of the methods used to extract shift-relevant information from those data.