Mobile telepresence robots (MTRs) allow people to navigate and interact with a remote environment from a location other than their own. Thanks to recent advances in 360-degree vision, many MTRs are now equipped with omnidirectional visual perception. However, a person's visual field horizontally spans only about 120 degrees of the view captured by the robot. To bridge this observability gap toward human-MTR shared autonomy, we have developed a framework, called GHAL360, that enables the MTR to learn, via reinforcement learning, a goal-oriented policy for guiding human attention using visual indicators. Three telepresence environments were constructed using datasets extracted from Matterport3D and collected from a real robot, respectively. Experimental results show that GHAL360 outperformed baselines from the literature in the efficiency of a human-MTR team completing target search tasks.