Collaborative multi-robot perception provides multiple views of an environment, enabling a team of robots to collaboratively understand a scene even when individual robots have poor viewpoints or their views are occluded by obstacles. These observations must be intelligently fused for accurate recognition, and the most relevant observations must be selected so that robots whose views are not needed can move on to observe other targets. This problem has not yet been well studied in the literature. In this paper, we propose a novel approach to collaborative multi-robot perception that integrates view selection, feature selection, and object recognition into a unified regularized optimization formulation, which uses sparsity-inducing norms to identify the robots with the most representative views and the modalities with the most discriminative features. Because the introduced non-smooth norms make the formulation difficult to solve, we develop a new iterative optimization algorithm that is guaranteed to converge to the optimal solution. We evaluate our approach through case studies in simulation and on a physical multi-robot system. Experimental results demonstrate that our approach enables effective collaborative perception through accurate object recognition and effective view and feature selection.
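To make the approach concrete, the following is a minimal sketch of the kind of regularized objective the abstract describes, assuming a least-squares recognition loss with group-sparsity terms over views and feature dimensions; the symbols $\mathbf{X}_v$ (features observed from view $v$), $\mathbf{W}_v$ (the corresponding weight block of $\mathbf{W}$), $\mathbf{Y}$ (object labels), and the trade-off parameters $\lambda_1, \lambda_2$ are illustrative placeholders rather than the paper's exact notation:
\[
\min_{\mathbf{W}} \; \Big\| \sum_{v=1}^{V} \mathbf{X}_v \mathbf{W}_v - \mathbf{Y} \Big\|_F^2 \;+\; \lambda_1 \sum_{v=1}^{V} \big\| \mathbf{W}_v \big\|_F \;+\; \lambda_2 \, \big\| \mathbf{W} \big\|_{2,1}
\]
Under these assumptions, the group norm $\sum_v \|\mathbf{W}_v\|_F$ drives the weight blocks of unneeded views toward zero (view selection), while the $\ell_{2,1}$ norm zeroes out rows of $\mathbf{W}$ associated with uninformative feature modalities (feature selection); because both regularizers are non-smooth, objectives of this form are commonly minimized with an iterative reweighted scheme of the kind the abstract refers to.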