Current technological advances open up new opportunities for elevating human-machine interaction to a new level of human-centered cooperation. In this context, a key issue is the semantic understanding of the environment, which enables mobile robots to perform more complex interactions and communicate more easily with humans. Prerequisites are the vision-based registration of semantic objects and of humans, where the latter are further analyzed as potential interaction partners. Despite significant research achievements, the reliable and fast registration of semantic information remains a challenging task for mobile robots in real-world scenarios. In this paper, we present a vision-based system for mobile assistive robots that enables semantic-aware environment perception without additional a priori knowledge. We deploy our system on a mobile humanoid robot, which allows us to test our methods in real-world applications.