Semantic scene understanding is essential for mobile agents acting in various environments. Although semantic segmentation already provides rich per-pixel information, details about individual objects as well as the general scene are missing but are required for many real-world applications. However, solving multiple tasks separately is expensive and cannot be accomplished in real time given the limited computing and battery capabilities of a mobile platform. In this paper, we propose an efficient multi-task approach for RGB-D scene analysis~(EMSANet) that simultaneously performs semantic and instance segmentation~(panoptic segmentation), instance orientation estimation, and scene classification. We show that all tasks can be accomplished using a single neural network in real time on a mobile platform without diminishing performance; on the contrary, the individual tasks are able to benefit from each other. In order to evaluate our multi-task approach, we extend the annotations of the common RGB-D indoor datasets NYUv2 and SUNRGB-D with instance segmentation and orientation estimation labels. To the best of our knowledge, we are the first to provide results in such a comprehensive multi-task setting for indoor scene analysis on NYUv2 and SUNRGB-D.
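To make the multi-task setup concrete, the following is a minimal sketch of a shared-encoder network with task-specific heads, in the spirit of the approach described above. All module names, the early RGB-D fusion, and the output parameterizations (center/offset maps for instances, sin/cos for orientation) are illustrative assumptions, not the authors' actual EMSANet implementation.

```python
# Minimal sketch of a multi-task RGB-D network: one shared encoder, one head
# per task. Illustrative only; not the authors' EMSANet architecture or API.
import torch
import torch.nn as nn

class MultiTaskRGBDNet(nn.Module):
    def __init__(self, num_semantic_classes=40, num_scene_classes=10):
        super().__init__()
        # Shared encoder over early-fused RGB-D input (a toy conv stack here).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific heads operating on the shared features.
        self.semantic_head = nn.Conv2d(128, num_semantic_classes, 1)  # per-pixel classes
        self.instance_center_head = nn.Conv2d(128, 1, 1)              # instance-center heatmap
        self.instance_offset_head = nn.Conv2d(128, 2, 1)              # pixel-to-center offsets
        self.orientation_head = nn.Conv2d(128, 2, 1)                  # (sin, cos) of orientation
        self.scene_head = nn.Linear(128, num_scene_classes)           # global scene label

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)      # early RGB-D fusion (one of several options)
        feats = self.encoder(x)
        pooled = feats.mean(dim=(2, 3))         # global pooling for scene classification
        return {
            "semantic": self.semantic_head(feats),
            "centers": self.instance_center_head(feats),
            "offsets": self.instance_offset_head(feats),
            "orientation": self.orientation_head(feats),
            "scene": self.scene_head(pooled),
        }

# Usage: a single forward pass yields the outputs for all tasks at once.
net = MultiTaskRGBDNet()
out = net(torch.randn(1, 3, 480, 640), torch.randn(1, 1, 480, 640))
```

Semantic and instance outputs can then be merged into a panoptic segmentation in post-processing; sharing the encoder is what keeps the joint model cheap enough for real-time use on a mobile platform.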