We present a system for 3D semantic scene perception consisting of a network of distributed smart edge sensors. The sensor nodes are based on an embedded CNN inference accelerator and RGB-D and thermal cameras. Efficient vision CNN models for object detection, semantic segmentation, and human pose estimation run on-device in real time. 2D human keypoint estimates, augmented with depth from the RGB-D cameras, as well as semantically annotated point clouds are streamed from the sensors to a central backend, where multiple viewpoints are fused into an allocentric 3D semantic scene model. As the image interpretation is computed locally, only semantic information is sent over the network. The raw images remain on the sensor boards, significantly reducing the required bandwidth and mitigating privacy risks for the observed persons. We evaluate the proposed system in challenging real-world multi-person scenes in our lab. The proposed perception system provides a complete scene view containing semantically annotated 3D geometry and estimates 3D poses of multiple persons in real time.
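The fusion step described above lifts each 2D keypoint into the allocentric frame using its depth estimate. A minimal sketch of this back-projection, assuming a pinhole camera model with hypothetical intrinsics `K` and a camera-to-world transform `T_world_cam` (the paper's actual calibration and fusion pipeline is more involved):

```python
import numpy as np

def keypoint_to_world(u, v, depth, K, T_world_cam):
    """Back-project a 2D keypoint (u, v) with metric depth into the world frame.

    K: 3x3 pinhole intrinsics; T_world_cam: 4x4 camera-to-world extrinsics.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pinhole back-projection into the camera frame (homogeneous coordinates)
    p_cam = np.array([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth,
                      1.0])
    # Transform into the shared allocentric (world) frame
    return (T_world_cam @ p_cam)[:3]

# Illustrative values only: principal point at the image center,
# identity extrinsics (camera frame == world frame)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
p = keypoint_to_world(320.0, 240.0, 2.0, K, T)
# a keypoint at the principal point with depth 2 m maps to (0, 0, 2)
```

With per-sensor extrinsics calibrated to a common frame, keypoints from multiple viewpoints land in the same coordinate system and can be fused into 3D skeletons at the backend.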