Estimating scene depth to avoid collisions with moving pedestrians is a crucial and fundamental problem in robotics. This paper proposes a novel, low-complexity network architecture for fast and accurate human depth estimation and segmentation in indoor environments, aimed at applications on resource-constrained platforms (including battery-powered aerial, micro-aerial, and ground vehicles) in which a monocular camera is the primary perception module. Following an encoder-decoder structure, the proposed framework consists of two branches: one for depth prediction and one for semantic segmentation. Moreover, the network structure is optimized to improve forward inference speed. Extensive experiments on three self-generated datasets demonstrate that our pipeline runs in real time, achieving higher frame rates than contemporary state-of-the-art frameworks (114.6 frames per second on an NVIDIA Jetson Nano GPU with TensorRT) while maintaining comparable accuracy.
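To make the two-branch encoder-decoder idea concrete, the following is a minimal PyTorch sketch of a shared encoder feeding separate depth-regression and human-segmentation decoder heads. The layer counts, channel widths, and class names here are illustrative assumptions, not the exact architecture described in the paper.

```python
# Illustrative sketch only: a shared encoder with two lightweight decoder
# branches (depth regression and human segmentation). All layer sizes and
# names are assumptions for demonstration, not the paper's exact network.
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: stride-2 convolutions progressively downsample the frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Depth branch: upsamples back to input resolution, one-channel depth map.
        self.depth_decoder = self._make_decoder(out_channels=1)
        # Segmentation branch: same spatial shape, one-channel human-mask logits.
        self.seg_decoder = self._make_decoder(out_channels=1)

    @staticmethod
    def _make_decoder(out_channels):
        return nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        features = self.encoder(x)
        depth = self.depth_decoder(features)      # per-pixel depth prediction
        seg_logits = self.seg_decoder(features)   # per-pixel segmentation logits
        return depth, seg_logits

# Example forward pass on a single 224x224 RGB frame.
net = DualBranchNet().eval()
with torch.no_grad():
    depth, seg = net(torch.randn(1, 3, 224, 224))
print(depth.shape, seg.shape)  # both torch.Size([1, 1, 224, 224])
```

Sharing the encoder between the two heads is what keeps the parameter count and per-frame compute low enough for an embedded GPU; in practice such a model would then be exported (e.g., via ONNX) and optimized with TensorRT for deployment on the Jetson Nano, as the reported 114.6 FPS figure suggests.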