This paper aims to develop a faster and more accurate solution to the amodal 3D object detection problem for indoor scenes. It is achieved through a novel neural network that takes a pair of RGB-D images as input and delivers oriented 3D bounding boxes as output. The network, named 3D-SSD, is composed of two parts: hierarchical feature fusion and multi-layer prediction. The hierarchical feature fusion combines appearance and geometric features from RGB-D images, while the multi-layer prediction utilizes multi-scale features for object detection. As a result, the network can exploit 2.5D representations in a synergetic way to improve accuracy and efficiency. The issue of varying object sizes is addressed by attaching a set of 3D anchor boxes of different sizes to every location of the prediction layers. At the final stage, category scores are generated for the 3D anchor boxes, whose positions, sizes and orientations are adjusted accordingly, and the final detections are obtained using non-maximum suppression. In the training phase, positive samples are identified with the aid of 2D ground truth to avoid noisy depth estimates from raw data, which leads to a better-converged model. Experiments on the challenging SUN RGB-D dataset show that our algorithm outperforms the state-of-the-art Deep Sliding Shape by 10.2% mAP while being 88× faster. Further experiments suggest that our approach achieves comparable accuracy and is 386× faster than the state-of-the-art method on the NYUv2 dataset, even with a smaller input image size.
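The final stage described above scores the adjusted 3D anchor boxes and filters them with non-maximum suppression. The following is a minimal sketch of greedy NMS over axis-aligned 3D boxes; it is a simplification (the paper's boxes are oriented, and the box encoding, function names, and IoU threshold here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes.
    A box is (cx, cy, cz, w, h, d): center plus full extents.
    Note: the paper uses oriented boxes; this sketch ignores rotation."""
    a_min, a_max = a[:3] - a[3:] / 2, a[:3] + a[3:] / 2
    b_min, b_max = b[:3] - b[3:] / 2, b[:3] + b[3:] / 2
    # Overlap along each axis, clipped at zero when the boxes are disjoint.
    overlap = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
    inter = np.prod(overlap)
    union = np.prod(a[3:]) + np.prod(b[3:]) - inter
    return inter / union

def nms_3d(boxes, scores, iou_thresh=0.35):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    every remaining box that overlaps it above iou_thresh."""
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[[iou_3d(boxes[i], boxes[j]) < iou_thresh for j in rest]]
    return keep
```

For example, two near-duplicate detections of the same chair collapse to one box, while a far-away detection survives:

```python
boxes = np.array([[0.0, 0, 0, 2, 2, 2],   # best-scoring detection
                  [0.1, 0, 0, 2, 2, 2],   # near-duplicate, suppressed
                  [5.0, 5, 5, 1, 1, 1]])  # distinct object, kept
scores = np.array([0.9, 0.8, 0.7])
nms_3d(boxes, scores)  # → [0, 2]
```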