Object and scene recognition are two challenging but essential tasks in image understanding. In particular, the use of RGB-D sensors for these tasks has emerged as an important focus area for better visual understanding. Meanwhile, deep neural networks, specifically convolutional neural networks (CNNs), have become widespread and have been applied to many visual tasks by replacing hand-crafted features with effective deep features. However, how to effectively exploit deep features from a multi-layer CNN model remains an open problem. In this paper, we propose a novel two-stage framework that extracts discriminative feature representations from multi-modal RGB-D images for object and scene recognition tasks. In the first stage, a pretrained CNN model is employed as a backbone to extract visual features at multiple levels. The second stage efficiently maps these features to high-level representations with a fully randomized structure of recursive neural networks (RNNs). To cope with the high dimensionality of CNN activations, we propose a random weighted pooling scheme that extends the idea of randomness in RNNs. Multi-modal fusion is performed through a soft voting approach that computes weights from the individual recognition confidences (i.e., SVM scores) of the RGB and depth streams, yielding consistent class-label estimates in the final RGB-D classification. Extensive experiments verify that the fully randomized structure of the RNN stage successfully encodes CNN activations into discriminative, compact features. Comparative results on the popular Washington RGB-D Object and SUN RGB-D Scene datasets show that the proposed approach achieves superior or on-par performance compared to state-of-the-art methods on both object and scene recognition. Code is available at https://github.com/acaglayan/CNN_randRNN.
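To make the second stage concrete, the sketch below illustrates how a recursive neural network with fixed, untrained random weights can encode a CNN feature map into a fixed-length vector. This is a minimal NumPy illustration, not the authors' released implementation: the function name `random_rnn_encode`, the 2x2 child-merging tree, and the assumption that the spatial size is a power of two are all illustrative choices; the paper's exact tree structure and pooling may differ (see the repository above for the real code).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rnn_encode(feat_map, num_rnn=8, rng=rng):
    """Encode a CNN feature map of shape (C, H, W) into a fixed-length
    vector using `num_rnn` recursive neural networks whose weights are
    random and never trained. Assumes H == W and H is a power of two,
    so each recursion level merges non-overlapping 2x2 blocks of child
    nodes into one parent node: parent = tanh(W [c1; c2; c3; c4]).
    """
    C, H, W = feat_map.shape
    codes = []
    for _ in range(num_rnn):
        # One fixed random weight matrix per RNN, shared across all tree levels.
        Wr = rng.standard_normal((C, 4 * C)) * np.sqrt(1.0 / (4 * C))
        x = feat_map
        while x.shape[1] > 1:
            h, w = x.shape[1], x.shape[2]
            # Stack each 2x2 block of children into a single 4C-dim column.
            blocks = x.reshape(C, h // 2, 2, w // 2, 2)
            children = blocks.transpose(2, 4, 0, 1, 3).reshape(4 * C, h // 2, w // 2)
            # Apply the random weights and nonlinearity to get parent nodes.
            x = np.tanh(np.einsum('cd,dhw->chw', Wr, children))
        codes.append(x.reshape(C))  # root node of this RNN's tree
    return np.concatenate(codes)   # (num_rnn * C,) feature vector

# Usage: a hypothetical 64-channel, 8x8 activation map yields a 512-dim code.
feat = rng.standard_normal((64, 8, 8))
vec = random_rnn_encode(feat)
assert vec.shape == (8 * 64,)
```

Because the weights are random rather than learned, the encoding requires no backpropagation through the RNN stage, which is what makes the second stage efficient.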
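The soft voting fusion step can likewise be sketched briefly. The snippet below is an assumption-laden illustration of confidence-weighted fusion of per-class SVM scores from the RGB and depth streams: the function `soft_vote` and the use of the top-1 vs. top-2 score margin as the confidence weight are illustrative choices, not necessarily the paper's exact weighting.

```python
import numpy as np

def soft_vote(rgb_scores, depth_scores):
    """Fuse per-class SVM decision scores from the RGB and depth streams
    by weighted soft voting. Each stream's weight is derived from its own
    confidence; here (an illustrative assumption) confidence is the margin
    between the stream's top two class scores.
    """
    def confidence(scores):
        top2 = np.sort(scores)[-2:]
        return max(top2[1] - top2[0], 1e-6)  # top-1 over top-2 margin

    w_rgb = confidence(rgb_scores)
    w_depth = confidence(depth_scores)
    fused = (w_rgb * rgb_scores + w_depth * depth_scores) / (w_rgb + w_depth)
    return int(np.argmax(fused)), fused

# Usage: the depth stream is more confident here, so it dominates the vote.
rgb = np.array([0.2, 2.1, 1.9])    # ambiguous between classes 1 and 2
depth = np.array([1.5, 0.1, 0.3])  # strongly favors class 0
label, fused = soft_vote(rgb, depth)
```

Weighting each stream by its own confidence lets the fusion defer to whichever modality is more certain on a given sample, which is what produces the consistent class-label estimates described above.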