One of the most prominent attributes of Neural Networks (NNs) is their capability to learn robust and descriptive features from high-dimensional data, such as images. This ability makes their exploitation as feature extractors particularly common in an abundance of modern reasoning systems, with an application scope that mainly includes complex cascade tasks, such as multi-modal recognition and deep Reinforcement Learning (RL). However, NNs induce implicit biases that are difficult to avoid or handle and that are not encountered in traditional image descriptors. Moreover, the lack of knowledge for describing the intra-layer properties -- and thus the networks' general behavior -- restricts the further applicability of the extracted features. In the paper at hand, a novel way of visualizing and understanding the vector space before the NNs' output layer is presented, aiming to shed light on the deep feature vectors' properties under classification tasks. Main attention is paid to the nature of overfitting in the feature space and its adverse effect on further exploitation. We present the findings that can be derived from our model's formulation and evaluate them on realistic recognition scenarios, demonstrating the model's merit by improving the obtained results.
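To make concrete what "the vector space before the NNs' output layer" refers to, the sketch below shows one common way of extracting such deep feature vectors: reading the activations that feed the final classification layer via a forward hook. The architecture (ResNet-18), the PyTorch hook mechanism, and the dummy input batch are illustrative assumptions of ours, not details taken from the paper.

```python
import torch
import torchvision.models as models

# Load a pretrained classifier (ResNet-18 is an arbitrary illustrative
# choice, not necessarily the architecture used in the paper).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

features = {}

def hook(module, inputs, output):
    # Cache the activations entering the final fully connected layer,
    # i.e. the deep feature vectors living in the space before the output.
    features["penultimate"] = inputs[0].detach()

# In torchvision ResNets, model.fc is the output layer, so its input
# is the penultimate-layer representation.
model.fc.register_forward_hook(hook)

with torch.no_grad():
    x = torch.randn(4, 3, 224, 224)  # a dummy batch of 4 RGB images
    logits = model(x)

print(features["penultimate"].shape)  # torch.Size([4, 512])
```

Feature vectors collected this way over a labeled dataset are what one would then visualize or probe, e.g. to inspect class-wise structure or signs of overfitting in the feature space.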