Humans have perfected the art of learning from multiple modalities through their sensory organs. Despite their impressive predictive performance on a single modality, neural networks cannot reach human-level accuracy when learning from multiple modalities, a particularly challenging task due to variations in the structure of the respective modalities. Conditional Batch Normalization (CBN) is a popular method proposed to learn contextual features that aid deep learning tasks. The technique uses auxiliary data to improve the representational power of convolutional neural networks by learning affine transformations conditioned on that data. Despite the performance boost observed when using CBN layers, our work reveals that the visual features learned by introducing auxiliary data via CBN deteriorate. We perform comprehensive experiments evaluating the brittleness of CBN networks on various datasets, suggesting that learning from visual features alone can often be superior for generalization. We evaluate CBN models on natural images for bird classification and on histology images for cancer-type classification. We observe that the CBN network learns close to no visual features on the bird classification dataset and only partial visual features on the histology dataset. Our extensive experiments reveal that CBN may encourage shortcut learning between the auxiliary data and the labels.
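To make the technique under study concrete, the following is a minimal sketch of a Conditional Batch Normalization layer in PyTorch. It is an illustrative implementation, not the authors' code: the class name, the single-linear-layer conditioning network, and the near-identity initialization of the affine parameters are all assumptions for the sake of the example. The core idea is that an auxiliary embedding (e.g. a text or metadata encoding) predicts per-channel shifts to the batch-norm scale and offset.

```python
import torch
import torch.nn as nn


class ConditionalBatchNorm2d(nn.Module):
    """Illustrative CBN layer: an auxiliary embedding conditions the
    per-channel affine parameters applied after batch normalization."""

    def __init__(self, num_channels: int, aux_dim: int):
        super().__init__()
        # BatchNorm without its own affine terms; CBN supplies them instead.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        # Hypothetical conditioning network: predicts delta-gamma and
        # delta-beta for every channel from the auxiliary data.
        self.mlp = nn.Linear(aux_dim, 2 * num_channels)

    def forward(self, x: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        delta = self.mlp(aux)                  # shape (B, 2C)
        dgamma, dbeta = delta.chunk(2, dim=1)  # shape (B, C) each
        # Start near the identity transform (gamma ~ 1, beta ~ 0) so the
        # auxiliary signal perturbs rather than replaces the BN output.
        gamma = (1.0 + dgamma).unsqueeze(-1).unsqueeze(-1)
        beta = dbeta.unsqueeze(-1).unsqueeze(-1)
        return gamma * self.bn(x) + beta
```

A batch of image features `x` of shape `(B, C, H, W)` and auxiliary embeddings `aux` of shape `(B, aux_dim)` are normalized and then modulated channel-wise; the output retains the shape of `x`. Because the label-relevant signal can flow entirely through `aux`, nothing in this construction forces the network to rely on the visual features, which is the shortcut-learning risk examined in this work.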