An evaluation criterion for safe and trustworthy deep learning is how well the invariances captured by representations of deep neural networks (DNNs) are shared with humans. We identify challenges in measuring these invariances. Prior works used gradient-based methods to generate \textit{identically represented inputs} (IRIs), \ie, inputs that have identical representations (at a given layer) of a neural network, and thus capture the invariances of that network. One necessary criterion for a network's invariances to align with human perception is that its IRIs look `similar' to humans. Prior works, however, reach mixed conclusions: some argue that later layers of DNNs do not learn human-like invariances (\cite{jenelle2019metamers}), while others seem to indicate otherwise (\cite{mahendran2014understanding}). We argue that the loss function used to generate IRIs can heavily influence conclusions about a network's invariances and is the primary reason for these conflicting findings. We propose an \textit{adversarial} regularizer on the IRI generation loss that finds IRIs which make any model appear to have very little shared invariance with humans. Based on this evidence, we argue that there is scope for improving models to have human-like invariances, and further, that meaningful comparisons between models should use IRIs generated with the \textit{regularizer-free} loss. We then conduct an in-depth investigation of how different components (\eg~architectures, training losses, data augmentations) of the deep learning pipeline contribute to learning models whose invariances align well with humans. We find that architectures with residual connections trained using a (self-supervised) contrastive loss with $\ell_p$-ball adversarial data augmentation tend to learn invariances that are most aligned with humans.
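For concreteness, below is a minimal PyTorch sketch of the gradient-based IRI generation described above. The function name \texttt{generate\_iri}, the optimizer settings, and the pixel-space form of the adversarial regularizer (the \texttt{lam} term, which pushes the IRI away from the reference image while keeping representations matched) are illustrative assumptions, not the paper's exact formulation.
\begin{verbatim}
import torch

def generate_iri(layer_fn, x_ref, x_init, steps=500, lr=0.05, lam=0.0):
    """Find x whose representation layer_fn(x) matches that of x_ref.

    layer_fn: maps an image batch to its representation at a chosen layer.
    lam = 0 gives the regularizer-free loss; lam > 0 adds an (assumed)
    adversarial term that also pushes x away from x_ref in pixel space,
    yielding IRIs that look dissimilar to humans.
    """
    with torch.no_grad():
        z_ref = layer_fn(x_ref)                      # target representation
    x = x_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        match = (layer_fn(x) - z_ref).pow(2).mean()  # representation matching
        reg = -lam * (x - x_ref).pow(2).mean()       # adversarial regularizer
        (match + reg).backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)                      # keep a valid image
    return x.detach()

# Hypothetical usage: with a truncated ResNet as layer_fn,
#   iri = generate_iri(layer_fn, x_ref, torch.rand_like(x_ref), lam=0.1)
\end{verbatim}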