Neural networks represent data as projections onto trained weights in a high-dimensional manifold. The trained weights act as a knowledge base consisting of causal class dependencies. Inference built on features that identify these dependencies is termed feed-forward inference. Such inference mechanisms are justified by classical cause-to-effect inductive reasoning models. Feed-forward inference based on inductive reasoning is widely used because of its mathematical simplicity and operational ease. Nevertheless, feed-forward models do not generalize well to untrained situations. To alleviate this generalization challenge, we propose using an effect-to-cause inference model that reasons abductively. Here, the features represent the change required in the existing weight dependencies to account for a given effect. We term this change contrast and the ensuing reasoning mechanism contrastive reasoning. In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast. We demonstrate the value of contrastive reasoning in two stages of a neural network's reasoning pipeline: in inferring and in visually explaining decisions for the application of object recognition. We illustrate the value of contrastively recognizing images under distortions by reporting improvements of 3.47%, 2.56%, and 5.48% in average accuracy under the proposed contrastive framework on the CIFAR-10C, noisy STL-10, and VisDA datasets, respectively.
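To make the effect-to-cause direction concrete, the sketch below shows one plausible way to extract a "contrast" from a trained network: fix an effect (a hypothesized contrast class), backpropagate its loss, and read off the per-parameter gradients as the change demanded of the existing weight dependencies. This is a minimal illustration, not the paper's released implementation; the choice of resnet18, the cross-entropy loss, and the helper `extract_contrast` are assumptions made for this example.

```python
# Hedged sketch: extracting a network's "contrast" for a hypothesized
# contrast class. Assumption-laden illustration, not the authors' code.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def extract_contrast(model, x, contrast_class):
    """Fix the effect (contrast_class), backpropagate its loss, and return
    per-parameter gradients: the change the existing weight dependencies
    would need in order to produce that effect."""
    model.zero_grad()
    logits = model(x)                      # ordinary feed-forward pass
    target = torch.tensor([contrast_class])
    loss = F.cross_entropy(logits, target) # effect fixed by hypothesis
    loss.backward()                        # trace backward from effect to cause
    return {name: p.grad.detach().clone()
            for name, p in model.named_parameters() if p.grad is not None}

# Usage (illustrative only): contrast of a random image against class 0.
x = torch.randn(1, 3, 224, 224)
contrast = extract_contrast(model, x, contrast_class=0)
```

Under this reading, feed-forward inference uses the activations of the forward pass, while contrastive reasoning uses the gradients induced by a hypothesized effect as its features.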