We define a notion of the information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by those weights. Though related, we show that these two quantities exhibit qualitatively different behavior. We give efficient approximations of these quantities using a linearized network and demonstrate empirically that the approximation is accurate for real-world architectures, such as pre-trained ResNets. We apply these measures to several problems, such as dataset summarization, analysis of under-sampled classes, comparison of informativeness of different data sources, and detection of adversarial and corrupted examples. Our work generalizes existing frameworks while enjoying better computational properties for heavily over-parametrized models, which makes it applicable to real-world networks.
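To make the linearized-network idea concrete, below is a minimal sketch under simplifying assumptions: plain ridge regression on fixed features stands in for the network's linearization, and the weight-space and function-space norms of the leave-one-out change serve as rough proxies for the two information measures. The function `loo_informativeness`, the choice of norms, and the regularizer `lam` are illustrative assumptions, not the authors' actual (KL-based) estimator.

```python
# Sketch: leave-one-out weight and function changes for a linearized model,
# approximated here by ridge regression on fixed features.
import numpy as np

def loo_informativeness(X, y, lam=1e-3):
    """For each sample i, return proxies for how much it informs the
    weights (||w_{-i} - w||) and the function (||X @ (w_{-i} - w)||)."""
    n, d = X.shape
    A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
    w = A_inv @ X.T @ y                            # full-data solution
    weight_info, func_info = np.zeros(n), np.zeros(n)
    for i in range(n):
        x_i = X[i]
        # Sherman-Morrison update: invert (A - x_i x_i^T) without refitting.
        Ax = A_inv @ x_i
        A_inv_i = A_inv + np.outer(Ax, Ax) / (1.0 - x_i @ Ax)
        w_i = A_inv_i @ (X.T @ y - x_i * y[i])     # weights without sample i
        weight_info[i] = np.linalg.norm(w_i - w)
        func_info[i] = np.linalg.norm(X @ (w_i - w))
    return weight_info, func_info

# Toy usage with random features and labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
w_info, f_info = loo_informativeness(X, y)
print(w_info[:3], f_info[:3])
```

The two returned scores illustrate the weight-space versus function-space distinction drawn in the abstract: a sample can shift the weights noticeably while barely changing the predictions, and vice versa.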