To deploy and operate deep neural models in production, the quality of their predictions, which may be benignly contaminated or maliciously manipulated by input distributional deviations, must be monitored and assessed. Specifically, we study the problem of monitoring the healthy operation of a deep neural network (DNN) that receives a stream of data, with the aim of detecting input distributional deviations under which the quality of the network's predictions is potentially degraded. Using selective prediction principles, we propose a distribution deviation detection method for DNNs. The proposed method is derived from a tight coverage generalization bound computed over a sample of instances drawn from the true underlying distribution. Based on this bound, our detector continuously monitors the operation of the network over a test window and fires an alarm whenever a deviation is detected. This novel detection method consistently and significantly outperforms the state of the art on the CIFAR-10 and ImageNet datasets, establishing a new performance bar for this task, while being substantially more efficient in both time and space complexity.
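To make the monitoring loop concrete, here is a minimal sketch of window-based coverage monitoring, assuming the softmax response is used as the selection score. It substitutes a simple Hoeffding-style lower bound for the paper's tight coverage generalization bound, and all function names (`calibrate_threshold`, `coverage_lower_bound`, `shift_detected`) are hypothetical illustrations rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrate_threshold(source_conf, target_coverage=0.9):
    """Threshold theta such that roughly target_coverage of the source
    instances are covered (confidence >= theta)."""
    return np.quantile(source_conf, 1.0 - target_coverage)

def coverage_lower_bound(emp_coverage, n, delta=0.01):
    """One-sided Hoeffding lower bound on the true coverage, holding
    with probability >= 1 - delta. This is a loose stand-in for the
    paper's tight coverage generalization bound."""
    return emp_coverage - np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def shift_detected(window_conf, theta, bound):
    """Alarm fires when the test window's empirical coverage falls
    below the bound computed on in-distribution data."""
    return np.mean(window_conf >= theta) < bound

# Toy usage with synthetic confidence scores: an in-distribution stream
# versus a shifted, systematically less confident stream.
source = rng.beta(8, 2, size=5000)           # confident in-distribution scores
theta = calibrate_threshold(source, 0.9)
bound = coverage_lower_bound(np.mean(source >= theta), len(source))

in_dist_window = rng.beta(8, 2, size=500)
shifted_window = rng.beta(4, 4, size=500)    # shift: confidence mass moves down
print(shift_detected(in_dist_window, theta, bound))   # typically False
print(shift_detected(shifted_window, theta, bound))   # typically True
```

The design intuition the sketch illustrates: under the source distribution, the empirical coverage of any test window should stay above the lower bound with high probability, so a window whose coverage drops below the bound is statistical evidence of an input distribution shift.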