Adoption of deep learning in safety-critical systems raises the need to understand what deep neural networks do not understand after the models have been deployed. The behaviour of deep neural networks is undefined for so-called out-of-distribution examples, that is, examples drawn from a different distribution than the training set. Several methodologies for detecting out-of-distribution examples during prediction time have been proposed, but these methodologies either constrain the neural network architecture or how it is trained, suffer from performance overhead, or assume that the nature of out-of-distribution examples is known a priori. We present Distance to Modelled Embedding (DIME), which we use to detect out-of-distribution examples during prediction time. By approximating the training set embedding into feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant and computationally efficient method. DIME allows us to add prediction-time detection of out-of-distribution examples to neural network models without altering the architecture or training procedure, while imposing minimal constraints on when it is applicable. In our experiments, we demonstrate that by using DIME as an add-on after training, we efficiently detect out-of-distribution examples during prediction and match state-of-the-art methods while being more versatile and introducing negligible computational overhead.
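To make the hyperplane approximation concrete, the following is a minimal sketch, not the authors' reference implementation: it assumes embeddings are extracted from an intermediate layer of an already-trained network and that the hyperplane is fitted with a truncated SVD of the centred training embeddings. The class name `DIMESketch`, the parameter `n_components`, and the residual-norm score are illustrative assumptions.

```python
import numpy as np


class DIMESketch:
    """Sketch of distance-to-hyperplane out-of-distribution scoring.

    Hypothetical helper, not the published DIME implementation: the
    training embeddings are approximated by the span of their top
    principal components, and the score is the distance from a new
    embedding to that linear hyperplane.
    """

    def __init__(self, n_components: int):
        self.n_components = n_components

    def fit(self, train_embeddings: np.ndarray) -> "DIMESketch":
        # Centre the training embeddings and keep the top components
        # of a truncated SVD as the modelled hyperplane.
        self.mean_ = train_embeddings.mean(axis=0)
        centred = train_embeddings - self.mean_
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        self.components_ = vt[: self.n_components]  # shape (k, d)
        return self

    def score(self, embeddings: np.ndarray) -> np.ndarray:
        # Project each embedding onto the hyperplane, reconstruct it,
        # and return the norm of the residual; larger residuals
        # indicate inputs farther from the modelled training embedding.
        centred = embeddings - self.mean_
        projected = centred @ self.components_.T @ self.components_
        residual = centred - projected
        return np.linalg.norm(residual, axis=1)
```

Under these assumptions, the model would be fitted once on training-set embeddings after training finishes, and at prediction time inputs whose residual distance exceeds a threshold calibrated on held-out in-distribution data would be flagged as out-of-distribution.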