Many neural network-based out-of-distribution (OoD) detection methods have been proposed. However, they require a large amount of training data for each target task. We propose a simple yet effective meta-learning method for OoD detection with a small amount of in-distribution data in a target task. With the proposed method, OoD detection is performed by density estimation in a latent space. A neural network shared among all tasks is used to flexibly map instances from the original space to the latent space. The neural network is meta-learned such that the expected OoD detection performance is improved by using various tasks that are different from the target task. This meta-learning procedure enables us to obtain appropriate latent-space representations for OoD detection. For density estimation, we use a Gaussian mixture model (GMM) with a full covariance matrix for each class. We can adapt the GMM parameters to the in-distribution data of each task in closed form by maximizing the likelihood. Since the closed-form solution is differentiable, we can meta-learn the neural network efficiently with a stochastic gradient descent method by incorporating the solution into the meta-learning objective function. In experiments using six datasets, we demonstrate that the proposed method achieves better performance than existing meta-learning and OoD detection methods.
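The density-estimation step above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes one full-covariance Gaussian component per class (the simplest instance of the per-class GMM described in the abstract), takes latent embeddings as given rather than producing them with the meta-learned network, and scores a query point by the negative maximum class-conditional log-density. All function names here are hypothetical.

```python
# Hedged sketch (not the paper's code): closed-form maximum-likelihood
# fitting of a full-covariance Gaussian per class in a latent space,
# and an OoD score given by the negative max class log-density.
import numpy as np

def fit_class_gaussians(Z, y, eps=1e-6):
    """Closed-form MLE of one full-covariance Gaussian per class.
    Z: (N, D) latent embeddings; y: (N,) integer class labels.
    eps adds a small ridge to keep each covariance invertible."""
    params = {}
    for c in np.unique(y):
        Zc = Z[y == c]
        mu = Zc.mean(axis=0)                      # MLE mean
        diff = Zc - mu
        cov = diff.T @ diff / len(Zc)             # MLE covariance
        params[c] = (mu, cov + eps * np.eye(Z.shape[1]))
    return params

def log_gaussian(z, mu, cov):
    """Log-density of a multivariate Gaussian at point z."""
    D = len(mu)
    diff = z - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)      # Mahalanobis term
    return -0.5 * (D * np.log(2 * np.pi) + logdet + maha)

def ood_score(z, params):
    """Higher score = more likely out-of-distribution."""
    return -max(log_gaussian(z, mu, cov) for mu, cov in params.values())
```

Because the mean and covariance estimates are differentiable functions of the embeddings, gradients can flow through this adaptation step back into the embedding network during meta-learning, which is the property the abstract exploits.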