This paper proposes a deep representation learning method with an information-theoretic loss that aims to increase inter-class distance as well as within-class similarity in the embedded space. Tasks such as anomaly and out-of-distribution detection, in which test samples come from classes unseen during training, are problematic for deep neural networks. For such tasks, it is not sufficient to merely discriminate between known classes. Our intuition is to represent the known classes as compact and separated regions in the embedded space in order to reduce the possibility of known and unseen classes overlapping there. We derive a loss from the Information Bottleneck principle that reflects inter-class distances as well as within-class compactness, thereby extending existing deep data description models. Our empirical study shows that the proposed model improves the separation of normal classes in the deep feature space and subsequently contributes to identifying out-of-distribution samples.
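For context, a minimal statement of the standard Information Bottleneck objective from which such a loss can be derived (the abstract does not give the paper's exact loss; $\beta$ denotes the usual trade-off parameter):
\[
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y),
\]
where $X$ is the input, $Y$ the class label, $Z$ the learned representation, and $\beta > 0$ balances compression of the input against preservation of label information. Intuitively, compressing $X$ encourages within-class compactness, while retaining information about $Y$ encourages inter-class separation.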