Existing popular unsupervised embedding learning methods focus on enhancing the instance-level local discrimination of the given unlabeled images by exploring various negative data. However, sample outliers that exhibit large intra-class divergences or small inter-class variations severely limit their learning performance. We show that this performance limitation is caused by vanishing gradients on these sample outliers. Moreover, the shortage of positive data and the disregard for global discrimination also pose critical issues for unsupervised learning but are largely ignored by existing methods. To handle these issues, we propose a novel solution that explicitly models and directly explores the uncertainty of the given unlabeled learning samples. Instead of learning a deterministic feature point for each sample in the embedding space, we propose to represent a sample by a stochastic Gaussian, with the mean vector depicting its localization in the embedding space and the covariance vector representing the sample uncertainty. We leverage such uncertainty modeling as momentum during learning, which helps to tackle the outliers. Furthermore, abundant positive candidates can be readily drawn from the learned instance-specific distributions and are further adopted to mitigate the aforementioned issues. Thorough rationale analyses and extensive experiments are presented to verify the superiority of our method.
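The abstract's core recipe, a Gaussian per instance (mean vector for localization, covariance vector for uncertainty) from which positive candidates are sampled, can be sketched as below. This is a minimal PyTorch sketch under our own assumptions: the names `StochasticEmbeddingHead` and `sample_positives`, the diagonal log-variance parameterization, and the unit-norm projection are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticEmbeddingHead(nn.Module):
    """Maps backbone features to a Gaussian embedding: a mean vector
    (spatial localization) and a diagonal log-variance vector
    (sample uncertainty). Hypothetical module, for illustration only."""

    def __init__(self, in_dim=2048, embed_dim=128):
        super().__init__()
        self.mu_head = nn.Linear(in_dim, embed_dim)      # mean vector
        self.logvar_head = nn.Linear(in_dim, embed_dim)  # log of diagonal covariance

    def forward(self, feats):
        # Unit-norm mean, as is common in embedding learning (assumption).
        mu = F.normalize(self.mu_head(feats), dim=1)
        logvar = self.logvar_head(feats)
        return mu, logvar

def sample_positives(mu, logvar, num_samples=4):
    """Draw positive candidates from each instance-specific Gaussian
    via the reparameterization trick: z = mu + sigma * eps."""
    std = torch.exp(0.5 * logvar)                 # (B, D)
    eps = torch.randn(num_samples, *mu.shape)     # (K, B, D)
    z = mu.unsqueeze(0) + std.unsqueeze(0) * eps  # (K, B, D)
    return F.normalize(z, dim=-1)                 # project back to the unit sphere

# Usage: feats stand in for pooled features from any backbone (e.g. a ResNet).
feats = torch.randn(32, 2048)
head = StochasticEmbeddingHead()
mu, logvar = head(feats)
positives = sample_positives(mu, logvar)  # (4, 32, 128) positive candidates
```

The sampled `positives` would then feed a standard instance-discrimination loss as extra positive data, which is how the abstract's "abundant positive candidates" could enter training; the exact loss and how the covariance acts as a learning momentum are specified in the paper body, not here.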