Softmax classifiers with a very large number of classes arise naturally in many applications, such as natural language processing and information retrieval. Computing the full softmax is costly from both a computational and an energy perspective. Various sampling approaches, popularly known as negative sampling (NS), have been proposed to overcome this challenge. Ideally, NS should draw negative classes from a distribution that depends on the input data, the current parameters, and the correct positive class. Unfortunately, because the parameters and data samples are updated dynamically, no existing sampling scheme is provably adaptive while also sampling negative classes efficiently. Practitioners therefore fall back on heuristics such as uniform random sampling, static frequency-based sampling, or learning-based biased sampling, which trade off either the per-iteration sampling cost or the adaptivity of the samples. In this paper, we exhibit two classes of distributions for which the sampling scheme is truly adaptive and provably generates negative samples in near-constant time. Our C++ implementation on CPU is significantly superior, in both wall-clock time and accuracy, to the most optimized TensorFlow implementations of other popular negative sampling approaches running on a powerful NVIDIA V100 GPU.
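To make the cost contrast concrete, the following is a minimal NumPy sketch (not the paper's method) comparing a full softmax cross-entropy, which touches all C output classes, against a simple uniform negative sampling baseline that scores the positive class against only a handful of sampled negatives. The function names and the uniform proposal distribution are illustrative assumptions; the paper's contribution is an adaptive, input- and parameter-dependent sampler, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

def full_softmax_loss(logits, target):
    # Cross-entropy over all C classes: O(C) work per example.
    z = logits - logits.max()  # shift for numerical stability
    return -(z[target] - np.log(np.exp(z).sum()))

def negative_sampling_loss(logits, target, num_neg=5):
    # Illustrative uniform NS baseline: evaluate the positive class
    # against num_neg uniformly drawn negatives instead of all C classes,
    # so the per-example cost is O(num_neg) rather than O(C).
    C = logits.shape[0]
    candidates = np.delete(np.arange(C), target)  # exclude the positive
    negatives = rng.choice(candidates, size=num_neg, replace=False)
    idx = np.concatenate(([target], negatives))
    z = logits[idx] - logits[idx].max()
    return -(z[0] - np.log(np.exp(z).sum()))  # positive sits at index 0

C = 100_000                   # large output vocabulary
logits = rng.normal(size=C)   # scores for one example
print(full_softmax_loss(logits, target=42))
print(negative_sampling_loss(logits, target=42))
```

The uniform proposal here is exactly the kind of non-adaptive heuristic the abstract criticizes: it ignores the input, the current parameters, and the positive class, which is what the paper's near-constant-time adaptive sampler is designed to fix.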