We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing a similarity matrix for $n$ data points requires $\Omega(n^2)$ similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity, often by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. Significant work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise, e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nystr\"{o}m method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix, producing a rank-$s$ approximation with just $O(ns)$ similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in the downstream tasks of document classification, sentence similarity, and cross-document coreference.
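To make the $O(ns)$ cost concrete, the following is a minimal sketch of a standard Nystr\"{o}m-style approximation: sample $s$ landmark points, compute only the $n \times s$ block of similarities against the landmarks, and reconstruct the full matrix via the pseudoinverse of the landmark block. Using an eigendecomposition-based pseudoinverse (rather than a Cholesky factor) lets the sketch run on indefinite landmark blocks as well. All names (`similarity_fn`, `nystrom_approx`) are illustrative, not the paper's actual interface, and this is the classical method, not the paper's generalization.

```python
import numpy as np

def nystrom_approx(similarity_fn, points, s, seed=None):
    """Rank-(at most s) Nystrom approximation of the pairwise similarity matrix.

    Uses only O(n*s) calls to similarity_fn instead of the O(n^2) needed
    for the exact matrix. Hypothetical interface for illustration.
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    idx = rng.choice(n, size=s, replace=False)  # landmark indices
    # C: similarities between all n points and the s landmarks (n*s calls)
    C = np.array([[similarity_fn(p, points[j]) for j in idx] for p in points])
    W = C[idx]  # s x s landmark-landmark block
    # Moore-Penrose pseudoinverse handles an indefinite (or singular) W
    return C @ np.linalg.pinv(W) @ C.T

# Toy example with an indefinite similarity (tanh of inner products is
# not PSD in general)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
sim = lambda a, b: np.tanh(a @ b)
A_hat = nystrom_approx(sim, X, s=25, seed=1)  # 50 x 50 approximation
```

Since `sim` is symmetric, the landmark block $W$ is symmetric and the reconstruction $C W^{+} C^{\top}$ is symmetric as well; the approximation has rank at most $s$ by construction.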