The Column Subset Selection Problem (CSSP) and the Nystr\"om method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing. A fundamental question in this area is: how well can a data subset of size k compete with the best rank k approximation? We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis. Our approach leads to significantly better bounds for datasets with known rates of singular value decay, e.g., polynomial or exponential decay. Our analysis also reveals an intriguing phenomenon: the approximation factor as a function of k may exhibit multiple peaks and valleys, which we call a multiple-descent curve. A lower bound we establish shows that this behavior is not an artifact of our analysis, but rather it is an inherent property of the CSSP and Nystr\"om tasks. Finally, using the example of a radial basis function (RBF) kernel, we show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
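The central quantity in the abstract, the approximation factor of a size-k column subset relative to the best rank-k approximation, can be sketched numerically. The snippet below is a minimal illustration, not the paper's selection algorithm: it builds a matrix with exponential singular value decay (one of the decay regimes mentioned above), picks k columns uniformly at random (an assumed baseline rule), and compares the projection error against the optimal rank-k error from the SVD tail. All sizes and the decay rate are assumptions for illustration.

```python
import numpy as np

# Sketch of the CSSP approximation factor: how well k columns of A
# compete with the best rank-k approximation. The uniform-random
# column choice here is a simple baseline, not the paper's method.
rng = np.random.default_rng(0)
n, d, k = 100, 60, 10

# Matrix with exponential singular value decay: sigma_i = 2^{-i}.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = 2.0 ** -np.arange(d)
A = U @ np.diag(s) @ V.T

# Select k columns and project A onto their span.
idx = rng.choice(d, size=k, replace=False)
C = A[:, idx]
A_proj = C @ np.linalg.pinv(C) @ A

# Best rank-k error is the Frobenius norm of the trailing singular values.
err_subset = np.linalg.norm(A - A_proj)
err_best = np.sqrt(np.sum(s[k:] ** 2))
factor = err_subset / err_best
print(f"approximation factor at k={k}: {factor:.3f}")
```

Since the column projection has rank at most k, the factor is always at least 1; how far above 1 it sits, as k varies, is exactly the curve the abstract studies.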