Many recent advances in machine learning are driven by a challenging trifecta: large data size $N$; high dimensions; and expensive algorithms. In this setting, cross-validation (CV) serves as an important tool for model assessment. Recent advances in approximate cross-validation (ACV) provide accurate approximations to CV with only a single model fit, avoiding traditional CV's requirement for repeated runs of expensive algorithms. Unfortunately, these ACV methods can lose both speed and accuracy in high dimensions -- unless sparsity structure is present in the data. Fortunately, there is an alternative type of simplifying structure that is present in most data: approximate low rank (ALR). Guided by this observation, we develop a new algorithm for ACV that is fast and accurate in the presence of ALR data. Our first key insight is that the Hessian matrix -- whose inverse forms the computational bottleneck of existing ACV methods -- is ALR. We show that, despite our use of the \emph{inverse} Hessian, a low-rank approximation using the largest (rather than the smallest) matrix eigenvalues enables fast, reliable ACV. Our second key insight is that, in the presence of ALR data, error in existing ACV methods roughly grows with the (approximate, low) rank rather than with the (full, high) dimension. These insights allow us to prove theoretical guarantees on the quality of our proposed algorithm -- along with fast-to-compute upper bounds on its error. We demonstrate the speed and accuracy of our method, as well as the usefulness of our bounds, on a range of real and simulated data sets.
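The first key insight above can be illustrated with a minimal numerical sketch. For a regularized objective, the Hessian takes the form $H = X^\top X / N + \lambda I$, and when the design matrix $X$ is approximately low rank, the data term is dominated by its top $k$ eigenpairs. The snippet below (a hedged illustration, not the paper's algorithm; the matrix sizes, rank $k$, noise scale, and regularization $\lambda$ are all assumptions chosen for demonstration) shows how an inverse-Hessian-vector product -- the bottleneck operation in ACV -- can be approximated using only the largest eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, k = 200, 100, 5  # sample size, dimension, approximate rank (assumed values)

# Approximately low-rank design matrix: rank-k signal plus small noise
X = rng.standard_normal((N, k)) @ rng.standard_normal((k, D)) \
    + 0.01 * rng.standard_normal((N, D))

lam = 1.0  # ridge-style regularization strength (assumed)
H = X.T @ X / N + lam * np.eye(D)  # Hessian of a quadratic/ridge objective

# Exact inverse-Hessian-vector product (requires an O(D^3) factorization)
v = rng.standard_normal(D)
exact = np.linalg.solve(H, v)

# Low-rank route: keep only the top-k eigenpairs of the data term.
# If X^T X / N = U diag(s) U^T, then
#   H^{-1} = U diag(1/(s_i + lam)) U^T,
# and treating the N - k smallest eigenvalues as zero gives
#   H^{-1} v ~= U_k diag(1/(s_i + lam)) U_k^T v + (1/lam)(v - U_k U_k^T v).
s, U = np.linalg.eigh(X.T @ X / N)
s_top, U_top = s[-k:], U[:, -k:]
proj = U_top.T @ v
approx = U_top @ (proj / (s_top + lam)) + (v - U_top @ proj) / lam

rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(rel_err)  # small, since the discarded eigenvalues are near zero
```

The point of the sketch is that, counterintuitively, the *largest* eigenvalues of the Hessian suffice even though we need its *inverse*: the regularizer $\lambda I$ caps the contribution of the discarded small eigenvalues, so the residual subspace can be handled by the cheap $(1/\lambda)$ projection term.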