We derive a simple unified framework giving closed-form estimates for the test risk and other generalization metrics of kernel ridge regression (KRR). Relative to prior work, our derivations are greatly simplified and our final expressions are more readily interpreted. These improvements are enabled by our identification of a sharp conservation law which limits the ability of KRR to learn any orthonormal basis of functions. Test risk and other objects of interest are expressed transparently in terms of our conserved quantity evaluated in the kernel eigenbasis. We use our improved framework to: i) provide a theoretical explanation for the "deep bootstrap" of Nakkiran et al. (2020), ii) generalize a previous result regarding the hardness of the classic parity problem, iii) fashion a theoretical tool for the study of adversarial robustness, and iv) draw a tight analogy between KRR and a well-studied system in statistical physics.
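The abstract's closed-form risk estimates are not reproduced here, but the central object they estimate is easy to state concretely. The following is a minimal sketch of KRR and its empirical test risk; the target function, RBF kernel, lengthscale, and ridge value are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

# Minimal sketch of kernel ridge regression (KRR) and its empirical test
# risk -- the quantity the framework estimates in closed form. The target
# function, kernel, lengthscale, and ridge are hypothetical choices.

def rbf_kernel(x1, x2, lengthscale=0.5):
    """Gaussian (RBF) kernel matrix between two sets of 1-D inputs."""
    sq_dists = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-sq_dists / (2 * lengthscale ** 2))

rng = np.random.default_rng(0)
target = lambda x: np.sin(2 * np.pi * x)  # hypothetical target function

x_train = rng.uniform(-1, 1, size=40)
y_train = target(x_train)
x_test = rng.uniform(-1, 1, size=200)

ridge = 1e-3  # explicit ridge regularization
K = rbf_kernel(x_train, x_train)
# KRR predictor: f(x) = k(x, X_train) @ (K + ridge * I)^{-1} y_train
alpha = np.linalg.solve(K + ridge * np.eye(len(x_train)), y_train)
y_pred = rbf_kernel(x_test, x_train) @ alpha

# Empirical test risk (mean squared error on held-out points)
test_risk = np.mean((y_pred - target(x_test)) ** 2)
print(f"empirical test risk: {test_risk:.4f}")
```

The paper's framework predicts how this risk behaves as a function of the kernel's eigenvalues and the target's decomposition in the kernel eigenbasis, without running such a simulation.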