Most machine learning methods depend on the tuning of hyper-parameters. For kernel ridge regression (KRR) with the Gaussian kernel, the hyper-parameter is the bandwidth. The bandwidth specifies the length-scale of the kernel and has to be carefully selected in order to obtain a model with good generalization. The default method for bandwidth selection is cross-validation, which often yields good results, albeit at a high computational cost. Furthermore, the estimates provided by cross-validation tend to have very high variance, especially when training data are scarce. Inspired by Jacobian regularization, we derive an expression for how the derivatives of the functions inferred by KRR with the Gaussian kernel depend on the kernel bandwidth. We then use this expression to propose a closed-form, computationally feather-light bandwidth selection method based on controlling the Jacobian. In addition, the Jacobian expression illuminates how bandwidth selection is a trade-off between the smoothness of the inferred function and the conditioning of the training data kernel matrix. We show on real and synthetic data that, compared to cross-validation, our method is considerably more stable in terms of bandwidth selection and, for small data sets, provides better predictions.
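To make the setting concrete, below is a minimal sketch (not the paper's proposed method) of KRR with the Gaussian kernel, where the bandwidth is chosen by grid-search cross-validation, i.e. the baseline the abstract compares against. The grid, the regularization strength, and the synthetic data are illustrative assumptions.

```python
# Sketch of the baseline: bandwidth selection for Gaussian-kernel KRR
# via cross-validation (NOT the Jacobian-based rule proposed in the paper).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))                 # small training set
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# KernelRidge parametrizes the RBF kernel as exp(-gamma * ||x - x'||^2),
# so gamma = 1 / (2 * bandwidth^2).
bandwidths = np.logspace(-2, 1, 20)                  # assumed search grid
param_grid = {"gamma": 1.0 / (2.0 * bandwidths**2)}

cv = GridSearchCV(
    KernelRidge(kernel="rbf", alpha=1e-3),           # alpha is an assumption
    param_grid,
    cv=5,
    scoring="neg_mean_squared_error",
)
cv.fit(X, y)

best_gamma = cv.best_params_["gamma"]
best_bandwidth = np.sqrt(1.0 / (2.0 * best_gamma))
print(f"cross-validated bandwidth: {best_bandwidth:.3f}")
```

On a grid of 20 candidate bandwidths with 5-fold cross-validation, this baseline fits the KRR model 100 times, which is the computational cost (and, on small samples, the variance) that the closed-form Jacobian-based rule is meant to avoid.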