This paper shows that gradient boosting based on symmetric decision trees can be equivalently reformulated as a kernel method that converges to the solution of a certain Kernel Ridgeless Regression problem. Thus, for low-rank kernels, we obtain convergence to a Gaussian process posterior mean, which in turn allows us to easily transform gradient boosting into a sampler from the posterior. We show that Monte Carlo estimation of the posterior variance with this sampler provides better knowledge uncertainty estimates, leading to improved out-of-domain detection.
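To make the Monte Carlo procedure concrete, below is a minimal sketch of the variance-based uncertainty estimate: an ensemble of independently randomized gradient-boosting models stands in for samples from the posterior, and the sample variance of their predictions serves as the knowledge uncertainty score. The use of scikit-learn's GradientBoostingRegressor, row subsampling as the source of randomness, and all hyperparameters are illustrative assumptions, not the paper's exact sampler.

```python
# Sketch: Monte Carlo estimation of posterior variance from an ensemble
# of randomized gradient-boosting models (illustrative stand-in for the
# paper's posterior sampler).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy in-domain training data: y = sin(x) + noise on [-3, 3].
X_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(200)

# Draw approximate posterior samples: each model is a different
# stochastic run of boosting (here via row subsampling and a new seed).
n_samples = 20
posterior_samples = [
    GradientBoostingRegressor(
        n_estimators=200, learning_rate=0.1, subsample=0.5, random_state=seed
    ).fit(X_train, y_train)
    for seed in range(n_samples)
]

# Score an in-domain point and a point far outside the training range.
X_test = np.array([[0.0], [10.0]])
preds = np.stack([m.predict(X_test) for m in posterior_samples])

posterior_mean = preds.mean(axis=0)
posterior_var = preds.var(axis=0)  # Monte Carlo estimate of posterior variance

for x, m, v in zip(X_test[:, 0], posterior_mean, posterior_var):
    print(f"x={x:5.1f}  mean={m:6.3f}  knowledge uncertainty (variance)={v:.4f}")
```

Points with large predictive variance across the sampled models are flagged as out-of-domain; in-domain points, where the posterior concentrates, receive low uncertainty scores.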