This paper shows that gradient boosting based on symmetric decision trees can be equivalently reformulated as a kernel method that converges to the solution of a certain Kernel Ridge Regression problem. As a consequence, we obtain convergence to the posterior mean of a Gaussian process, which in turn allows us to transform gradient boosting into a sampler from that posterior and thereby obtain better knowledge uncertainty estimates via Monte Carlo estimation of the posterior variance. We show that the proposed sampler yields improved knowledge uncertainty estimates, leading to better out-of-domain detection.
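The idea of Monte Carlo estimation of the posterior variance can be illustrated with a minimal sketch. This is not the paper's construction: as a stand-in for drawing samples from the induced posterior, it simply trains several independently randomized stochastic gradient boosting models (different random seeds with row subsampling, using scikit-learn's `GradientBoostingRegressor`) and treats their predictions as posterior samples, estimating predictive variance across them as an uncertainty score for out-of-domain detection.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = X_train[:, 0] ** 2 + 0.1 * rng.normal(size=200)

# Query points inside and far outside the training distribution
X_in = rng.uniform(-1, 1, size=(50, 2))
X_out = rng.uniform(4, 5, size=(50, 2))

# Stand-in "posterior samples": independently randomized boosted ensembles
# (the paper instead derives a principled sampler from the GP posterior)
preds_in, preds_out = [], []
for seed in range(10):
    model = GradientBoostingRegressor(
        n_estimators=100, subsample=0.5, max_depth=3, random_state=seed
    )
    model.fit(X_train, y_train)
    preds_in.append(model.predict(X_in))
    preds_out.append(model.predict(X_out))

# Monte Carlo estimate of the predictive variance at each query point;
# the mean variance serves as an aggregate uncertainty score
var_in = np.stack(preds_in).var(axis=0).mean()
var_out = np.stack(preds_out).var(axis=0).mean()
print(f"mean variance in-domain:  {var_in:.4f}")
print(f"mean variance out-of-domain: {var_out:.4f}")
```

Points whose per-point variance exceeds a threshold calibrated on in-domain data would be flagged as out-of-domain; the choice of training function and query ranges above is purely illustrative.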