Gradient Boosting Machines (GBMs) have demonstrated remarkable success in solving diverse problems by utilizing Taylor expansions in functional space. However, achieving a balance between performance and generality has remained a challenge for GBMs. In particular, gradient descent-based GBMs employ the first-order Taylor expansion to ensure applicability to all loss functions, whereas Newton's method-based GBMs rely on positive-definite Hessian information to achieve superior performance at the expense of generality. To address this issue, this study proposes a new generic Gradient Boosting Machine called Trust-region Boosting (TRBoost). In each iteration, TRBoost uses a constrained quadratic model to approximate the objective and applies a trust-region algorithm to solve it and obtain a new learner. Unlike Newton's method-based GBMs, TRBoost does not require the Hessian to be positive definite, allowing it to be applied to arbitrary loss functions while maintaining performance comparable to second-order algorithms. The convergence analysis and numerical experiments conducted in this study confirm that TRBoost is as general as first-order GBMs and yields results competitive with those of second-order GBMs. Overall, TRBoost is a promising approach that balances performance and generality, making it a valuable addition to the toolkit of machine learning practitioners.
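To make the core idea concrete, the sketch below illustrates one trust-region-style boosting iteration for the logistic loss. It is a minimal illustration under assumed notation, not the paper's implementation: the damping parameter mu, the thresholds eta1/eta2, the tree depth, and the ratio-test update rules are illustrative choices. What it demonstrates is that a damped leaf value -G/(H + mu) stays well defined even when the summed Hessian H is tiny or non-positive, and that the damping is adjusted by comparing the actual loss reduction with the reduction predicted by the quadratic model.

```python
# A minimal, illustrative sketch of one trust-region-style boosting iteration
# for the logistic loss.  The damping parameter mu, the thresholds eta1/eta2,
# and the update rules below are assumptions made for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def logistic_loss(y, f):
    # y in {0, 1}; f is the raw score (logit); summed over samples so the loss
    # is on the same scale as the quadratic model's predicted reduction.
    return np.sum(np.log1p(np.exp(f)) - y * f)

def tr_boost_step(X, y, f, mu, eta1=0.25, eta2=0.75):
    """One boosting iteration: fit a tree, take a damped (trust-region-like)
    step, then grow or shrink the damping via a ratio test."""
    p = 1.0 / (1.0 + np.exp(-f))
    g = p - y                      # per-sample gradient of the logistic loss
    h = p * (1.0 - p)              # per-sample Hessian (can be close to zero)

    # Fit a regression tree to the negative gradient to obtain a leaf partition.
    tree = DecisionTreeRegressor(max_depth=3).fit(X, -g)
    leaves = tree.apply(X)

    step = np.zeros_like(f)
    predicted_reduction = 0.0
    for leaf in np.unique(leaves):
        idx = leaves == leaf
        G, H = g[idx].sum(), h[idx].sum()
        w = -G / (H + mu)          # damping keeps the denominator positive
        step[idx] = w
        # Reduction predicted by the quadratic model G*w + 0.5*(H + mu)*w^2.
        predicted_reduction -= G * w + 0.5 * (H + mu) * w * w

    actual_reduction = logistic_loss(y, f) - logistic_loss(y, f + step)
    rho = actual_reduction / max(predicted_reduction, 1e-12)
    if rho < eta1:                 # poor agreement: reject the step, damp more
        return f, mu * 4.0
    if rho > eta2:                 # good agreement: accept and relax damping
        return f + step, mu / 2.0
    return f + step, mu            # acceptable step, keep the damping
```

Iterating this step (a learning rate on the accepted step is omitted here for brevity) yields a boosted ensemble, and because the damped denominator never vanishes, the same routine would apply unchanged to losses whose second derivative can be zero or negative.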