Randomized coordinate descent (RCD) is a popular optimization algorithm with wide applications to machine learning problems, which has motivated extensive theoretical analysis of its convergence behavior. In contrast, no existing work studies how models trained by RCD generalize to test examples. In this paper, we initiate the generalization analysis of RCD by leveraging the powerful tool of algorithmic stability. We establish argument stability bounds of RCD for both convex and strongly convex objectives, from which we derive optimal generalization bounds by showing how to early stop the algorithm to trade off the estimation and optimization errors. Our analysis shows that RCD enjoys better stability than stochastic gradient descent.
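For reference, a typical RCD update (stated here in generic notation that may differ from the notation in the main text) draws a coordinate $i_t \in \{1,\dots,d\}$ uniformly at random at iteration $t$ and updates only that coordinate with step size $\eta_t$:
$$w_{t+1} = w_t - \eta_t \nabla_{i_t} f(w_t)\, e_{i_t},$$
where $e_{i_t}$ is the $i_t$-th standard basis vector and $\nabla_{i_t} f(w_t)$ is the corresponding partial derivative of the empirical objective $f$.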