Recently, several hybrid algorithms of pointwise and pairwise learning (PPL) have been formulated by employing the hybrid error metric of "pointwise loss + pairwise loss", and they have shown empirical effectiveness on feature selection, ranking, and recommendation tasks. However, to the best of our knowledge, the learning-theory foundations of PPL have not been addressed in existing work. In this paper, we fill this theoretical gap by investigating the generalization properties of PPL. After extending the definitions of algorithmic stability to the PPL setting, we establish high-probability generalization bounds for uniformly stable PPL algorithms. Moreover, we derive explicit convergence rates of stochastic gradient descent (SGD) and regularized risk minimization (RRM) for PPL by developing the stability analysis techniques of pairwise learning. In addition, we obtain refined generalization bounds for PPL by replacing uniform stability with on-average stability.
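To make the setting concrete, a minimal formalization of the hybrid error metric is the following sketch, where the pointwise loss $\ell$, the pairwise loss $g$, and the trade-off weight $\alpha$ are illustrative notation assumed here rather than the paper's exact definitions. Given a training sample $S = \{z_1, \ldots, z_n\}$, a PPL algorithm minimizes an empirical risk of the form

\[
R_S(w) \;=\; \frac{\alpha}{n}\sum_{i=1}^{n} \ell(w; z_i) \;+\; \frac{1-\alpha}{n(n-1)}\sum_{i \neq j} g(w; z_i, z_j), \qquad \alpha \in [0,1],
\]

so that $\alpha = 1$ recovers pure pointwise learning and $\alpha = 0$ recovers pure pairwise learning.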