In this paper, we propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) maximization that are applicable to deep learning. We propose new formulations of pAUC surrogate objectives by using distributionally robust optimization (DRO) to define the loss for each individual positive example. We consider two formulations of DRO: one based on conditional value at risk (CVaR), which yields a non-smooth but exact estimator of pAUC, and the other based on KL-divergence-regularized DRO, which yields an inexact but smooth (soft) estimator of pAUC. For both one-way and two-way pAUC maximization, we propose two algorithms, one for each of the two formulations, and prove their convergence. Experiments demonstrate the effectiveness of the proposed algorithms for pAUC maximization in deep learning on various datasets.
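To make the two surrogate formulations concrete, the following is a minimal sketch of the per-positive losses under standard DRO notation; the symbols used here ($\beta$ for the FPR upper bound, $\lambda$ for the KL-regularization parameter, $\ell$ for a pairwise surrogate loss) are notational assumptions and may differ from those in the body of the paper. For a positive example $x_i$ and a negative set $S_-$ of size $n_-$, the CVaR-based DRO loss is the exact but non-smooth estimator
\[
L_{\mathrm{CVaR}}(\mathbf{w}; x_i) = \min_{s \in \mathbb{R}} \; s + \frac{1}{\beta n_-} \sum_{x_j \in S_-} \big(\ell(\mathbf{w}; x_i, x_j) - s\big)_+,
\]
while the KL-divergence-regularized DRO loss is the inexact but smooth (soft) estimator
\[
L_{\mathrm{KL}}(\mathbf{w}; x_i) = \lambda \log \frac{1}{n_-} \sum_{x_j \in S_-} \exp\!\big(\ell(\mathbf{w}; x_i, x_j) / \lambda\big).
\]
Averaging either loss over the positive set $S_+$ of size $n_+$ gives the pAUC surrogate objective to be minimized, $\min_{\mathbf{w}} \frac{1}{n_+} \sum_{x_i \in S_+} L(\mathbf{w}; x_i)$.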