This paper studies the robustness of data valuation to noisy model performance scores. In particular, we find that the inherent randomness of the widely used stochastic gradient descent can cause existing data value notions (e.g., the Shapley value and the Leave-one-out error) to produce inconsistent data value rankings across different runs. To address this challenge, we first pose a formal framework within which one can measure the robustness of a data value notion. We show that the Banzhaf value, a value notion originating from the cooperative game theory literature, achieves the maximal robustness among all semivalues -- a class of value notions that satisfy crucial properties entailed by ML applications. We propose an algorithm to efficiently estimate the Banzhaf value based on the Maximum Sample Reuse (MSR) principle. We derive a lower bound on the sample complexity of Banzhaf value estimation, and we show that our MSR algorithm's sample complexity is close to this lower bound. Our evaluation demonstrates that the Banzhaf value outperforms the existing semivalue-based data value notions on several downstream ML tasks such as learning with weighted samples and noisy label detection. Overall, our study suggests that when the underlying ML algorithm is stochastic, the Banzhaf value is a promising alternative to the existing semivalue-based data value schemes given its computational advantage and ability to robustly differentiate data quality.
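To make the MSR principle concrete, the sketch below estimates Banzhaf values for a toy utility function. The key idea of MSR is that a single sampled subset S, drawn uniformly from all 2^n subsets, is reused in every player's estimate: phi_i is approximated as the mean utility of sampled sets containing i minus the mean utility of sampled sets not containing i. This is an illustrative sketch, not the paper's actual implementation; the function names and parameters are hypothetical, and a real utility would be the validation score of a model trained on the subset.

```python
import random

def msr_banzhaf(utility, n, num_samples=4000, seed=0):
    """Estimate Banzhaf values of n data points via Maximum Sample Reuse.

    utility: callable mapping a list of point indices to a real-valued score.
    Returns a list of estimated Banzhaf values, one per data point.
    """
    rng = random.Random(seed)
    # Running sums/counts of utilities for subsets that do / do not contain i.
    sum_in, cnt_in = [0.0] * n, [0] * n
    sum_out, cnt_out = [0.0] * n, [0] * n
    for _ in range(num_samples):
        # Draw S uniformly from all 2^n subsets: include each point w.p. 1/2.
        S = {i for i in range(n) if rng.random() < 0.5}
        u = utility(sorted(S))  # one model evaluation, reused for all n points
        for i in range(n):
            if i in S:
                sum_in[i] += u
                cnt_in[i] += 1
            else:
                sum_out[i] += u
                cnt_out[i] += 1
    return [
        (sum_in[i] / cnt_in[i] if cnt_in[i] else 0.0)
        - (sum_out[i] / cnt_out[i] if cnt_out[i] else 0.0)
        for i in range(n)
    ]

# Sanity check with an additive toy utility U(S) = |S|, for which the
# exact Banzhaf value of every point is 1.
estimates = msr_banzhaf(lambda S: len(S), n=5)
```

Because every sampled subset contributes to all n estimates, MSR needs only one utility evaluation per sample, in contrast to estimators that evaluate U(S ∪ {i}) and U(S) separately for each point.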