Statistical tests for dataset shift are prone to false alarms: they flag minor distributional differences even when sample coverage and predictive performance remain adequate. We propose instead a framework for detecting adverse dataset shifts based on outlier scores, $\texttt{D-SOS}$ for short. The $\texttt{D-SOS}$ null hypothesis holds that the new (test) sample is not substantively worse than the reference (training) sample, not that the two are identically distributed. The key idea is to reduce observations to outlier scores and to compare contamination rates at varying weighted thresholds. Users can define what \textit{worse} means in terms of relevant notions of outlyingness, including proxies for predictive performance. Compared with tests of equal distribution, our approach is uniquely tailored to serve as a robust metric for model monitoring and data validation. We demonstrate how versatile and practical $\texttt{D-SOS}$ is on a wide range of real and simulated datasets.
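To make the key idea concrete, a minimal sketch follows. It is not the reference implementation: the outlier scorer (scikit-learn's IsolationForest), the quadratic weighting of thresholds, the grid of 101 quantile thresholds, and the in-sample permutation null are all illustrative assumptions rather than the method's prescribed components.

```python
# Illustrative sketch of the D-SOS idea: reduce observations to outlier scores,
# then compare contamination rates across weighted thresholds, upweighting the
# outlying (high-score) region. Hypothetical names; simplified permutation null.

import numpy as np
from sklearn.ensemble import IsolationForest


def outlier_scores(x_train, x_test, random_state=0):
    """Score both samples with a detector fit on the reference (training) sample."""
    iso = IsolationForest(random_state=random_state).fit(x_train)
    # Negate so that larger scores mean more outlying.
    return -iso.score_samples(x_train), -iso.score_samples(x_test)


def weighted_contamination_gap(scores_train, scores_test):
    """Weighted comparison of contamination rates at varying thresholds.

    At each threshold drawn from the pooled scores, compare the share of test
    points above it to the share of training points above it; weights grow
    toward the extreme thresholds (illustrative quadratic weights).
    """
    pooled = np.concatenate([scores_train, scores_test])
    grid = np.linspace(0.0, 1.0, 101)
    thresholds = np.quantile(pooled, grid)
    weights = grid ** 2
    contam_test = np.array([(scores_test >= t).mean() for t in thresholds])
    contam_train = np.array([(scores_train >= t).mean() for t in thresholds])
    return np.sum(weights * (contam_test - contam_train)) / np.sum(weights)


def d_sos_test(x_train, x_test, n_permutations=1000, random_state=0):
    """One-sided permutation test: is the test sample substantively worse?"""
    rng = np.random.default_rng(random_state)
    s_train, s_test = outlier_scores(x_train, x_test, random_state)
    observed = weighted_contamination_gap(s_train, s_test)
    pooled = np.concatenate([s_train, s_test])
    n_train = len(s_train)
    null_stats = np.empty(n_permutations)
    for i in range(n_permutations):
        perm = rng.permutation(pooled)
        null_stats[i] = weighted_contamination_gap(perm[:n_train], perm[n_train:])
    # Large statistics indicate a more contaminated (worse) test sample.
    p_value = (1 + np.sum(null_stats >= observed)) / (1 + n_permutations)
    return observed, p_value
```

Under these assumptions, `d_sos_test(x_reference, x_new)` returns the weighted statistic and a one-sided p-value; a small p-value flags an adverse shift, i.e., a test sample that scores as substantively more outlying than the reference. Swapping the isolation forest for another scorer (e.g., a proxy for predictive loss) changes what "worse" means without changing the test.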