As artificial intelligence (AI) and machine learning become central to human life, their potential harms grow more vivid. In the presence of such drawbacks, a critical question to address before using individual predictions for critical decision-making is whether those predictions are reliable. Aligned with recent efforts on data-centric AI, this paper proposes a novel approach, complementary to the existing work on trustworthy AI, that addresses the reliability question through the lens of data. Specifically, it associates data sets with distrust quantifications that specify their scope of use for individual predictions. It develops novel algorithms for efficient and effective computation of distrust values. The proposed algorithms learn the necessary components of the measures from the data itself; they are sublinear, which makes them scalable to very large and multi-dimensional settings. Furthermore, an estimator is designed so that no data access is needed at query time. Besides theoretical analyses, the algorithms are evaluated experimentally on multiple real and synthetic data sets and on different tasks. The results show a consistent correlation between distrust values and model performance, highlighting the necessity of dismissing prediction outcomes for cases with high distrust values, at least for critical decisions.
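To make the thresholding idea concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of how a data-derived distrust score could gate individual predictions. It uses average distance to the k nearest training points as a stand-in distrust proxy; the function name `knn_distrust` and the threshold are assumptions for illustration only.

```python
import numpy as np

def knn_distrust(train_X, query, k=5):
    """Hypothetical distrust proxy: mean distance from a query point
    to its k nearest training points. Larger values suggest the query
    falls outside the data set's scope of use."""
    dists = np.linalg.norm(train_X - query, axis=1)
    return float(np.sort(dists)[:k].mean())

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 3))  # training data near the origin

in_scope = knn_distrust(train, np.zeros(3))        # query inside the data
out_of_scope = knn_distrust(train, np.full(3, 10.0))  # query far outside

# A decision rule would dismiss the prediction when distrust exceeds
# a chosen threshold, deferring to a human for critical decisions.
assert in_scope < out_of_scope
```

A real system would calibrate the threshold against observed model error, consistent with the paper's finding that distrust values correlate with model performance.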