A major challenge in machine learning applications is coping with mismatches between the datasets used during development and those obtained in real-world applications. Such mismatches can lead to inaccurate predictions and errors, resulting in poor product quality and unreliable systems. In this study, we propose StyleDiff, which informs developers of the differences between two datasets to support the steady development of machine learning systems. Using disentangled image spaces obtained from recently proposed generative models, StyleDiff compares the two datasets by focusing on attributes in the images and provides an easy-to-understand analysis of the differences between them. StyleDiff runs in $O(dN \log N)$ time, where $N$ is the size of the datasets and $d$ is the number of attributes, enabling its application to large datasets. We demonstrate that StyleDiff accurately detects differences between datasets and presents them in an understandable format, using, for example, driving-scene datasets.
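The stated $O(dN \log N)$ cost is what one would expect if each of the $d$ attributes is compared via a one-dimensional distribution distance computed by sorting, which takes $O(N \log N)$ per attribute. A minimal sketch under that assumption (the function name `attribute_diff` and the equal-size, Gaussian toy datasets are illustrative, not from the paper):

```python
import numpy as np

def attribute_diff(a, b):
    """Per-attribute 1D Wasserstein distance between two equal-size datasets.

    a, b: (N, d) arrays of per-image attribute values, assumed to come
    from encoding each image into d disentangled attributes.
    Sorting each attribute column costs O(N log N), so the full
    comparison runs in O(d N log N), matching the stated complexity.
    """
    a_sorted = np.sort(a, axis=0)
    b_sorted = np.sort(b, axis=0)
    # Mean absolute difference of sorted samples = empirical 1-Wasserstein
    # distance per attribute; returns one mismatch score per attribute.
    return np.abs(a_sorted - b_sorted).mean(axis=0)

rng = np.random.default_rng(0)
dev = rng.normal(0.0, 1.0, size=(1000, 3))   # "development" dataset
real = dev.copy()
real[:, 1] += 2.0                            # shift attribute 1 in "real-world" data
diff = attribute_diff(dev, real)             # → [0., 2., 0.]
```

Here the shifted attribute stands out with a score of 2.0 while the unchanged attributes score 0, illustrating how an attribute-wise comparison localizes the dataset mismatch to a specific attribute rather than reporting a single opaque distance.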