We study the problem of discovering joinable datasets at scale. That is, how to automatically discover pairs of attributes that can be joined across a massive collection of independent, heterogeneous datasets. Exact (e.g., based on distinct values) and hash-based (e.g., based on locality-sensitive hashing) techniques require indexing the entire dataset, which is unattainable at scale. To overcome this issue, we approach the problem from a learning perspective, relying on profiles. These are succinct representations that capture the underlying characteristics of the schemata and data values of datasets, and they can be efficiently extracted in a distributed and parallel fashion. Profiles are then compared to predict the quality of a join operation between a pair of attributes from different datasets. In contrast to the state of the art, we define a novel notion of join quality based on a metric that considers both the containment and the cardinality proportion between candidate attributes. We implement our approach in a system called NextiaJD, and present extensive experiments showing the predictive performance and computational efficiency of our method. Our experiments show that NextiaJD achieves predictive performance similar to that of hash-based methods while scaling up to larger volumes of data. Moreover, NextiaJD produces considerably fewer false positives, a desirable feature at scale.
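To make the two ingredients of the quality notion concrete, the following is a minimal sketch, assuming containment is defined as |A ∩ B| / |A| and the cardinality proportion as min(|A|, |B|) / max(|A|, |B|) over the distinct values of two candidate attributes; the weighting and threshold-free combination below are illustrative and not NextiaJD's exact definition.

```python
def containment(a: set, b: set) -> float:
    """Fraction of A's distinct values that also appear in B."""
    if not a:
        return 0.0
    return len(a & b) / len(a)

def cardinality_proportion(a: set, b: set) -> float:
    """Ratio of the smaller to the larger distinct-value cardinality."""
    if not a or not b:
        return 0.0
    return min(len(a), len(b)) / max(len(a), len(b))

def join_quality(a: set, b: set, c_weight: float = 0.7) -> float:
    """Combine both signals into a single score in [0, 1].

    The weight c_weight is a hypothetical parameter: a pair with high
    containment but wildly different cardinalities scores lower than a
    pair where both signals are high.
    """
    return (c_weight * containment(a, b)
            + (1 - c_weight) * cardinality_proportion(a, b))

# Example: both attribute pairs have full containment, but the second
# pair's comparable cardinalities yield a higher quality score.
countries = {"ES", "FR", "DE", "IT"}
many_codes = {"ES", "FR", "DE", "IT", "PT", "NL", "BE", "AT", "PL", "SE"}
few_codes = {"ES", "FR", "DE", "IT", "PT"}
print(join_quality(countries, many_codes))  # high containment, low proportion
print(join_quality(countries, few_codes))   # high containment, high proportion
```

Note that such a score only requires per-attribute summaries (e.g., distinct-value sketches) rather than a full index, which is what makes a profile-based approach amenable to distributed extraction.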