Owing to its usefulness for data enrichment in data analysis tasks, joinable table discovery has become an important operation in data lake management. Existing approaches target equi-joins, the most common way of combining tables for creating a unified view, or semantic joins, which tolerate misspellings and different formats to deliver more join results. They are either exact solutions whose running time is linear in the sizes of the query column and the target table repository, or approximate solutions lacking precision. In this paper, we propose Deepjoin, a deep learning model for accurate and efficient joinable table discovery. Our solution is an embedding-based retrieval method that employs a pre-trained language model (PLM) and is designed as one framework serving both equi-joins and semantic joins. We propose a set of contextualization options to transform column contents into a text sequence. The PLM reads the sequence and is fine-tuned to embed columns into vectors such that columns are expected to be joinable if they are close to each other in the vector space. Since the output of the PLM is fixed in length, the subsequent search procedure becomes independent of the column size. With a state-of-the-art approximate nearest neighbor search algorithm, the search time is logarithmic in the repository size. To train the model, we devise techniques for preparing training data as well as for data augmentation. The experiments on real datasets demonstrate that by training on a small subset of a corpus, Deepjoin generalizes to large datasets and its precision consistently outperforms that of other approximate solutions. Deepjoin is even more accurate than an exact solution to semantic joins when evaluated with labels from experts. Moreover, when equipped with a GPU, Deepjoin is up to two orders of magnitude faster than existing solutions.
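To make the embedding-based retrieval pipeline concrete, the following is a minimal sketch, not the authors' implementation: it assumes an off-the-shelf encoder (all-MiniLM-L6-v2 via sentence-transformers) in place of the fine-tuned PLM, a simple "column name: cell values" serialization standing in for the paper's contextualization options, and an HNSW index from hnswlib for approximate nearest neighbor search. All names, datasets, and parameters below are illustrative.

```python
# Sketch of embedding-based joinable column retrieval (illustrative only).
# Assumption: an off-the-shelf encoder replaces the fine-tuned PLM, and a
# naive "name: cells" contextualization replaces the paper's options.
from sentence_transformers import SentenceTransformer
import hnswlib
import numpy as np

def contextualize(column_name: str, cells: list[str], max_cells: int = 50) -> str:
    """Serialize a column into a text sequence for the encoder."""
    return f"{column_name}: " + ", ".join(cells[:max_cells])

# Toy repository of columns: (table.column -> cell values).
repository = {
    "countries.name": ["Germany", "France", "Japan", "Brazil"],
    "cities.country": ["germany", "france", "japan", "brasil"],
    "products.sku": ["A-001", "B-172", "C-390"],
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the fine-tuned PLM
texts = [contextualize(name, cells) for name, cells in repository.items()]
embeddings = encoder.encode(texts, normalize_embeddings=True)

# Build an HNSW index so search cost grows roughly logarithmically in the
# repository size, independent of how many cells each column contains.
dim = embeddings.shape[1]
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=len(texts), ef_construction=200, M=16)
index.add_items(embeddings, np.arange(len(texts)))
index.set_ef(50)

# Query: embed the query column once, then search for its nearest neighbors.
query = contextualize("nations.country_name", ["germnay", "France", "Japan"])
query_emb = encoder.encode([query], normalize_embeddings=True)
labels, distances = index.knn_query(query_emb, k=2)
names = list(repository.keys())
for label, dist in zip(labels[0], distances[0]):
    print(f"{names[label]}  (cosine distance {dist:.3f})")
```

Under this setup, the misspelled and differently cased country values can still retrieve the joinable column, which is the behavior the abstract describes for semantic joins; an exact equi-join baseline would miss them.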