Many natural language processing and information retrieval problems can be formalized as the task of semantic matching. Existing work in this area has largely focused on matching between short texts (e.g., question answering), or between a short and a long text (e.g., ad-hoc retrieval). Semantic matching between long-form documents, which has many important applications like news recommendation, related article recommendation and document clustering, remains relatively under-explored and needs more research effort. In recent years, self-attention-based models like Transformers and BERT have achieved state-of-the-art performance in the task of text matching. These models, however, are still limited to short text like a few sentences or one paragraph due to the quadratic computational complexity of self-attention with respect to input text length. In this paper, we address the issue by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for long-form document matching. Our model contains several innovations to adapt self-attention models for longer text input. In order to better capture sentence-level semantic relations within a document, we pre-train the model with a novel masked sentence block language modeling task in addition to the masked word language modeling task used by BERT. Our experimental results on several benchmark datasets for long-form document matching show that our proposed SMITH model outperforms the previous state-of-the-art models including hierarchical attention, multi-depth attention-based hierarchical recurrent neural network, and BERT. Compared to BERT-based baselines, our model is able to increase the maximum input text length from 512 to 2048. We will open source a Wikipedia-based benchmark dataset, code and a pre-trained checkpoint to accelerate future research on long-form document matching.
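To make the hierarchical idea concrete, below is a minimal, hypothetical sketch of a two-level (sentence-block then document) Transformer encoder used in a Siamese fashion. It is not the authors' SMITH implementation; the class and parameter names are illustrative, and it only demonstrates how applying self-attention within fixed-size sentence blocks and then across block representations avoids a single quadratic-cost attention over the full token sequence.

```python
import torch
import torch.nn as nn


class HierarchicalDocEncoder(nn.Module):
    """Two-level Transformer encoder sketch (sentence blocks -> document).

    Self-attention runs within each fixed-length block (quadratic in the block
    length) and then over the block representations (quadratic in the number of
    blocks), instead of once over all tokens of a long document.
    """

    def __init__(self, vocab_size=30522, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.block_encoder = nn.TransformerEncoder(block_layer, num_layers)
        doc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers)

    def forward(self, token_ids):
        # token_ids: (batch, num_blocks, block_len), the document pre-split
        # into fixed-length sentence blocks.
        b, n, l = token_ids.shape
        x = self.embed(token_ids.view(b * n, l))      # (b*n, l, d_model)
        x = self.block_encoder(x)                     # attention within each block
        block_repr = x[:, 0, :].view(b, n, -1)        # one vector per block
        doc = self.doc_encoder(block_repr)            # attention across blocks
        return doc.mean(dim=1)                        # single document embedding


# Siamese usage: encode two documents and score them with cosine similarity.
encoder = HierarchicalDocEncoder()
doc_a = torch.randint(0, 30522, (1, 8, 32))  # 8 blocks of 32 tokens each
doc_b = torch.randint(0, 30522, (1, 8, 32))
score = torch.cosine_similarity(encoder(doc_a), encoder(doc_b))
```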