As the number of open and shared scientific datasets on the Internet grows under the open science movement, retrieving these datasets efficiently has become a crucial task in information retrieval (IR) research. In recent years, the development of large pre-trained models, in particular the pre-training and fine-tuning paradigm, in which large models are pre-trained and then fine-tuned on downstream tasks, has provided new solutions for IR matching tasks. In this study, we retain the original BERT token embeddings in the embedding layer, improve the Sentence-BERT structure in the model layer by introducing the SimCSE and K-Nearest Neighbors methods, and adopt the CoSENT loss function in the optimization phase to optimize the target output. Comparative experiments and ablation studies show that our model outperforms competing models on both public and self-built datasets. This study explores and validates the feasibility and efficiency of pre-training techniques for semantic retrieval of Chinese scientific datasets.
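To make the optimization step concrete, the sketch below illustrates the commonly cited CoSENT formulation, which encourages positive sentence pairs to receive higher cosine similarity than negative pairs. This is a minimal illustrative implementation in PyTorch, not the authors' code; the function name, the scale factor of 20, and the example inputs are assumptions.

```python
import torch


def cosent_loss(cos_sim: torch.Tensor, labels: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Illustrative CoSENT loss: log(1 + sum exp(scale * (s_neg - s_pos))).

    cos_sim: (batch,) cosine similarities of sentence pairs.
    labels:  (batch,) 1 for semantically similar pairs, 0 otherwise.
    """
    s = cos_sim * scale
    # Pairwise differences s_i - s_j for every pair of examples in the batch.
    diff = s[:, None] - s[None, :]
    # Keep only entries where example i is a negative pair and example j is a positive pair,
    # i.e. the cases in which s_i should be lower than s_j.
    mask = labels[:, None] < labels[None, :]
    diff = diff - (~mask).float() * 1e12  # push invalid pairs to -inf so exp() vanishes
    # Prepend a zero so the result equals log(1 + sum of exponentials).
    diff = torch.cat([torch.zeros(1, device=diff.device), diff.view(-1)])
    return torch.logsumexp(diff, dim=0)


# Example: similarities produced by a sentence encoder (e.g. Sentence-BERT) for
# two positive pairs and two negative pairs (values are made up for illustration).
sims = torch.tensor([0.8, 0.7, 0.3, 0.4])
labs = torch.tensor([1, 1, 0, 0])
loss = cosent_loss(sims, labs)
```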