Distantly Supervised Relation Extraction (DSRE) remains a long-standing challenge in NLP, where models must learn from noisy bag-level annotations while making sentence-level predictions. While existing state-of-the-art (SoTA) DSRE models rely on task-specific training, their integration with in-context learning (ICL) using large language models (LLMs) remains underexplored. A key challenge is that the LLM may not learn relation semantics correctly due to the noisy annotations. In response, we propose HYDRE -- a HYbrid Distantly Supervised Relation Extraction framework. It first uses a trained DSRE model to identify the top-k candidate relations for a given test sentence, then applies a novel dynamic exemplar retrieval strategy that extracts reliable, sentence-level exemplars from the training data; these exemplars are provided in the LLM prompt to produce the final relation(s). We further extend HYDRE to cross-lingual settings for RE in low-resource languages. Using available English DSRE training data, we evaluate all methods on English as well as a newly curated benchmark covering four diverse low-resource Indic languages -- Oriya, Santali, Manipuri, and Tulu. HYDRE achieves up to 20 F1 point gains in English and, on average, 17 F1 points on Indic languages over prior SoTA DSRE models. Detailed ablations demonstrate HYDRE's efficacy compared to other prompting strategies.
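To make the two-stage flow concrete, the following is a minimal sketch of the inference pipeline the abstract describes: a trained DSRE model narrows the label space to top-k candidate relations, exemplars for those candidates are retrieved from the training data, and an LLM picks the final relation(s) via in-context learning. All component names here (`dsre_top_k`, `retrieve_exemplars`, `call_llm`) are hypothetical placeholders for illustration, not the authors' actual interfaces.

```python
# Illustrative sketch only: the callables below are assumed placeholders,
# not the HYDRE implementation.
from typing import Callable, List, Tuple


def hydre_predict(
    sentence: str,
    dsre_top_k: Callable[[str, int], List[str]],                       # trained DSRE model -> top-k candidate relations
    retrieve_exemplars: Callable[[str, str], List[Tuple[str, str]]],   # dynamic exemplar retrieval from training data
    call_llm: Callable[[str], str],                                    # LLM used for in-context learning
    k: int = 3,
) -> str:
    """Return the relation(s) the LLM outputs for one test sentence."""
    # Step 1: narrow the label space with the trained DSRE model.
    candidates = dsre_top_k(sentence, k)

    # Step 2: gather reliable sentence-level exemplars for each candidate relation.
    exemplar_lines = []
    for relation in candidates:
        for ex_sentence, ex_relation in retrieve_exemplars(relation, sentence):
            exemplar_lines.append(f"Sentence: {ex_sentence}\nRelation: {ex_relation}")

    # Step 3: assemble the in-context prompt and let the LLM output the final relation(s).
    prompt = (
        "Choose the relation(s) expressed in the test sentence from: "
        + ", ".join(candidates)
        + "\n\n"
        + "\n\n".join(exemplar_lines)
        + f"\n\nTest sentence: {sentence}\nRelation:"
    )
    return call_llm(prompt)
```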