Human reading comprehension often requires reasoning about event semantic relations in narratives, a task represented by event-centric question answering (QA). To address event-centric QA, we propose a novel QA model with contrastive learning and invertible event transformation, called TranCLR. Our model uses an invertible transformation matrix to project the semantic vectors of events into a common event embedding space, trained with contrastive learning, thereby naturally injecting event semantic knowledge into mainstream QA pipelines. The transformation matrix is fine-tuned on the annotated event relation types between events occurring in questions and those in answers, using event-aware question vectors. Experimental results on the Event Semantic Relation Reasoning (ESTER) dataset show significant improvements in both generative and extractive settings over strong existing baselines, with gains of over 8.4% in token-level F1 and 3.0% in Exact Match (EM) score under the multi-answer setting. Qualitative analysis confirms the high quality of the answers generated by TranCLR, demonstrating the feasibility of injecting event knowledge into QA model learning. Our code and models are available at https://github.com/LuJunru/TranCLR.
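The abstract's core idea pairs an invertible projection of event vectors with contrastive training. A minimal sketch of that idea (not the authors' implementation; the orthogonal matrix, toy vectors, and InfoNCE-style loss here are illustrative assumptions) could look like:

```python
import numpy as np

def orthogonal_matrix(dim, seed=0):
    # A QR decomposition of a random matrix yields an orthogonal,
    # and therefore invertible, transformation matrix (a stand-in
    # for TranCLR's learned invertible transformation).
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def info_nce_loss(anchors, positives, temperature=0.1):
    # Standard InfoNCE-style contrastive loss: each anchor is pulled
    # toward its own positive and pushed away from in-batch negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

dim = 8
W = orthogonal_matrix(dim)
# Invertibility check: for an orthogonal matrix, W @ W.T = I.
assert np.allclose(W @ W.T, np.eye(dim))

# Toy "event" vectors standing in for encoder outputs: answer events
# are small perturbations of the matching question events.
rng = np.random.default_rng(1)
question_events = rng.standard_normal((4, dim))
answer_events = question_events + 0.05 * rng.standard_normal((4, dim))

# Project both sides into the shared event embedding space, then
# score them with the contrastive objective.
loss = info_nce_loss(question_events @ W, answer_events @ W)
print(f"contrastive loss: {loss:.4f}")
```

In the actual model the transformation is trained jointly with the QA pipeline rather than fixed as here; this sketch only illustrates how an invertible projection and a contrastive objective fit together.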