Event mentions in text correspond to real-world events of varying degrees of granularity. The task of subevent detection aims to resolve this granularity issue by recognizing the membership of multi-granular events in event complexes. Since knowing the span of the descriptive context of an event complex helps infer the membership of events, we propose the task of event-based text segmentation (EventSeg) as an auxiliary task to improve learning for subevent detection. To bridge the two tasks, we propose an approach to learning and enforcing constraints that capture the dependencies between subevent detection and EventSeg prediction, and that guide the model toward globally consistent inference. Specifically, we adopt Rectifier Networks for constraint learning and then convert the learned constraints into a regularization term in the loss function of the neural model. Experimental results show that the proposed method outperforms baseline methods by 2.3% and 2.5% on HiEve and IC, two benchmark datasets for subevent detection, respectively, while also achieving decent performance on EventSeg prediction.
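The sketch below is a minimal illustration (not the authors' implementation) of the constraint-learning idea summarized above: a Rectifier Network whose hidden units each encode a learned linear constraint over a joint label assignment, and a helper that reuses the trained network as a soft regularization term on the task model's predicted label probabilities. All names (RectifierNet, regularized_loss, lam) are illustrative assumptions, and the rectifier is assumed to have been pre-trained separately on feasible versus infeasible assignments.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RectifierNet(nn.Module):
    # Each hidden unit encodes one linear constraint w.y + b >= 0 over a label
    # assignment y; an assignment is feasible iff every constraint is satisfied.
    def __init__(self, num_labels: int, num_constraints: int):
        super().__init__()
        self.constraints = nn.Linear(num_labels, num_constraints)

    def violation(self, y: torch.Tensor) -> torch.Tensor:
        # Hinge per constraint: zero when w.y + b >= 0, positive otherwise;
        # summed over constraints to give a total violation score per example.
        return F.relu(-self.constraints(y)).sum(dim=-1)

def regularized_loss(task_logits, gold, rectifier, lam=0.1):
    # Task cross-entropy plus a soft penalty from the (frozen, pre-trained)
    # rectifier applied to the relaxed assignment, i.e. predicted probabilities.
    # Only the task model's parameters are assumed to be in the optimizer.
    probs = task_logits.softmax(dim=-1)
    return F.cross_entropy(task_logits, gold) + lam * rectifier.violation(probs).mean()

The weight lam on the penalty term is a hypothetical hyperparameter; in practice it would be tuned on development data, and the penalty drives the task model toward label assignments that the learned constraints mark as globally consistent.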