We propose a new method for the event extraction (EE) task based on an imitation learning framework, specifically inverse reinforcement learning (IRL) via a generative adversarial network (GAN). The GAN estimates proper rewards according to the difference between the actions taken by the expert (i.e., the ground truth) and the agent among the complicated states of the environment. The EE task benefits from these dynamic rewards because instances and labels vary in difficulty, so the gains are expected to be diverse; for example, an ambiguous but correctly detected trigger or argument should receive a high gain. Traditional RL models, in contrast, usually neglect such differences and pay equal attention to all instances. Moreover, our experiments demonstrate that the proposed framework outperforms state-of-the-art methods without explicit feature engineering.
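To make the reward-estimation idea concrete, below is a minimal sketch of a GAIL-style discriminator reward in PyTorch. The class and function names (`Discriminator`, `discriminator_loss`, `estimate_reward`), the dimensions `STATE_DIM` and `NUM_LABELS`, and the training outline are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal, hypothetical sketch of GAN-based reward estimation for IRL
# (GAIL-style). All names and dimensions below are illustrative
# assumptions, not the authors' actual implementation.
import torch
import torch.nn as nn

STATE_DIM = 128   # assumed size of the encoded token/sentence state
NUM_LABELS = 34   # assumed size of the trigger/argument label set

class Discriminator(nn.Module):
    """Scores (state, action) pairs: high when an action looks expert-like."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_LABELS, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # D(s, a) in (0, 1)
        )

    def forward(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))

def discriminator_loss(disc, expert_s, expert_a, agent_s, agent_a):
    """Train D to separate expert (ground-truth) actions from agent actions."""
    bce = nn.BCELoss()
    d_expert = disc(expert_s, expert_a)
    d_agent = disc(agent_s, agent_a)
    return (bce(d_expert, torch.ones_like(d_expert))
            + bce(d_agent, torch.zeros_like(d_agent)))

def estimate_reward(disc, state, action_onehot, eps=1e-8):
    """Dynamic reward r(s, a) = log D(s, a): an ambiguous instance that the
    agent nevertheless labels in an expert-like way scores a high D, and
    therefore a high reward, while wrong actions score lower."""
    with torch.no_grad():
        return torch.log(disc(state, action_onehot) + eps)

# Training alternates two steps:
#   1. update D with discriminator_loss on expert vs. agent actions;
#   2. update the agent's policy with any RL algorithm, using
#      estimate_reward as the per-instance, per-label reward signal.
```

The key contrast with fixed-reward RL models is that r(s, a) here is re-estimated as D improves, so hard but correctly labeled instances can earn larger gains than easy ones.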