We study the problem of integrating syntactic information from constituency trees into a neural model for the Frame-semantic parsing sub-tasks of Target Identification (TI), Frame Identification (FI), and Semantic Role Labeling (SRL). We use a Graph Convolutional Network to learn dedicated representations of constituents, such that each constituent is profiled as the grammar production rule it corresponds to. We leverage these representations to build syntactic features for each word in a sentence, computed as the sum of the representations of all constituents on the path between the word and a task-specific node in the tree, e.g. the target predicate for SRL. Our approach improves state-of-the-art results on TI and SRL by ~1 and ~3.5 points, respectively (an additional ~2.5 points are gained with BERT as input), when tested on FrameNet 1.5, while yielding results on the CoNLL05 dataset comparable to those of other syntax-aware systems.
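To make the path-sum feature concrete, here is a minimal sketch (not the authors' code) of how a word's syntactic feature could be computed as the sum of GCN-derived constituent embeddings along the tree path to a task-specific node, e.g. the target predicate for SRL. The names `parent`, `constituent_emb`, `path_between`, and the feature dimension are hypothetical assumptions for illustration.

```python
import numpy as np

def path_to_root(node, parent):
    """List of constituent nodes from `node` up to the tree root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def path_between(word_leaf, target_leaf, parent):
    """Constituents on the path between two leaves, via their lowest common ancestor."""
    a = path_to_root(word_leaf, parent)
    b = path_to_root(target_leaf, parent)
    common = set(a) & set(b)
    lca = next(n for n in a if n in common)          # first shared ancestor
    return a[:a.index(lca) + 1] + b[:b.index(lca)]   # include LCA only once

def syntactic_feature(word_leaf, target_leaf, parent, constituent_emb, dim=128):
    """Sum the (GCN-derived) embeddings of all constituents on the path."""
    feature = np.zeros(dim)
    for node in path_between(word_leaf, target_leaf, parent):
        feature += constituent_emb.get(node, np.zeros(dim))
    return feature
```

In this sketch each word's feature depends on the target node, so features would be recomputed per (word, task-specific node) pair; the actual model details follow in the paper.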