Deep neural models achieve some of the best results for semantic role labeling (SRL). Inspired by instance-based learning, which utilizes nearest neighbors to handle low-frequency, context-specific training samples, we investigate the use of memory adaptation techniques in deep neural models. We propose a parameterized neighborhood memory adaptive (PNMA) method that uses a parameterized representation of the nearest neighbors of tokens in a memory of activations and makes predictions based on the most similar samples in the training data. We empirically show that PNMA consistently improves the SRL performance of the base model irrespective of the type of word embeddings. Coupled with contextualized word embeddings derived from BERT, PNMA improves over existing models on both span-based and dependency-based semantic parsing datasets, especially on out-of-domain text, reaching F1 scores of 80.2 and 84.97 on the CoNLL2005 and CoNLL2009 datasets, respectively.
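The core retrieval step described above can be illustrated with a minimal sketch: store token activations and their gold labels from the training data, then at prediction time interpolate the labels of the k most similar stored activations. The class name, cosine similarity, and the softmax weighting over neighbor similarities are illustrative assumptions, not the paper's exact formulation.

```python
import math

class NeighborhoodMemory:
    """Sketch of nearest-neighbor lookup over a memory of activations,
    in the spirit of PNMA; the weighting scheme is an assumption."""

    def __init__(self, k=2, temperature=1.0):
        self.k = k
        self.temperature = temperature
        self.keys = []    # stored token activations (lists of floats)
        self.labels = []  # corresponding gold label indices

    def write(self, activation, label):
        """Store one training-set activation and its label."""
        self.keys.append(activation)
        self.labels.append(label)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def read(self, query, num_classes):
        """Aggregate the k most similar stored labels into a distribution."""
        sims = [self._cosine(query, key) for key in self.keys]
        top = sorted(range(len(sims)), key=lambda i: -sims[i])[: self.k]
        # softmax over neighbor similarities gives interpolation weights
        exps = [math.exp(sims[i] / self.temperature) for i in top]
        z = sum(exps)
        dist = [0.0] * num_classes
        for w, i in zip(exps, top):
            dist[self.labels[i]] += w / z
        return dist
```

In a full model, the stored keys would be hidden-layer activations of the base SRL network and the retrieved distribution would be combined with the network's own prediction; here the memory is queried directly for clarity.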