Sequence-to-Sequence (S2S) models have achieved remarkable success on various text generation tasks. However, learning complex structures with S2S models remains challenging, as external neural modules and additional lexicons are often supplemented to predict non-textual outputs. We present a systematic study of S2S modeling using constrained decoding on four core tasks: part-of-speech tagging, named entity recognition, constituency parsing, and dependency parsing, with the goal of developing efficient exploitation methods that cost zero extra parameters. In particular, three lexically diverse linearization schemas and corresponding constrained decoding methods are designed and evaluated. Experiments show that although more lexicalized schemas yield longer output sequences that require heavier training, their sequences are closer to natural language and are therefore easier to learn. Moreover, S2S models using our constrained decoding outperform other S2S approaches that rely on external resources. Our best models perform better than or comparably to the state of the art on all four tasks, highlighting the promise of S2S models for generating non-sequential structures.
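To make the idea of pairing a linearization schema with constrained decoding concrete, the following Python sketch (our own illustration, not the paper's implementation) shows a hypothetical label-only linearization for POS tagging together with a function that restricts the decoder's next tokens to valid tags and forces termination once every source token is tagged. The names linearize_pos, allowed_next_tokens, and the toy TAGSET are assumptions made for illustration only.

# Minimal sketch, assuming a label-only linearization schema for POS tagging
# and a next-token filter of the kind a constrained decoder could apply.

TAGSET = ["DT", "NN", "VBZ", "JJ", "."]  # hypothetical tag vocabulary

def linearize_pos(tokens, tags):
    """Turn (token, tag) pairs into a flat target string.
    A more lexicalized schema would also copy the source tokens."""
    assert len(tokens) == len(tags)
    return " ".join(tags)

def allowed_next_tokens(prefix_tags, source_tokens):
    """Constrained decoding: at each step the decoder may only emit a
    valid tag, and must emit end-of-sequence once all tokens are tagged."""
    if len(prefix_tags) >= len(source_tokens):
        return ["</s>"]   # force end-of-sequence
    return TAGSET         # otherwise any tag in the closed vocabulary

if __name__ == "__main__":
    tokens = ["The", "cat", "sleeps", "."]
    tags = ["DT", "NN", "VBZ", "."]
    print(linearize_pos(tokens, tags))                 # "DT NN VBZ ."
    print(allowed_next_tokens(["DT", "NN"], tokens))   # full tag set
    print(allowed_next_tokens(tags, tokens))           # ["</s>"]

Because the output vocabulary at each step is restricted to well-formed continuations, the decoder can never produce a structurally invalid sequence, which is what allows such constraints to be enforced without any additional parameters.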