Large language models have ushered in a golden age of semantic parsing. The seq2seq paradigm supports open-schema and abstractive attribute and relation extraction given only small amounts of fine-tuning data. Language model pretraining has simultaneously enabled great strides in natural language inference: reasoning about entailment and implication in free text. These advances motivate us to construct ImPaKT, a dataset for open-schema information extraction consisting of around 2,500 text snippets from the C4 corpus in the shopping domain (product buying guides), professionally annotated with extracted attributes, types, attribute summaries (attribute schema discovery from idiosyncratic text), many-to-one relations between compound and atomic attributes, and implication relations. We release this data in the hope that it will be useful for fine-tuning semantic parsers for information extraction and knowledge base construction across a variety of domains. We evaluate the power of this approach by fine-tuning the open-source UL2 language model on a subset of the dataset, extracting a set of implication relations from a corpus of product buying guides, and conducting human evaluations of the resulting predictions.
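To make the annotation layers concrete, the following minimal sketch shows what a single annotated record might look like. The snippet text, field names, and values are illustrative assumptions chosen to mirror the annotation types listed above; they are not the dataset's actual released schema.

```python
# Hypothetical sketch of one ImPaKT-style annotated record.
# All field names and values below are illustrative assumptions,
# not the dataset's actual released format.

record = {
    # Source text snippet (e.g., from a product buying guide in C4).
    "snippet": "A waterproof jacket with taped seams keeps you dry in heavy rain.",

    # Open-schema attribute extraction with types.
    "attributes": [
        {"span": "waterproof", "type": "material property"},
        {"span": "taped seams", "type": "construction feature"},
    ],

    # Attribute summary: schema discovery from idiosyncratic phrasing.
    "attribute_summaries": {
        "taped seams": "seam sealing",
    },

    # Many-to-one relation: a compound attribute linked to atomic attributes.
    "compound_to_atomic": {
        "waterproof jacket with taped seams": ["waterproof", "taped seams"],
    },

    # Implication relation between attributes and outcomes.
    "implications": [
        {"premise": "taped seams", "conclusion": "keeps you dry in heavy rain"},
    ],
}

if __name__ == "__main__":
    for imp in record["implications"]:
        print(f"{imp['premise']} -> {imp['conclusion']}")
```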