We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to answer reading comprehension questions zero-shot. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions and, as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse approximately 16% of utterances in the MTOP dataset without requiring any annotated data.
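The QA-based decomposition described above can be pictured with a minimal sketch, shown below. The question templates, the `ask_llm` callable, the `toy_llm` stand-in, and the intent/slot inventory are hypothetical placeholders chosen for illustration under stated assumptions; they are not the paper's actual prompts, models, or interfaces.

```python
# Illustrative sketch of QA-based decomposition for zero-shot semantic parsing.
# All names here (ask_llm, toy_llm, UNANSWERABLE, the intent/slot inventory)
# are assumptions for the example, not the paper's method or API.

from typing import Callable, Dict, List

UNANSWERABLE = "unanswerable"  # assumed marker emitted when a slot question has no answer

def parse_utterance(
    utterance: str,
    intents: List[str],
    slots_per_intent: Dict[str, List[str]],
    ask_llm: Callable[[str, str], str],
) -> Dict[str, object]:
    """Map an utterance to a flat intent/slot meaning representation via QA."""
    # 1) Abstractive QA for the top-level intent: ask which of the known
    #    intent labels the utterance expresses.
    intent_question = (
        "Which of the following intents does the user express: "
        + ", ".join(intents) + "?"
    )
    intent = ask_llm(utterance, intent_question).strip()

    # 2) Extractive QA for each slot of the predicted intent. Slots whose
    #    question is judged unanswerable are treated as absent.
    slot_values: Dict[str, str] = {}
    for slot in slots_per_intent.get(intent, []):
        slot_question = f"What is the {slot.replace('_', ' ')} mentioned by the user?"
        answer = ask_llm(utterance, slot_question).strip()
        if answer and answer.lower() != UNANSWERABLE:
            slot_values[slot] = answer

    # 3) Assemble the target meaning representation.
    return {"intent": intent, "slots": slot_values}


if __name__ == "__main__":
    # Toy stand-in for an LLM, only to make the sketch runnable end to end.
    def toy_llm(context: str, question: str) -> str:
        if "intents" in question:
            return "create_alarm"
        if "time" in question:
            return "7 am" if "7 am" in context else UNANSWERABLE
        return UNANSWERABLE

    print(parse_utterance(
        "Wake me up at 7 am tomorrow",
        intents=["create_alarm", "get_weather"],
        slots_per_intent={"create_alarm": ["date_time", "alarm_name"]},
        ask_llm=toy_llm,
    ))
```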