Story generation and understanding -- like all NLG/NLU tasks -- have seen a surge in neurosymbolic work. Researchers have recognized that, while large language models (LLMs) have tremendous utility, they can be augmented with symbolic methods to compensate for flaws in purely neural approaches. However, symbolic methods are extremely costly in terms of the time and expertise needed to create them. In this work, we capitalize on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use of symbolic methods for tracking story state and aiding story understanding. We show that our CoRRPUS system and abstracted prompting procedures beat current state-of-the-art structured LLM techniques on pre-existing story understanding tasks (bAbI task 2 and Re^3) with minimal hand engineering. We hope this work highlights the importance of symbolic representations and specialized prompting for LLMs, as these models require guidance to perform reasoning tasks properly.