Automata-based representations play an important role in control and planning for sequential decision-making, but obtaining the high-level task knowledge needed to build such automata is often difficult. Although large-scale generative language models (GLMs) can help automatically distill task knowledge, their textual outputs are not amenable to formal verification or direct use in sequential decision-making. We propose a novel algorithm named GLM2FSA, which constructs a finite state automaton (FSA) encoding high-level task knowledge from a brief description of the task goal. GLM2FSA queries a GLM for task knowledge in textual form and then builds an FSA to represent this knowledge. It thus bridges the gap between text and automata-based representations, and the constructed FSA can be directly used in formal verification. We further provide an algorithm that iteratively refines the queries to the GLM based on the outcomes of verification, e.g., counterexamples. We demonstrate GLM2FSA on examples ranging from everyday tasks, e.g., crossing a road and making coffee, to security applications and laboratory safety protocols.
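To make the query-build-verify-refine loop described above concrete, the following is a minimal Python sketch of that pipeline under simplifying assumptions. All names here (query_glm, text_steps_to_fsa, model_check, glm2fsa) are hypothetical stand-ins introduced for illustration, not the paper's actual implementation or any real library API; the GLM query and the verifier are stubbed out.

```python
# Hypothetical sketch of the GLM2FSA loop: query a GLM for textual steps,
# build an FSA from them, verify it, and refine the query on counterexamples.
from dataclasses import dataclass, field


@dataclass
class FSA:
    """A finite state automaton: states, an initial state, and labeled transitions."""
    states: set = field(default_factory=set)
    initial: str = "q0"
    transitions: dict = field(default_factory=dict)  # (state, action) -> next state


def query_glm(task: str, feedback: str = "") -> list:
    """Placeholder for a GLM query returning step-by-step textual task knowledge."""
    # In practice this would prompt a generative language model with the task
    # description and any verification feedback; here we return a canned answer
    # for the 'cross the road' example mentioned in the abstract.
    return ["look left and right", "wait until no car is coming", "cross the road"]


def text_steps_to_fsa(steps: list) -> FSA:
    """Build a chain-shaped FSA in which each textual step labels one transition."""
    fsa = FSA()
    for i, step in enumerate(steps):
        src, dst = f"q{i}", f"q{i + 1}"
        fsa.states.update({src, dst})
        fsa.transitions[(src, step)] = dst
    return fsa


def model_check(fsa: FSA, spec: str):
    """Placeholder verifier: returns a counterexample trace, or None if the spec holds."""
    return None  # assume the specification holds in this toy sketch


def glm2fsa(task: str, spec: str, max_iters: int = 3) -> FSA:
    """Iteratively query the GLM, build an FSA, verify it, and refine on counterexamples."""
    feedback = ""
    fsa = text_steps_to_fsa(query_glm(task))
    for _ in range(max_iters):
        counterexample = model_check(fsa, spec)
        if counterexample is None:
            return fsa
        feedback = f"avoid this failing trace: {counterexample}"
        fsa = text_steps_to_fsa(query_glm(task, feedback))
    return fsa


if __name__ == "__main__":
    print(glm2fsa("cross the road", "G(cross -> no_car)").transitions)
```

The chain-shaped FSA and the string-valued specification are deliberate simplifications; the actual method maps textual steps to states, transitions, and guards suitable for a formal verifier.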