Automata-based representations play an important role in control and planning for sequential decision-making, but obtaining the high-level task knowledge needed to build automata is often difficult. Although large-scale generative language models (GLMs) can help automatically distill task knowledge, their textual outputs cannot be used directly in sequential decision-making. We address this problem by proposing a novel algorithm named GLM2FSA, which obtains high-level task knowledge, represented as a finite state automaton (FSA), from a brief description of the task goal. GLM2FSA queries a GLM for task knowledge in textual form and then builds an FSA to represent that knowledge. The algorithm thereby bridges the gap between text and automata-based representations, and the constructed FSA can be used directly in sequential decision-making. We provide examples demonstrating how GLM2FSA constructs FSAs that represent the knowledge encoded in texts generated by large-scale GLMs.
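To make the pipeline concrete, the sketch below illustrates the general query-then-construct flow described above: ask a GLM for step-by-step instructions toward a goal, parse the numbered steps, and encode each step as a state of an FSA. The prompt wording, the `query_glm` placeholder, the one-state-per-step encoding, and the `done(...)` transition labels are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch, assuming a GLM that returns numbered step-by-step instructions.
from dataclasses import dataclass, field
import re


@dataclass
class FSA:
    states: list = field(default_factory=list)
    # (current state, condition) -> next state
    transitions: dict = field(default_factory=dict)
    initial: str = "q0"


def query_glm(goal: str) -> str:
    """Placeholder for a GLM call (e.g., a chat-completion API request).

    In practice this would send a prompt such as f"List the steps to {goal}."
    and return the model's textual answer. Here we return a canned response
    for the road-crossing example so the sketch runs standalone.
    """
    return ("1. Stop at the crosswalk.\n"
            "2. Look both ways until the road is clear.\n"
            "3. Cross the road.")


def build_fsa(goal: str) -> FSA:
    """Build an FSA whose k-th state means 'currently executing step k'."""
    steps = [s.strip() for s in re.findall(r"\d+\.\s*(.+)", query_glm(goal))]
    fsa = FSA()
    for i, step in enumerate(steps):
        state, nxt = f"q{i}", f"q{i + 1}"
        fsa.states.append(state)
        # Advance once the step's condition holds; otherwise stay in place.
        fsa.transitions[(state, f"done({step})")] = nxt
        fsa.transitions[(state, f"not done({step})")] = state
    fsa.states.append(f"q{len(steps)}")  # final state: task complete
    return fsa


if __name__ == "__main__":
    automaton = build_fsa("cross the road")
    for (src, cond), dst in automaton.transitions.items():
        print(f"{src} --[{cond}]--> {dst}")
```

The resulting transition system can then be handed to a planner or controller as the task specification, which is the sense in which the constructed FSA is "directly utilizable" in sequential decision-making.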