Large language models (LLMs) offer potential as a source of knowledge for agents that need to acquire new task competencies within a performance environment. We describe efforts toward a novel agent capability that constructs cues (or "prompts") that elicit useful LLM responses for an agent learning a new task. Importantly, responses must not only be "reasonable" (a measure commonly used in research on knowledge extraction from LLMs) but also specific to the agent's task context and in a form that the agent can interpret given its native language capacities. We summarize a series of empirical investigations of prompting strategies and evaluate responses against the goals of targeted and actionable responses for task learning. Our results demonstrate that actionable task knowledge can be obtained from LLMs in support of online agent task learning.