Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allow it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler, solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task applied to smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on a long-context multi-hop QA task, we can teach the sub-tasks more effectively via separate sub-task prompts; and on open-domain multi-hop QA, we can incorporate symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, code, and prompts are available at https://github.com/allenai/DecomP.
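As a minimal sketch of the control flow the abstract describes, the following (hypothetical, not the authors' code) illustrates a decomposer that sequences sub-task handlers for a simple symbolic task, concatenating the first letter of each word. In the actual framework each handler would be a few-shot-prompted LLM call, a further decomposition, or a symbolic function; here all handlers are plain Python functions for clarity.

```python
def split_words(text):
    # Symbolic handler: splitting is trivial for a program, but tokenization
    # errors make it unreliable when folded into a single monolithic prompt.
    return text.split()

def first_letter(word):
    # Stands in for a dedicated sub-task prompt
    # (e.g., "Q: What is the first letter of 'cat'? A: c").
    return word[0]

def concat(letters):
    # Symbolic handler that merges the intermediate answers.
    return "".join(letters)

def solve(text):
    # The decomposer: issues sub-task calls in sequence,
    # feeding each output into the next step.
    words = split_words(text)
    letters = [first_letter(w) for w in words]
    return concat(letters)

print(solve("decomposed prompting works"))  # -> dpw
```

Because each handler is an independent module, any of them can be swapped for a better prompt, a fine-tuned model, or (as above) a symbolic function without touching the rest of the pipeline.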