Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task are themselves hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on a long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-task prompts; and on open-domain multi-hop QA, we can incorporate symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks.
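The modular structure described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the letter-concatenation task and the handler names (`split_words`, `first_letter`, `concatenate`, `HANDLERS`, `decompose_and_solve`) are hypothetical, and plain Python functions stand in for what would, in the actual framework, be few-shot prompted LLM calls or symbolic functions.

```python
# Minimal sketch of a decomposed-prompting pipeline: a decomposer
# delegates sub-tasks to a library of interchangeable handlers.
# Each handler here is a plain function, but any entry could be
# swapped for a prompted LLM, a trained model, or a symbolic routine.

def split_words(text):
    # Sub-task 1: split the input into words.
    return text.split()

def first_letter(word):
    # Sub-task 2: extract the first letter of a word.
    return word[0]

def concatenate(letters):
    # Sub-task 3: join the collected letters into one string.
    return "".join(letters)

# Library of sub-task handlers, keyed by sub-task name.
HANDLERS = {
    "split": split_words,
    "first_letter": first_letter,
    "concat": concatenate,
}

def decompose_and_solve(text):
    # The decomposer sequences the sub-tasks and delegates each
    # step to the handler library.
    words = HANDLERS["split"](text)
    letters = [HANDLERS["first_letter"](w) for w in words]
    return HANDLERS["concat"](letters)

print(decompose_and_solve("decomposed prompting works"))  # -> dpw
```

Because each handler is addressed only through the `HANDLERS` registry, replacing one (e.g., swapping `first_letter` for a more robust prompted model) requires no change to the decomposer itself, mirroring the replaceability the abstract describes.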