Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations. However, the quality of the prompts depends on the demonstrations given to the models, and creating many of them by hand is costly. We introduce Synthetic Prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself, and selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and a forward process to generate new examples. The backward process generates a question that matches a sampled reasoning chain, so that the question is solvable and clear. The forward process produces a more detailed reasoning chain for the question, improving the quality of the example. We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.
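The backward-forward alternation described in the abstract can be sketched as a simple synthesis loop. This is a minimal illustration, not the paper's actual implementation: the `llm` function is a mocked stand-in for a language-model completion call, and the prompt templates and function names are hypothetical.

```python
import random

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a large language model completion call."""
    return f"<completion for: {prompt[:40]}...>"

def backward(seed_examples, target_complexity):
    """Backward process: sample a reasoning chain first, then ask the model
    to write a question answered by that chain, so the synthesized question
    is solvable and clear by construction."""
    chain = llm(f"Write a reasoning chain with {target_complexity} steps, "
                f"in the style of: {seed_examples}")
    question = llm(f"Write a clear question answered by this chain: {chain}")
    return question

def forward(question):
    """Forward process: re-solve the question to obtain a more detailed
    reasoning chain, improving the quality of the synthesized example."""
    return llm(f"Solve step by step: {question}")

def synthesize(seed_examples, n=4):
    """Alternate backward and forward passes to grow a pool of examples
    from a few handcrafted seeds."""
    examples = []
    for _ in range(n):
        complexity = random.randint(2, 5)  # assumed complexity range
        question = backward(seed_examples, complexity)
        chain = forward(question)
        examples.append((question, chain))
    return examples
```

In the paper's full method, a selection step would then choose effective demonstrations from this synthesized pool; the sketch above covers only the generation loop.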