Code generation aims to automatically produce source code that conforms to a given programming specification, and it has received extensive attention, especially with the development of large language models (LLMs). Due to the inherent difficulty of code generation, the code generated by LLMs may not be aligned with the specification. To improve the code generation performance of LLMs, some thought-eliciting prompting techniques have been proposed to guide LLMs in understanding the specification. However, these techniques still struggle to produce a correct understanding for complicated programming problems, leading to unsatisfactory code generation performance. In addition, some feedback-based prompting techniques have been proposed to fix incorrect code using the error messages produced by test execution. However, when the generated code deviates significantly from the ground truth, they have difficulty improving performance based on such coarse-grained information. In this work, we propose a novel prompting technique, called {\mu}FiX, which improves the code generation performance of LLMs by devising both sophisticated thought-eliciting prompting and feedback-based prompting and making the first exploration of their synergy. In the thought-eliciting prompting phase, {\mu}FiX exploits test case analysis to obtain specification understanding and applies a self-improvement process to identify and fix misunderstandings. In the feedback-based prompting phase, {\mu}FiX further revises the specification understanding in the direction that reduces the gap between the provided understanding and the actual understanding implicitly utilized by the LLM during code generation. By obtaining an understanding that is as correct as possible, {\mu}FiX can largely improve the code generation performance of LLMs.
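To make the two-phase workflow described above concrete, the following is a minimal, hypothetical Python sketch of such a test-analysis-plus-feedback prompting loop. It is not the authors' implementation: all names (`mufix_generate`, `llm`, `run_tests`, `max_rounds`) and the prompt wording are assumptions introduced purely for illustration, and any text-in/text-out LLM interface and test runner can be plugged in.

```python
from typing import Callable, List, Tuple

def mufix_generate(
    spec: str,
    visible_tests: List[Tuple[str, str]],  # (input, expected output) pairs taken from the specification
    llm: Callable[[str], str],             # hypothetical text-in/text-out LLM interface
    run_tests: Callable[[str, List[Tuple[str, str]]], List[str]],  # hypothetical runner returning failure messages
    max_rounds: int = 3,
) -> str:
    """Sketch of a two-phase prompting loop in the spirit of the abstract (not the authors' code)."""
    # ---- Phase 1: thought-eliciting prompting via test case analysis ----
    understanding = llm(
        f"Specification:\n{spec}\n\n"
        f"For each test case in {visible_tests}, explain step by step how the input "
        "maps to the expected output, then summarize the underlying requirement."
    )
    # Self-improvement: ask the model to re-check its own analysis and fix misunderstandings.
    understanding = llm(
        f"Specification:\n{spec}\n\nYour analysis:\n{understanding}\n\n"
        "Re-check this analysis against the expected outputs, fix any misunderstanding, "
        "and return the corrected analysis."
    )

    # ---- Phase 2: feedback-based prompting ----
    code = llm(f"Implement this specification.\n{spec}\n\nAnalysis:\n{understanding}")
    for _ in range(max_rounds):
        failures = run_tests(code, visible_tests)
        if not failures:
            break
        # Rather than only pasting error messages, revise the *understanding* so it
        # corrects the misunderstanding revealed by the failures, then regenerate code.
        understanding = llm(
            f"Specification:\n{spec}\n\nCurrent analysis:\n{understanding}\n\n"
            f"Generated code:\n{code}\n\nTest failures:\n{failures}\n\n"
            "Revise the analysis to correct the misunderstanding revealed by these failures."
        )
        code = llm(f"Implement this specification.\n{spec}\n\nAnalysis:\n{understanding}")
    return code
```

Under these assumptions, the loop keeps the specification understanding, rather than the raw error messages alone, as the object being repaired, which is the synergy the abstract emphasizes.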