Background: Code generation tools such as ChatGPT have recently attracted attention for their performance. Generally, a prior analysis of performance is needed to select a new code generation tool from a list of candidates. Without such analysis, there is a higher risk of selecting an ineffective tool, which negatively affects software development productivity. Moreover, conducting a prior analysis of new code generation tools takes time and effort. Aim: To use a new code generation tool without prior analysis but with low risk, we propose evaluating new tools during software development (i.e., online optimization). Method: We apply a bandit algorithm (BA) approach to help select the best code generation tool among the candidates. Developers evaluate whether each tool's output is correct. As code generation and evaluation are repeated, the evaluation results accumulate, and the BA approach uses these stored results to select the best tool. In a preliminary analysis, we evaluated five code generation tools on 164 code generation cases using BA. Result: The BA approach selected ChatGPT as the best tool as the evaluation proceeded, and during the evaluation, the average accuracy achieved by the BA approach outperformed that of the second-best tool. Our results suggest the feasibility and effectiveness of BA in assisting the selection of the best-performing code generation tool.
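The abstract does not name the specific bandit algorithm used. As one possible instantiation consistent with the described setup (binary correct/incorrect feedback from developers, rewards accumulated per tool), a minimal Beta-Bernoulli Thompson sampling selector might look like the following sketch; the tool names and the feedback source are placeholders, not details from the paper.

```python
import random

class ToolSelector:
    """Beta-Bernoulli Thompson sampling over candidate code generation tools.

    Illustrative sketch only: the paper does not specify which bandit
    algorithm it uses.
    """

    def __init__(self, tools):
        # One Beta(alpha, beta) posterior per tool, starting from the
        # uniform prior Beta(1, 1): [successes + 1, failures + 1].
        self.stats = {tool: [1, 1] for tool in tools}

    def select(self):
        # Draw an accuracy estimate from each tool's posterior and
        # pick the tool with the highest sampled value.
        samples = {t: random.betavariate(a, b) for t, (a, b) in self.stats.items()}
        return max(samples, key=samples.get)

    def update(self, tool, correct):
        # Record the developer's verdict on the tool's output.
        if correct:
            self.stats[tool][0] += 1
        else:
            self.stats[tool][1] += 1


# Usage: repeatedly pick a tool, generate code, record the developer's verdict.
selector = ToolSelector(["ToolA", "ToolB", "ToolC"])  # hypothetical candidates
for _ in range(164):  # e.g., one round per code generation case
    chosen = selector.select()
    verdict = random.random() < 0.5  # stand-in for the developer's evaluation
    selector.update(chosen, verdict)
```

Thompson sampling is one natural fit here because the binary developer evaluations map directly onto Bernoulli rewards, and the posterior concentrates on the best-performing tool as evaluations accumulate; other bandit strategies such as epsilon-greedy or UCB would follow the same select/update loop.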