We present a benchmark for evaluating method-level code generation. The benchmark comprises a dataset of 175 samples for automated evaluation and a dataset of 161 samples for manual evaluation. We also introduce a new metric for automatically evaluating the correctness of generated code, along with a set of criteria for manually evaluating its overall quality.