While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric. This creates a barrier for program developers who are not proficient in English. To mitigate this gap in technology development across languages, we propose a multilingual dataset, MCoNaLa, to benchmark code generation from natural language commands extending beyond English. Following the methodology of the English Code/Natural Language Challenge (CoNaLa) dataset, we annotated a total of 896 NL-code pairs in three languages: Spanish, Japanese, and Russian. We present a quantitative evaluation of performance on the MCoNaLa dataset by testing state-of-the-art code generation systems. While the difficulties vary across these three languages, all systems lag significantly behind their English counterparts, revealing the challenges in adapting code generation to new languages.