Over the past few years, improving the code generation capabilities of LLMs has been a key focus of NLP research. Yet despite Bengali having 242 million native speakers worldwide, the language receives little attention when it comes to training LLMs. More recently, various fine-tuning and augmented generation techniques have been employed to significantly enhance code generation performance; however, these methods demand considerable expertise and resources from end users. The goal of our work is to democratize access to powerful code generation tools in resource-constrained emerging markets, enabling users to leverage them in their native language. We introduce a novel approach that combines Test-Driven Development (TDD) and a Code Interpreter (CI) with open-weight models, improving the baseline accuracy of code generation from Bengali prompts and achieving an overall accuracy of 85%. Our approach requires no fine-tuning and shows that even the smallest models in a family can attain up to 98% of the accuracy of the largest models. All of our results are shared publicly on GitHub for validation and reproducibility.
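As a rough illustration of the pipeline named above, the sketch below shows one way a TDD plus Code Interpreter loop could be wired together around an open-weight model. The `generate` helper, the prompt wording, and the retry budget are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of a TDD + Code Interpreter (CI) loop, assuming a generic
# text-in/text-out interface to an open-weight LLM.
import subprocess
import sys
import tempfile


def generate(prompt: str) -> str:
    """Hypothetical call to an open-weight LLM; replace with your own client."""
    raise NotImplementedError


def tdd_ci_loop(task_prompt: str, max_rounds: int = 3) -> str:
    # Step 1 (TDD): ask the model to write assert-based tests for the task first.
    tests = generate(f"Write plain assert-based Python tests for this task:\n{task_prompt}")

    # Initial candidate solution.
    code = generate(f"Write a Python solution for this task:\n{task_prompt}")

    for _ in range(max_rounds):
        # Step 2 (CI): execute the candidate code together with the generated tests.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code + "\n\n" + tests)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # all generated tests pass

        # Step 3: feed interpreter errors back to the model and retry.
        code = generate(
            "Fix this Python code so the tests pass.\n"
            f"Code:\n{code}\nTests:\n{tests}\nError:\n{result.stderr}"
        )
    return code
```

In this sketch the task prompt (for example, a Bengali problem statement) stays in its original language; only the scaffolding around the model enforces the test-first, execute-and-repair cycle.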