The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at https://hf.co/bigcode.