With the rise of AI in software engineering (SE), researchers have shown how AI can assist software developers in a wide variety of activities. However, this rise has not been accompanied by a complementary increase in labelled datasets, which many supervised learning methods require. In recent years, several studies have used crowdsourcing platforms to collect labelled training data; however, research has shown that the quality of such labels is unstable due to participant bias, variance in knowledge, and task difficulty. We therefore present CodeLabeller, a web-based tool that aims to handle the labelling of Java source files at scale more efficiently by improving the data collection process throughout and by increasing the reliability of responses, requiring each labeller to attach a confidence rating to each of their responses. We test CodeLabeller by constructing a corpus of over a thousand source files obtained from a large collection of open-source Java projects and labelling each Java source file with its respective design patterns and a summary. Apart from helping researchers crowdsource a labelled dataset, the tool has practical applicability in software engineering education and assists in building expert ratings for software artefacts. This paper discusses the motivation behind CodeLabeller, its intended users, a tool demonstration and its UI, its implementation, its benefits, and, lastly, its evaluation through a user study and in-practice usage.