Recently, deep learning-based techniques have shown promising performance on various software engineering tasks. For these learning-based approaches to perform well, obtaining high-quality data is a fundamental and crucial issue. Comment updating is an emerging software engineering task that aims to automatically update a comment when its corresponding source code changes. However, datasets for the comment updating task are usually crawled from commits in open-source repositories such as GitHub, where there is little quality control over comments. In this paper, we focus on cleaning existing comment updating datasets by taking into account properties of the comment updating process in software development. We propose a semantic- and overlap-aware approach named CupCleaner (Comment UPdating's CLEANER) for this purpose. Specifically, we compute a score based on the semantics of, and the overlap between, the code and the comments. Based on the distribution of these scores, we filter out the data in the low-score tail of the distribution to remove potentially noisy samples. We first conducted a human evaluation of the noisy data and the high-quality data identified by CupCleaner. The results show that the human ratings of the noisy data identified by CupCleaner are significantly lower. We then applied our data cleaning approach to the training and validation sets of three existing comment updating datasets while keeping the test sets unchanged. Our experimental results show that even after filtering out over 30% of the data using CupCleaner, all performance metrics still improve. The experimental results on the cleaned test set further suggest that CupCleaner may help in constructing datasets for updating-related tasks.
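
To make the score-and-filter idea concrete, the minimal sketch below shows one way such a step could look. It is not the authors' implementation: `semantic_sim` stands in for any pluggable semantic similarity (e.g., cosine similarity over sentence embeddings), the token-level Jaccard overlap, the weight `alpha`, and the percentile cutoff are all illustrative assumptions.

```python
# Hypothetical sketch of a CupCleaner-style cleaning step (not the authors' code).
# Each sample is a (code_change, old_comment, new_comment) triple; samples whose
# combined semantic/overlap score falls in the low tail of the distribution are dropped.
import numpy as np

def token_overlap(a: str, b: str) -> float:
    """Simple token-level Jaccard overlap between two strings (assumed metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def clean(samples, semantic_sim, alpha=0.5, tail_percentile=30):
    """Filter out samples in the low-score tail; alpha and tail_percentile are assumptions."""
    scores = []
    for code, old_comment, new_comment in samples:
        sem = semantic_sim(code, new_comment)      # semantic relatedness of code and new comment
        ov = token_overlap(old_comment, new_comment)  # lexical overlap between comment versions
        scores.append(alpha * sem + (1 - alpha) * ov)
    cutoff = np.percentile(scores, tail_percentile)  # low-score tail is treated as noise
    return [s for s, sc in zip(samples, scores) if sc >= cutoff]
```

In this sketch, the percentile threshold is what controls how aggressively data is removed; the abstract's observation that performance improves even after filtering over 30% of the data corresponds to choosing a fairly large tail.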