Toxic conversations during software development interactions may have serious repercussions for a Free and Open Source Software (FOSS) development project. For example, victims of toxic conversations may become afraid to express themselves, grow demotivated, and eventually leave the project. Automated filtering of toxic conversations may help a FOSS community maintain healthy interactions among its members. However, off-the-shelf toxicity detectors perform poorly on Software Engineering (SE) datasets, such as one curated from code review comments. To address this challenge, we present ToxiCR, a supervised learning-based toxicity identification tool for code review interactions. ToxiCR offers a choice among ten supervised learning algorithms, a selection of text vectorization techniques, eight preprocessing steps, and a large-scale labeled dataset of 19,571 code review comments. Two of the eight preprocessing steps are SE domain specific. Through a rigorous evaluation of the models with various combinations of preprocessing steps and vectorization techniques, we identified the combination that performs best on our dataset, achieving 95.8% accuracy and an 88.9% F1 score. ToxiCR significantly outperforms existing toxicity detectors on our dataset. We have publicly released our dataset, pre-trained models, evaluation results, and source code at: https://github.com/WSU-SEAL/ToxiCR
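To make the configurable pipeline concrete, the following is a minimal sketch of a ToxiCR-style experiment, assuming a scikit-learn workflow: one SE-specific preprocessing step, one vectorization choice, and one of the supervised algorithms, evaluated with accuracy and F1. The helper strip_code_snippets and the toy data are illustrative assumptions, not ToxiCR's actual API; see the repository above for the real tool.

```python
# Sketch of a ToxiCR-style experiment: preprocessing + vectorization + classifier.
# Hypothetical illustration only; the actual ToxiCR interface may differ.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.pipeline import Pipeline

def strip_code_snippets(comment: str) -> str:
    """Example SE-specific preprocessing step (hypothetical helper):
    drop inline code fragments, which add noise to toxicity features."""
    return re.sub(r"`[^`]*`", " ", comment)

# Toy stand-ins for labeled code review comments (1 = toxic, 0 = non-toxic).
comments = ["this patch is garbage, did you even test it?", "LGTM, nice cleanup"]
labels = [1, 0]

X = [strip_code_snippets(c) for c in comments]
pipeline = Pipeline([
    ("vectorize", TfidfVectorizer(ngram_range=(1, 2))),  # one vectorization choice
    ("classify", LogisticRegression(max_iter=1000)),     # one algorithm choice
])
pipeline.fit(X, labels)
preds = pipeline.predict(X)
print("accuracy:", accuracy_score(labels, preds), "F1:", f1_score(labels, preds))
```

In a real evaluation, as in the paper, each combination of preprocessing steps, vectorizer, and learning algorithm would be trained and scored on the labeled dataset of 19,571 code review comments to find the best-performing configuration.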