In this research, we use user-defined labels from three internet text sources (Reddit, Stack Exchange, arXiv) to train 21 different machine learning models for the topic classification task of detecting cybersecurity discussions in natural text. We analyze the false positive and false negative rates of each of the 21 models in a cross-validation experiment. We then present a Cybersecurity Topic Classification (CTC) tool, which takes the majority vote of the 21 trained machine learning models as its decision mechanism for detecting cybersecurity-related text. We show that the majority vote mechanism of the CTC tool yields lower false negative and false positive rates, on average, than any of the 21 individual models. We also show that the CTC tool scales to hundreds of thousands of documents with a wall-clock time on the order of hours.
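The majority-vote decision mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the example predictions are hypothetical.

```python
def majority_vote(predictions):
    """Return 1 (cybersecurity-related) if more than half of the
    binary model predictions are 1, else 0."""
    return 1 if sum(predictions) > len(predictions) / 2 else 0

# Hypothetical outputs of the 21 models for one document:
# 12 models vote "cybersecurity" (1), 9 vote "not cybersecurity" (0).
preds = [1] * 12 + [0] * 9
label = majority_vote(preds)  # majority says cybersecurity-related
```

Using an odd number of voters (21) guarantees that a strict majority always exists, so no tie-breaking rule is needed for binary labels.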