Classification of malware families is crucial for a comprehensive understanding of how they can infect devices, computers, or systems. Malware identification thus enables security researchers and incident responders to take precautions against malware and accelerate mitigation. API call sequences made by malware are widely used features for machine and deep learning malware classifiers, as these sequences represent the behavior of the malware. However, traditional machine and deep learning models remain unable to capture the sequential relationships between API calls. Transformer-based models, in contrast, process a sequence as a whole and learn relationships between API calls through multi-head attention mechanisms and positional embeddings. Our experiments demonstrate that a transformer model with a single transformer block surpassed the widely used baseline architecture, LSTM. Moreover, the pre-trained transformer models BERT and CANINE outperformed it in classifying highly imbalanced malware families, as measured by the F1-score and AUC evaluation metrics. Furthermore, the proposed bagging-based random transformer forest (RTF), an ensemble of BERT or CANINE models, reached state-of-the-art evaluation scores on three out of four datasets, including a state-of-the-art F1-score of 0.6149 on one of the commonly used benchmark datasets.
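The bagging scheme underlying the RTF ensemble can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy nearest-centroid classifier over API-call count vectors stands in for a fine-tuned BERT/CANINE base learner, and all class names, API call names, and parameter values are hypothetical.

```python
import random
from collections import Counter

class CentroidClassifier:
    """Toy base learner (stand-in for a fine-tuned transformer):
    nearest centroid over bag-of-API-call count vectors."""

    def fit(self, X, y):
        sums, per_class = {}, Counter(y)
        for seq, label in zip(X, y):
            sums.setdefault(label, Counter()).update(seq)
        # Average token counts per class so centroids are comparable.
        self.centroids = {c: {t: v / per_class[c] for t, v in cnt.items()}
                          for c, cnt in sums.items()}
        return self

    def predict_one(self, seq):
        # Score each family by how often its centroid saw these calls.
        return max(self.centroids,
                   key=lambda c: sum(self.centroids[c].get(t, 0.0) for t in seq))

class BaggingEnsemble:
    """Bagging: train each base learner on a bootstrap resample of the
    training set, then combine predictions by majority vote."""

    def __init__(self, n_estimators=5, seed=0):
        self.n_estimators = n_estimators
        self.rng = random.Random(seed)

    def fit(self, X, y):
        self.models = []
        for _ in range(self.n_estimators):
            # Draw len(X) samples with replacement (bootstrap).
            idx = [self.rng.randrange(len(X)) for _ in range(len(X))]
            self.models.append(
                CentroidClassifier().fit([X[i] for i in idx],
                                         [y[i] for i in idx]))
        return self

    def predict_one(self, seq):
        votes = Counter(m.predict_one(seq) for m in self.models)
        return votes.most_common(1)[0][0]

# Illustrative training data: API call sequences per malware family.
X = [
    ["CreateFile", "WriteFile", "CloseHandle"],
    ["CreateFile", "WriteFile", "WriteFile"],
    ["CreateFile", "CloseHandle", "WriteFile"],
    ["RegOpenKey", "RegSetValue"],
    ["RegOpenKey", "RegSetValue", "RegCloseKey"],
    ["RegSetValue", "RegOpenKey", "RegCloseKey"],
]
y = ["file_worm", "file_worm", "file_worm",
     "reg_trojan", "reg_trojan", "reg_trojan"]

clf = BaggingEnsemble(n_estimators=5, seed=0).fit(X, y)
```

In the paper's setting, each base learner would be a BERT or CANINE model fine-tuned on a bootstrap resample of the tokenized API call sequences; the majority vote over the ensemble is what mitigates the class imbalance sensitivity of a single model.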