Existing works on anomaly detection (AD) rely on clean labels from human annotators, which are expensive to acquire in practice. In this work, we propose a method to leverage weak/noisy labels (e.g., risk scores generated by machine rules for detecting malware) that are cheaper to obtain, for anomaly detection. Specifically, we propose ADMoE, the first framework for anomaly detection algorithms to learn from noisy labels. In a nutshell, ADMoE leverages a mixture-of-experts (MoE) architecture to encourage specialized and scalable learning from multiple noisy sources. It captures the similarities among noisy labels by sharing most model parameters, while encouraging specialization by building "expert" sub-networks. To further extract signal from the noisy labels, ADMoE uses them as input features to facilitate expert learning. Extensive results on eight datasets (including a proprietary enterprise-security dataset) demonstrate the effectiveness of ADMoE, where it brings up to 34% performance improvement over not using it. Also, it outperforms a total of 13 leading baselines with equivalent network parameters and FLOPS. Notably, ADMoE is model-agnostic, enabling any neural network-based detection method to handle noisy labels; we showcase its results on both a multi-layer perceptron (MLP) and the leading AD method DeepSAD.
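To make the described architecture concrete, below is a minimal PyTorch sketch of the core idea, assuming a simple MLP backbone. The class name ADMoESketch, the layer sizes, the single-layer gate, and the use of one score head are illustrative assumptions for exposition, not the authors' reference implementation.

```python
# Minimal sketch of the ADMoE idea (hypothetical names and sizes):
# a shared backbone captures what the noisy label sources agree on,
# small "expert" sub-networks encourage specialization, and the noisy
# labels themselves are appended to the input features so the gating
# network and experts can exploit them.
import torch
import torch.nn as nn


class ADMoESketch(nn.Module):
    def __init__(self, n_features: int, n_noisy_sources: int,
                 n_experts: int = 4, hidden: int = 64):
        super().__init__()
        in_dim = n_features + n_noisy_sources  # noisy labels as extra inputs
        # Shared parameters: most of the model capacity lives here.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Lightweight expert sub-networks for source-specific patterns.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(n_experts)]
        )
        # Gating network mixes experts per sample, conditioned on the
        # input (including the noisy labels).
        self.gate = nn.Linear(in_dim, n_experts)
        self.head = nn.Linear(hidden, 1)  # anomaly score

    def forward(self, x: torch.Tensor,
                noisy_labels: torch.Tensor) -> torch.Tensor:
        z_in = torch.cat([x, noisy_labels], dim=-1)
        h = self.backbone(z_in)                                    # (B, H)
        weights = torch.softmax(self.gate(z_in), dim=-1)           # (B, E)
        expert_out = torch.stack([e(h) for e in self.experts], 1)  # (B, E, H)
        mixed = (weights.unsqueeze(-1) * expert_out).sum(dim=1)    # (B, H)
        return self.head(mixed).squeeze(-1)                        # (B,)


# Usage (hypothetical shapes): 32 input features, 3 noisy label sources.
model = ADMoESketch(n_features=32, n_noisy_sources=3)
scores = model(torch.randn(8, 32), torch.rand(8, 3))
```

Because the MoE block only adds small expert sub-networks and a gate on top of the shared backbone, this keeps parameters and FLOPS close to a plain single-network detector, which matches the paper's comparison against baselines with equivalent budgets.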