Software-intensive systems produce logs for troubleshooting purposes. Recently, many deep learning models have been proposed to automatically detect system anomalies based on log data. These models typically claim very high detection accuracy. For example, most models report an F-measure greater than 0.9 on the commonly used HDFS dataset. To gain a deeper understanding of how far we are from solving the problem of log-based anomaly detection, in this paper we conduct an in-depth analysis of five state-of-the-art deep learning-based models for detecting system anomalies on four public log datasets. Our experiments focus on several aspects of model evaluation, including training data selection, data grouping, class distribution, data noise, and early detection ability. Our results show that all these aspects have a significant impact on the evaluation, and that the studied models do not always perform well. The problem of log-based anomaly detection has not been solved yet. Based on our findings, we also suggest possible directions for future work.
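As a point of reference for the reported numbers, the F-measure (F1 score) cited above is the harmonic mean of precision and recall over the anomaly class. A minimal sketch of its computation from raw detection counts (the count values here are purely illustrative, not from the study):

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score from true positives, false positives, and false negatives.

    F1 = 2 * P * R / (P + R), where P = tp / (tp + fp) and R = tp / (tp + fn).
    Returns 0.0 when no positives are predicted or present.
    """
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Illustrative example: 90 anomalies caught, 10 false alarms, 10 missed.
print(f_measure(tp=90, fp=10, fn=10))  # 0.9
```

Note that on highly imbalanced log datasets (one of the class-distribution aspects the study examines), a high F1 on one grouping of the data does not guarantee similar performance under a different grouping or noise level.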