Recently, deep learning-based algorithms have been widely adopted for anomaly detection because they can establish detection models with little or no domain knowledge of the task. To train such artificial neural networks stably, however, an appropriate network structure and loss function must be defined. The mean squared error (MSE) function is widely used to train anomaly detection models. In this paper, a novel loss function, the logarithmic mean squared error (LMSE), is proposed to train neural networks more stably. This study presents a range of comparisons: a mathematical analysis, visualization of the derivatives used in backpropagation, loss convergence during training, and anomaly detection performance. Overall, LMSE is superior to the existing MSE function in terms of the stability of loss convergence and anomaly detection performance. The LMSE function is expected to be applicable to training not only anomaly detection models but also general generative neural networks.
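For concreteness, the sketch below shows how such a loss could be implemented in PyTorch next to standard MSE. The abstract does not give the LMSE formula; the form log(1 + MSE) used here (via `torch.log1p`) is only an assumption for illustration, chosen because its gradient with respect to the MSE is 1 / (1 + MSE) and therefore stays bounded, which is one plausible source of the more stable convergence claimed above.

```python
import torch

def mse_loss(pred, target):
    """Standard mean squared error, the widely used baseline."""
    return torch.mean((pred - target) ** 2)

def lmse_loss(pred, target):
    """Hypothetical logarithmic MSE: log(1 + MSE).

    Assumed form, not necessarily the paper's exact definition.
    The logarithm compresses large reconstruction errors, so the
    gradient flowing back through the loss stays bounded even for
    badly reconstructed samples.
    """
    return torch.log1p(mse_loss(pred, target))

# Toy usage: one backward pass of a reconstruction-style objective.
pred = torch.randn(8, 16, requires_grad=True)
target = torch.randn(8, 16)
loss = lmse_loss(pred, target)
loss.backward()
```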