Deep learning models utilizing convolution layers have achieved state-of-the-art performance on univariate time series classification tasks. In this work, we propose improving CNN-based time series classifiers by replacing their standard convolutions with Octave Convolutions (OctConv). The architectures we augment include Fully Convolutional Networks (FCN), Residual Neural Networks (ResNet), LSTM-Fully Convolutional Networks (LSTM-FCN), and Attention LSTM-Fully Convolutional Networks (ALSTM-FCN). The proposed layers significantly improve each of these models while adding only a minimal number of network parameters. We show experimentally that substituting standard convolutions with OctConv significantly improves classification accuracy on most of the benchmark datasets. In addition, the updated ALSTM-OctFCN performs statistically on par with the top two time series classifiers, TS-CHIEF and HIVE-COTE (both ensemble models). To further explore the impact of the OctConv layers, we perform ablation tests comparing each augmented model to its base model.
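For readers unfamiliar with OctConv, the sketch below illustrates the idea of a 1-D octave convolution layer: the channels are split into a high-frequency branch at full temporal resolution and a low-frequency branch at half resolution, with cross-frequency exchange between them. This is a minimal illustrative implementation in PyTorch, not the authors' code; the `OctConv1d` class name, the default `alpha` values, and the use of average pooling and nearest-neighbour upsampling are assumptions made for clarity.

```python
import torch.nn as nn
import torch.nn.functional as F


class OctConv1d(nn.Module):
    """Minimal 1-D Octave Convolution sketch.

    Channels are split into a high-frequency branch (full temporal
    resolution) and a low-frequency branch (half resolution); the four
    paths H->H, H->L, L->H, L->L exchange information across scales.
    Assumes 0 < alpha_in, alpha_out < 1 (interior layers).
    """

    def __init__(self, in_channels, out_channels, kernel_size,
                 alpha_in=0.25, alpha_out=0.25):
        super().__init__()
        in_lo = int(alpha_in * in_channels)
        in_hi = in_channels - in_lo
        out_lo = int(alpha_out * out_channels)
        out_hi = out_channels - out_lo

        self.conv_hh = nn.Conv1d(in_hi, out_hi, kernel_size, padding="same")
        self.conv_hl = nn.Conv1d(in_hi, out_lo, kernel_size, padding="same")
        self.conv_lh = nn.Conv1d(in_lo, out_hi, kernel_size, padding="same")
        self.conv_ll = nn.Conv1d(in_lo, out_lo, kernel_size, padding="same")

    def forward(self, x_hi, x_lo):
        # High-frequency output: H->H plus upsampled L->H.
        y_hi = self.conv_hh(x_hi)
        y_hi = y_hi + F.interpolate(self.conv_lh(x_lo),
                                    size=y_hi.shape[-1], mode="nearest")
        # Low-frequency output: pooled H->L plus L->L.
        y_lo = self.conv_hl(F.avg_pool1d(x_hi, 2)) + self.conv_ll(x_lo)
        return y_hi, y_lo
```

In practice (following the original OctConv formulation), the first layer of a network would use `alpha_in = 0` and the last OctConv layer `alpha_out = 0`, so that the input and output are ordinary single-resolution feature maps; those boundary cases are omitted here for brevity.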