Multivariate time series classification (MTSC) is an important data mining task that can be effectively addressed with popular deep learning techniques. Unfortunately, existing deep learning-based methods neglect the hidden dependencies among different dimensions and rarely consider the unique dynamic features of time series, so they lack sufficient feature extraction capability to achieve satisfactory classification accuracy. To address this problem, we propose a novel temporal dynamic graph neural network (TodyNet) that can extract hidden spatio-temporal dependencies without a predefined graph structure. It enables information flow among isolated but implicitly interdependent variables and captures the associations between different time slots through a dynamic graph mechanism, which further improves the classification performance of the model. Meanwhile, hierarchical graph representations cannot be learned directly due to the limitations of GNNs. Thus, we also design a temporal graph pooling layer with learnable temporal parameters to obtain a global graph-level representation for graph learning. The dynamic graph, graph information propagation, and temporal convolution are jointly learned in an end-to-end framework. Experiments on 26 UEA benchmark datasets demonstrate that the proposed TodyNet outperforms existing deep learning-based methods on MTSC tasks.
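To make the described architecture concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of how a per-time-slot dynamic graph, one-hop graph information propagation, temporal convolution, and a learnable graph pooling could be combined end-to-end. All module names, dimensions, and the specific propagation and pooling forms (slot-wise node embeddings, softmax-normalized adjacency, soft cluster assignment) are illustrative assumptions.

```python
# Hypothetical sketch of the ideas in the abstract; shapes and layer choices are assumptions.
import torch
import torch.nn as nn


class DynamicGraphBlock(nn.Module):
    """Learns one soft adjacency matrix per time slot from node embeddings."""

    def __init__(self, num_nodes: int, num_slots: int, emb_dim: int = 16):
        super().__init__()
        # Separate source/target embeddings per time slot (assumption).
        self.src = nn.Parameter(torch.randn(num_slots, num_nodes, emb_dim))
        self.dst = nn.Parameter(torch.randn(num_slots, num_nodes, emb_dim))

    def forward(self) -> torch.Tensor:
        # (slots, nodes, nodes): row-normalized, non-negative adjacency.
        logits = torch.einsum("sne,sme->snm", self.src, self.dst)
        return torch.softmax(torch.relu(logits), dim=-1)


class TodyNetSketch(nn.Module):
    def __init__(self, num_vars: int, seq_len: int, num_classes: int,
                 num_slots: int = 4, hidden: int = 32, pooled_nodes: int = 2):
        super().__init__()
        assert seq_len % num_slots == 0, "seq_len must divide into time slots"
        self.num_slots, self.hidden = num_slots, hidden
        self.graph = DynamicGraphBlock(num_vars, num_slots)
        # Per-variable 1-D temporal convolution (grouped convolution).
        self.tconv = nn.Conv1d(num_vars, num_vars * hidden, kernel_size=3,
                               padding=1, groups=num_vars)
        # Learnable assignment of variables (graph nodes) to fewer cluster
        # nodes, standing in for the temporal graph pooling layer (assumption).
        self.pool = nn.Parameter(torch.randn(num_vars, pooled_nodes))
        self.head = nn.Linear(pooled_nodes * hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_vars, seq_len)
        b, n, t = x.shape
        h = self.tconv(x).view(b, n, self.hidden, t)           # temporal features
        adj = self.graph()                                      # (slots, n, n)
        slot_len = t // self.num_slots
        slots = h.view(b, n, self.hidden, self.num_slots, slot_len).mean(-1)
        # One-hop propagation with a slot-specific graph (assumption).
        mixed = torch.einsum("snm,bmhs->bnhs", adj, slots)
        assign = torch.softmax(self.pool, dim=0)                # (n, pooled)
        pooled = torch.einsum("np,bnhs->bphs", assign, mixed).mean(-1)
        return self.head(pooled.flatten(1))


if __name__ == "__main__":
    model = TodyNetSketch(num_vars=6, seq_len=64, num_classes=5)
    out = model(torch.randn(8, 6, 64))
    print(out.shape)  # torch.Size([8, 5])
```

In this sketch the dynamic graph parameters, the grouped temporal convolution, and the pooling assignment are all optimized jointly by backpropagating a standard classification loss through the final linear head, mirroring the end-to-end training described above.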