Multi-view time series classification (MVTSC) aims to improve classification performance by fusing the distinctive temporal information from multiple views. Existing methods mainly focus on fusing multi-view information at an early stage, e.g., by learning a common feature subspace among multiple views. However, these early-fusion methods may not fully exploit the unique temporal patterns of each view in complicated time series. Moreover, the label correlations across multiple views, which are critical to boosting performance, are usually under-explored for the MVTSC problem. To address the aforementioned issues, we propose a Correlative Channel-Aware Fusion (C2AF) network. First, C2AF extracts comprehensive and robust temporal patterns with a two-stream structured encoder for each view, and captures the intra-view and inter-view label correlations with a graph-based correlation matrix. Second, a channel-aware learnable fusion mechanism is implemented through convolutional neural networks to further explore the global correlative patterns. These two steps are trained end-to-end in the proposed C2AF network. Extensive experimental results on three real-world datasets demonstrate the superiority of our approach over state-of-the-art methods. A detailed ablation study is also provided to show the effectiveness of each model component.
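To make the pipeline concrete, the following is a minimal numpy sketch of the two-step idea the abstract describes: per-view encoding, a graph-style correlation matrix over the stacked view-wise label scores, and a channel-wise weighted fusion. Every component here is a simplified stand-in assumption (hand-crafted mean/variation features for the two-stream encoder, random matrices for learned weights), not the actual C2AF implementation, which uses learned deep encoders and convolutional fusion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_classes, t_len = 2, 3, 50

def encode(series):
    # Stand-in for the two-stream encoder: a "global" stream (temporal mean)
    # and a "local" stream (mean absolute first difference).
    global_feat = series.mean(axis=-1)
    local_feat = np.abs(np.diff(series, axis=-1)).mean(axis=-1)
    return np.concatenate([global_feat, local_feat])

# Toy multi-view input: each view is a (channels, time) array.
views = [rng.normal(size=(n_classes, t_len)) for _ in range(n_views)]
feats = [encode(v) for v in views]  # each of length 2 * n_classes

# Per-view label scores via a random projection (stand-in for a learned head).
W = [rng.normal(size=(n_classes, 2 * n_classes)) for _ in range(n_views)]
scores = np.stack([w @ f for w, f in zip(W, feats)])  # (n_views, n_classes)

# Graph-based correlation matrix over all view-label scores: the outer
# product covers both intra-view and inter-view label correlations.
flat = scores.reshape(-1)                  # (n_views * n_classes,)
corr = np.outer(flat, flat)                # (n_views*n_classes, n_views*n_classes)

# Channel-aware fusion: one weight per view-label channel (random stand-in
# for the learnable convolutional fusion), pooled into final class scores.
channel_w = rng.normal(size=(n_views * n_classes, 1))
fused = (channel_w * corr).sum(axis=0).reshape(n_views, n_classes).mean(axis=0)

pred = int(np.argmax(fused))
print(fused.shape, pred)
```

In the real network these stand-ins are trained jointly end-to-end, so the fusion weights learn which correlative channels matter; the sketch only shows where each piece sits in the data flow.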