Implicit neural representations (INRs) have recently emerged as a powerful tool for accurate, resolution-independent encoding of data. Their robustness as general approximators has been demonstrated on a wide variety of data sources, with applications in image, sound, and 3D scene representation. However, little attention has been given to leveraging these architectures for the representation and analysis of time series data. In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed. We show how these networks can be leveraged for the imputation of time series, with applications to both univariate and multivariate data. Finally, we propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset. We introduce an FFT-based loss to guide training so that all frequencies of the time series are preserved. We show that this network can be used to encode time series as INRs, and that their embeddings can be interpolated to generate new time series from existing ones. We evaluate our generative method by using it for data augmentation, and show that it is competitive with current state-of-the-art approaches for time series augmentation.
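Since the abstract only names the ingredients, the following PyTorch sketch illustrates the general idea under stated assumptions: a small sine-activated MLP (in the spirit of SIREN) fits a single univariate series as a function t ↦ x(t), trained with a time-domain MSE plus an FFT-magnitude term. The class and function names, the frequency scale `omega_0 = 30.0`, and the weighting `alpha` are illustrative choices, not the authors' implementation, and the exact form of the FFT-based loss is an assumption.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a scaled sine activation (SIREN-style)."""
    def __init__(self, in_dim, out_dim, omega_0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega_0 = omega_0

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class TimeSeriesINR(nn.Module):
    """Maps a timestamp t in [-1, 1] to the series value x(t).
    (SIREN's specialized weight initialization is omitted for brevity.)"""
    def __init__(self, hidden=128, layers=3):
        super().__init__()
        blocks = [SineLayer(1, hidden)]
        blocks += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        self.net = nn.Sequential(*blocks, nn.Linear(hidden, 1))

    def forward(self, t):
        return self.net(t)

def fft_loss(pred, target, alpha=0.5):
    """One plausible FFT-based objective (assumption): time-domain MSE
    plus an MSE on rFFT magnitudes, so that high-frequency content
    contributes to the loss rather than only the dominant low frequencies."""
    time_term = nn.functional.mse_loss(pred, target)
    freq_term = nn.functional.mse_loss(
        torch.fft.rfft(pred.squeeze(-1)).abs(),
        torch.fft.rfft(target.squeeze(-1)).abs(),
    )
    return time_term + alpha * freq_term

# Fit one toy series: t has shape (N, 1), x has shape (N, 1).
N = 512
t = torch.linspace(-1, 1, N).unsqueeze(-1)
x = torch.sin(8 * torch.pi * t) + 0.3 * torch.sin(40 * torch.pi * t)
model = TimeSeriesINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(2000):
    opt.zero_grad()
    loss = fft_loss(model(t), x)
    loss.backward()
    opt.step()
```

Once fitted, the INR can be queried at arbitrary timestamps, which is what makes this encoding resolution-independent and directly usable for imputation at unobserved time points.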