Despite the high performance of neural network-based time series forecasting methods, the inherent difficulty of explaining their predictions has limited their applicability in certain application areas. Because identifying causal relationships between the input and output of such black-box methods is difficult, they have rarely been adopted in domains such as the legal and medical fields, where the reliability and interpretability of the results can be essential. In this paper, we propose \model, a novel deep learning-based probabilistic time series forecasting architecture that is intrinsically interpretable. We conduct experiments across multiple datasets and performance metrics and empirically show that our model is not only interpretable but also achieves performance comparable to state-of-the-art probabilistic time series forecasting methods. Furthermore, we demonstrate that interpreting the parameters of the stochastic processes of interest can provide useful insights into several application areas.