Explanation methods applied to sequential models for multivariate time series prediction are receiving increasing attention in the machine learning literature. While current methods perform well at providing instance-wise explanations, they struggle to efficiently and accurately attribute importance over long time horizons and in the presence of complex feature interactions. We propose WinIT, a framework for evaluating feature importance in time series prediction by quantifying the shift in the predictive distribution over multiple instances within a window. Comprehensive empirical evidence shows our method improves on the previous state-of-the-art, FIT, by capturing temporal dependencies in feature importance. We also demonstrate how our method improves the attribution of features within individual time steps, something existing interpretability methods often fail to do. We compare with baselines on simulated and real-world clinical data. WinIT achieves 2.47x better performance than FIT and other feature importance methods on the real-world clinical MIMIC mortality task. The code for this work is available at https://github.com/layer6ai-labs/WinIT.
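To make the windowed importance idea in the abstract concrete, the following is a minimal sketch of how a shift in the predictive distribution over a window of future predictions could be measured for a single observation. It is not the exact WinIT estimator from the paper: the `model` callable, the counterfactual resampling from the feature's past, the KL-based shift measure, and the mean aggregation over the window are all simplifying assumptions made for illustration.

```python
import numpy as np


def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete predictive distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))


def windowed_importance(model, x, feature, t, window, n_samples=25, rng=None):
    """Score the observation (feature, t) by how much the model's predictive
    distribution shifts at each step in [t, t + window) when that observation
    is replaced by counterfactual draws. `x` has shape (n_features, n_steps)
    and `model` maps a (n_features, horizon) prefix to a probability vector.
    This is a generic sketch, not the estimator defined in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = []
    for delta in range(window):
        horizon = t + delta + 1
        if horizon > x.shape[1]:
            break
        p_full = model(x[:, :horizon])          # prediction with the observation kept
        shift = 0.0
        for _ in range(n_samples):
            x_cf = x.copy()
            # Replace the observation with a draw from its own empirical past
            # (a simple stand-in for a learned counterfactual generator).
            x_cf[feature, t] = rng.choice(x[feature, :max(t, 1)])
            q_cf = model(x_cf[:, :horizon])     # prediction with the observation removed
            shift += kl_divergence(p_full, q_cf)
        scores.append(shift / n_samples)
    # Aggregate the per-step shifts over the window (simple mean here).
    return float(np.mean(scores))
```

In practice, `model` would be the trained sequential predictor, and the scores for all (feature, time) pairs would form the importance map reported in the experiments; the choice of counterfactual generator and aggregation over the window are the key design decisions the framework studies.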