Deep learning models have recently demonstrated remarkable results on a variety of tasks, which is why they are increasingly being applied in high-stakes domains such as industry, medicine, and finance. Since automated predictions in these domains may substantially affect a person's well-being and carry considerable financial and legal consequences for an individual or a company, all actions and decisions that result from applying these models must be accountable. Given that a substantial amount of the data collected in high-stakes domains comes in the form of time series, in this paper we examine the current state of eXplainable AI (XAI) methods, with a focus on approaches for opening up deep learning black boxes for the task of time series classification. Finally, our contribution also aims to derive promising directions for future work, to advance XAI for deep learning on time series data.