Continual learning enables neural networks to evolve by learning new tasks sequentially in task-changing scenarios. However, two general and related challenges must be overcome before this technique can be applied to real-world applications. First, novelties newly collected from an application's data stream may contain anomalies that are meaningless for continual learning. Rather than treating them as a new task for updating, such anomalies must be filtered out so that extremely high-entropy data does not disturb convergence. Second, little effort has been devoted to the explainability of continual learning, which leaves the updated neural networks lacking transparency and credibility. Elaborated explanations of the process and results of continual learning can help experts judge them and make decisions. We therefore propose the conceptual design of an explainability module with experts in the loop, built on techniques such as dimensionality reduction, visualization, and evaluation strategies. This work aims to overcome the above challenges by thoroughly explaining and visualizing both the identified anomalies and the updated neural network. With the help of this module, experts can decide more confidently on anomaly filtering, dynamic adjustment of hyperparameters, data backup, and so on.
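To make the filtering step concrete, the following is a minimal sketch of entropy-based anomaly screening, assuming a PyTorch classifier that outputs logits; the fixed threshold `tau` and the function name are illustrative assumptions, not part of the proposed module.

```python
import torch
import torch.nn.functional as F

def filter_high_entropy(model, batch, tau=2.0):
    """Split an incoming batch into low-entropy samples (kept for the
    continual-learning update) and high-entropy samples (flagged as
    anomaly candidates for expert review)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=-1)
        # Predictive entropy per sample: H(p) = -sum_c p_c * log p_c
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    keep = entropy <= tau  # illustrative fixed threshold; could be tuned by experts
    return batch[keep], batch[~keep], entropy
```

In an expert-in-the-loop setting, the flagged samples and their entropy scores would be surfaced to the expert rather than silently discarded, so the threshold itself can be adjusted dynamically.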
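As one possible realization of the dimensionality-reduction and visualization techniques mentioned above, the sketch below projects penultimate-layer features from before and after an update into a shared 2-D space; the use of PCA, the function name, and the two-panel layout are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def plot_embedding_shift(feats_before, feats_after, labels):
    """Project features extracted before and after a continual-learning
    update into a shared 2-D space so experts can inspect how the
    representation of each class has moved."""
    pca = PCA(n_components=2)
    # Fit one projection on both feature sets so the panels are comparable
    joint = pca.fit_transform(np.vstack([feats_before, feats_after]))
    n = len(feats_before)
    fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)
    for ax, pts, title in zip(axes, (joint[:n], joint[n:]),
                              ("before update", "after update")):
        ax.scatter(pts[:, 0], pts[:, 1], c=labels, s=5, cmap="tab10")
        ax.set_title(title)
    plt.show()
```

Fitting a single projection on the concatenated features, rather than one per snapshot, keeps the two panels in the same coordinate system, which is what makes the before/after comparison meaningful for an expert.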