New knowledge originates from the old. The various elements deposited in the training history constitute a wealth of resources for improving deep learning models. In this survey, we comprehensively review and summarize the topic ``Historical Learning: Learning Models with Learning History'', which learns better neural models with the help of their learning histories during optimization, from three detailed aspects: Historical Type (what), Functional Part (where), and Storage Form (how). To the best of our knowledge, this is the first survey that systematically studies the methodologies that make use of various historical statistics when training deep neural networks. We also discuss related topics such as recurrent/memory networks, ensemble learning, and reinforcement learning. Finally, we expose future challenges of this topic and encourage the community to consider historical learning principles when designing algorithms. The paper list related to historical learning is available at \url{https://github.com/Martinser/Awesome-Historical-Learning}.