This work proposes a model for continual learning on tasks involving temporal sequences, specifically human motions. It improves on a recently proposed brain-inspired replay model (BI-R) by building a biologically inspired conditional temporal variational autoencoder (BI-CTVAE), which instantiates a latent mixture of Gaussians for class representation. We investigate a novel continual-learning-to-generate (CL2Gen) scenario in which the model generates motion sequences of different classes. The generative accuracy of the model is tested over a set of tasks. After sequentially learning all action classes, the final classification accuracy of BI-CTVAE on a human motion dataset is 78%, which is 63% higher than using no replay and only 5.4% lower than a state-of-the-art, offline-trained GRU model.
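To make the class representation concrete, the sketch below illustrates a class-conditional mixture-of-Gaussians latent prior of the kind the abstract describes: each action class owns one Gaussian component in latent space, and generative replay samples latents from a past class's component so a decoder can regenerate motions of that class. All names, dimensions, and the fixed unit-variance choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES, LATENT_DIM = 4, 8  # hypothetical sizes, not from the paper

# One Gaussian component per action class: a learned mean per class
# (here random for illustration) with identity covariance.
class_means = rng.normal(0.0, 2.0, size=(N_CLASSES, LATENT_DIM))

def sample_latent(class_id: int, n: int = 1) -> np.ndarray:
    """Draw latent codes z ~ N(mu_c, I) for the given class component."""
    return class_means[class_id] + rng.normal(size=(n, LATENT_DIM))

# Generative replay: sample latents for a previously learned class; a decoder
# conditioned on these z's would then synthesize motion sequences of that class.
z = sample_latent(class_id=2, n=5)
print(z.shape)  # (5, 8)
```

Keeping one component per class lets replay target specific past classes directly, which is what allows a class-conditional generator to rehearse old tasks without stored data.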