Several families of continual learning techniques have been proposed to alleviate catastrophic interference when deep neural networks are trained on non-stationary data. However, a comprehensive comparison and analysis of their limitations remains largely open due to the lack of suitable datasets. Empirical evaluation not only varies immensely between individual works, but also currently relies on contrived benchmarks composed by subdividing and concatenating various prevalent static vision datasets. In this work, our goal is to bridge this gap by introducing a computer graphics simulation framework that repeatedly renders only upcoming urban scene fragments in an endless, real-time procedural world generation process. At its core lies a modular parametric generative model with adaptable generative factors. The latter can be used to flexibly compose data streams, which significantly facilitates detailed analysis and allows for effortless investigation of various continual learning schemes.