Creating artificial intelligence (AI) systems capable of demonstrating lifelong learning is a fundamental challenge, and many approaches and metrics have been proposed to analyze algorithmic properties. However, in existing lifelong learning metrics, algorithmic contributions are confounded with task and scenario structure. To mitigate this issue, we introduce an algorithm-agnostic, explainable surrogate-modeling approach to estimate latent properties of lifelong learning algorithms. We validate the approach for estimating these properties via experiments on synthetic data. To validate the structure of the surrogate model, we analyze real performance data from a collection of popular lifelong learning approaches and baselines adapted for lifelong classification and lifelong reinforcement learning.