In this paper, we generalize the single-index model to the setting of continual learning, in which a learner faces a sequence of tasks one by one and the dataset of each task is revealed in an online fashion. We propose a randomized strategy that learns a single index (meta-parameter) common to all tasks and a specific link function for each task. The common index allows the learner to transfer information gained from previous tasks to a new one. We provide a rigorous theoretical analysis of the proposed strategy by proving regret bounds under different assumptions on the loss function.
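For reference, the shared single-index structure described above can be sketched as follows; the notation ($w$, $f_t$, $x$, $y$) is illustrative and not taken from the abstract itself.
\[
y \;\approx\; f_t\big(\langle w, x\rangle\big), \qquad t = 1, 2, \dots,
\]
where $w \in \mathbb{R}^d$ is the index vector (meta-parameter) shared by all tasks and $f_t:\mathbb{R}\to\mathbb{R}$ is the link function specific to task $t$; transfer across tasks comes from reusing the common $w$ while learning each $f_t$ separately.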