Self-supervised learning has recently emerged as a strong alternative to supervised training in document analysis. These approaches are now capable of learning high-quality image representations and overcoming the limitations of supervised methods, which require large amounts of labeled data. However, these methods are unable to acquire new knowledge incrementally, where data is presented to the model sequentially, a setting that is closer to realistic scenarios. In this paper, we explore the potential of continual self-supervised learning to alleviate the catastrophic forgetting problem in handwritten text recognition, as an example of sequence recognition. Our method consists of adding intermediate layers, called adapters, for each task, and efficiently distilling knowledge from the previous model while learning the current task. The proposed framework is efficient in terms of both computation and memory. To demonstrate its effectiveness, we evaluate our method by transferring the learned model to diverse text recognition downstream tasks, including Latin and non-Latin scripts. To the best of our knowledge, this is the first application of continual self-supervised learning to handwritten text recognition. We attain state-of-the-art performance on English, Italian and Russian scripts, whilst adding only a few parameters per task. The code and trained models will be publicly available.
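The abstract names two mechanisms: per-task adapter layers and knowledge distillation from the previous model. The sketch below illustrates, in PyTorch, one plausible reading of that combination; the module structure, bottleneck size, and feature-level MSE distillation loss are assumptions for illustration, not the paper's exact architecture or objective.

```python
# Minimal sketch (PyTorch) of per-task adapters plus distillation from the
# previous model. All names and sizes here are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Small residual bottleneck added for each new task."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only these few parameters are trained for the current task.
        return x + self.up(F.relu(self.down(x)))


class ContinualEncoder(nn.Module):
    """Shared backbone (stand-in for the self-supervised encoder) with one adapter per task."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.adapters = nn.ModuleList()  # grows by one adapter per task

    def add_task(self, dim: int = 256) -> None:
        self.adapters.append(Adapter(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        if len(self.adapters) > 0:
            h = self.adapters[-1](h)  # adapter of the current task
        return h


def distillation_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    # Feature distillation: keep the new representation close to the frozen
    # previous model's output, so earlier knowledge is not forgotten.
    return F.mse_loss(student_feat, teacher_feat.detach())


# Usage sketch: learn task t while distilling from the frozen model of task t-1.
model = ContinualEncoder()
model.add_task()                          # task 0
previous = copy.deepcopy(model).eval()    # frozen snapshot before the new task
model.add_task()                          # task 1 adds a new, lightweight adapter

x = torch.randn(8, 256)                   # stand-in for encoder inputs
loss = distillation_loss(model(x), previous(x))
loss.backward()
```

In practice this distillation term would be combined with the self-supervised objective of the current task; only the adapter (a few parameters per task) needs to be stored, which is where the memory efficiency claimed above comes from.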