Privacy and memory are two recurring themes in a broad conversation about the societal impact of AI. These concerns arise from the need for huge amounts of data to train deep neural networks. A promise of Generalized Few-shot Object Detection (G-FSOD), a learning paradigm in AI, is to alleviate the need for collecting abundant training samples of novel classes we wish to detect by leveraging prior knowledge from old classes (i.e., base classes). G-FSOD strives to learn these novel classes while alleviating catastrophic forgetting of the base classes. However, existing approaches assume that the base images are accessible, an assumption that does not hold when sharing and storing data is problematic. In this work, we propose the first data-free knowledge distillation (DFKD) approach for G-FSOD that leverages the statistics of the region of interest (RoI) features from the base model to forge instance-level features without accessing the base images. Our contribution is three-fold: (1) we design a standalone lightweight generator with (2) class-wise heads (3) to generate and replay diverse instance-level base features to the RoI head while finetuning on the novel data. This stands in contrast to standard DFKD approaches in image classification, which invert the entire network to generate base images. Moreover, we make careful design choices in the novel finetuning pipeline to regularize the model. We show that our approach can dramatically reduce the base memory requirements, all while setting a new standard for G-FSOD on the challenging MS-COCO and PASCAL-VOC benchmarks.
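The generator described above can be pictured with a minimal NumPy sketch. Everything below is illustrative, not the paper's implementation: the sizes are toy values, the weights are untrained, and names such as `forge_features` and `moment_loss` are assumptions. It shows the core idea of a shared trunk with class-wise heads producing instance-level features, plus a loss that matches the generated features' moments to the stored per-class RoI statistics of the base model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, not from the paper).
NUM_BASE_CLASSES, NOISE_DIM, FEAT_DIM = 3, 16, 32

# Per-class RoI feature statistics saved from the base model;
# random stand-ins here for the real stored means and stds.
class_mu = rng.normal(size=(NUM_BASE_CLASSES, FEAT_DIM))
class_sigma = rng.uniform(0.5, 1.5, size=(NUM_BASE_CLASSES, FEAT_DIM))

# Standalone lightweight generator: one shared layer plus one head per class.
W_shared = rng.normal(scale=0.1, size=(NOISE_DIM, 64))
W_heads = rng.normal(scale=0.1, size=(NUM_BASE_CLASSES, 64, FEAT_DIM))

def forge_features(class_id, batch=8):
    """Forge instance-level base features for one base class from noise."""
    z = rng.normal(size=(batch, NOISE_DIM))
    h = np.tanh(z @ W_shared)        # shared trunk
    return h @ W_heads[class_id]     # class-wise head

def moment_loss(feats, class_id):
    """Penalize mismatch with the stored base-model RoI statistics."""
    mu_err = feats.mean(axis=0) - class_mu[class_id]
    sigma_err = feats.std(axis=0) - class_sigma[class_id]
    return float(np.sum(mu_err**2) + np.sum(sigma_err**2))

feats = forge_features(class_id=0, batch=8)
loss = moment_loss(feats, class_id=0)
```

In training, minimizing `moment_loss` would drive the generator toward features whose statistics mimic the base classes, so they can be replayed to the RoI head during novel finetuning without any base images.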