Face recognition has achieved significant progress in the deep-learning era thanks to ultra-large-scale, well-labeled datasets. However, training on such datasets is time-consuming and consumes substantial hardware resources, so designing an appropriate training approach is crucial. The computational and hardware cost of training on ultra-large-scale datasets is dominated by the Fully-Connected (FC) layer rather than the convolutional layers. To this end, we propose a novel training approach for ultra-large-scale face datasets, termed Faster Face Classification (F$^2$C). In F$^2$C, we first define a Gallery Net and a Probe Net, which generate identity centers and extract face features for recognition, respectively. Gallery Net has the same structure as Probe Net and inherits its parameters via a moving-average paradigm. Then, to reduce the training time and hardware occupancy of the FC layer, we propose the Dynamic Class Pool (DCP), which stores features from Gallery Net and computes inner products (logits) with the positive samples (those whose identities appear in the DCP) in each mini-batch. The DCP can be regarded as a substitute for the FC layer, and because its size is much smaller than that of the FC layer, it largely reduces the time and resource cost. For negative samples (those whose identities do not appear in the DCP), we minimize the cosine similarities between their features and the DCP. Finally, to improve the efficiency and speed of updating the DCP's parameters, we design Dual Loaders, comprising an Identity-based Loader and an Instance-based Loader, which load images from the given dataset by identity and by instance to generate training batches.
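As a concrete illustration of the Gallery/Probe pairing and the Dynamic Class Pool, the sketch below gives a minimal PyTorch-style training step. It is a reading of the abstract, not the authors' released code: the stand-in backbone, pool size, momentum coefficient, temperature, the `refresh_dcp` helper, and the exact form of the negative-sample cosine loss are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

feat_dim, pool_size, momentum = 256, 1024, 0.999  # illustrative sizes

# Stand-in backbones; the paper uses full CNNs with identical structure.
probe_net = torch.nn.Linear(512, feat_dim)
gallery_net = torch.nn.Linear(512, feat_dim)
gallery_net.load_state_dict(probe_net.state_dict())
for p in gallery_net.parameters():
    p.requires_grad_(False)  # Gallery Net is updated by moving average only

# Dynamic Class Pool: a small feature bank standing in for the full FC layer.
dcp = F.normalize(torch.randn(pool_size, feat_dim), dim=1)
dcp_ids = torch.full((pool_size,), -1, dtype=torch.long)  # identity per slot

@torch.no_grad()
def momentum_update():
    # Gallery Net inherits Probe Net's parameters with a moving average.
    for g, p in zip(gallery_net.parameters(), probe_net.parameters()):
        g.mul_(momentum).add_(p, alpha=1.0 - momentum)

@torch.no_grad()
def refresh_dcp(images, labels, slots):
    # Store Gallery Net features for the batch identities in their DCP slots.
    dcp[slots] = F.normalize(gallery_net(images), dim=1)
    dcp_ids[slots] = labels

def f2c_loss(images, labels, temperature=0.07):
    feats = F.normalize(probe_net(images), dim=1)
    logits = feats @ dcp.t()  # inner products (logits) against the pool
    pos_mask = labels.unsqueeze(1) == dcp_ids.unsqueeze(0)
    is_pos = pos_mask.any(dim=1)

    loss = logits.new_zeros(())
    if is_pos.any():
        # Positives: classify against the matching DCP slot.
        targets = pos_mask[is_pos].float().argmax(dim=1)
        loss = loss + F.cross_entropy(logits[is_pos] / temperature, targets)
    if (~is_pos).any():
        # Negatives: minimize cosine similarity to the pool (one plausible form).
        loss = loss + logits[~is_pos].mean()
    return loss
```

Because the loss touches only `pool_size` columns instead of a full per-identity FC weight matrix, both the matrix multiply and the stored parameters shrink accordingly, which is the claimed source of the speed and memory savings.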
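The Dual Loaders can likewise be sketched as samplers over the same dataset. The `IdentitySampler` below is a hypothetical illustration of identity-based loading (grouping several images per identity into a batch so their DCP slots can be refreshed together); the instance-based loader corresponds to ordinary shuffled per-image sampling, which a default PyTorch `DataLoader` already provides.

```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class IdentitySampler(Sampler):
    """Yield dataset indices grouped by identity (identity-based loading).

    Use with DataLoader(batch_size=ids_per_batch * imgs_per_id) so each
    batch covers `ids_per_batch` identities. Parameter names are illustrative.
    """

    def __init__(self, labels, ids_per_batch=8, imgs_per_id=4):
        self.by_id = defaultdict(list)
        for idx, lab in enumerate(labels):
            self.by_id[lab].append(idx)
        self.ids_per_batch = ids_per_batch
        self.imgs_per_id = imgs_per_id

    def __iter__(self):
        ids = list(self.by_id)
        random.shuffle(ids)
        for i in range(0, len(ids), self.ids_per_batch):
            for ident in ids[i:i + self.ids_per_batch]:
                pool = self.by_id[ident]
                # Sample up to imgs_per_id images of this identity.
                yield from random.sample(pool, min(self.imgs_per_id, len(pool)))

    def __len__(self):
        return sum(min(self.imgs_per_id, len(v)) for v in self.by_id.values())
```

Running the identity-based and instance-based loaders side by side means every identity in a batch arrives with enough images to refresh its DCP slot promptly, rather than waiting for random instance sampling to revisit it.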