Face recognition has achieved significant progress in the deep-learning era, driven by ultra-large-scale, well-labeled datasets. However, training on such datasets is time-consuming and consumes substantial hardware resources, so designing an effective and efficient training approach is crucial. The heavy computational and memory costs mainly result from the high dimensionality of the Fully-Connected (FC) layer. Specifically, the dimensionality is determined by the number of face identities, which can reach the million level or beyond. To this end, we propose a novel training approach for ultra-large-scale face datasets, termed Faster Face Classification (F$^2$C). In F$^2$C, we first define a Gallery Net and a Probe Net, which generate identity centers and extract face features for recognition, respectively. Gallery Net has the same structure as Probe Net and inherits its parameters through a moving-average paradigm. Then, to reduce the training time and hardware cost of the FC layer, we propose a Dynamic Class Pool (DCP) that stores features from Gallery Net and computes the inner products (logits) with positive samples (whose identities are in the DCP) in each mini-batch. The DCP can be regarded as a substitute for the FC layer, but it is far smaller, greatly reducing computational and memory costs. For negative samples (whose identities are not in the DCP), we minimize the cosine similarities between them and the features stored in the DCP. Finally, to improve the update efficiency and speed of the DCP's parameters, we design Dual Loaders, comprising an Identity-based Loader and an Instance-based Loader, to load identities and instances when generating training batches.
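The two core mechanisms above, the moving-average parameter transfer from Probe Net to Gallery Net and the DCP serving inner-product logits in place of a full FC layer, can be sketched as follows. This is a minimal illustration under assumed details (the momentum value, pool size, and L2 normalization of features are assumptions, not specifics from the paper):

```python
import numpy as np

def ema_update(gallery_params, probe_params, momentum=0.999):
    """Gallery Net inherits Probe Net parameters with a moving average.
    The momentum value 0.999 is an assumed hyperparameter."""
    return {k: momentum * gallery_params[k] + (1.0 - momentum) * probe_params[k]
            for k in gallery_params}

class DynamicClassPool:
    """Hypothetical sketch of the DCP: a small pool of identity centers
    (features from Gallery Net) used as a substitute for the FC layer."""

    def __init__(self, dim, pool_size, seed=0):
        rng = np.random.default_rng(seed)
        # Pool of L2-normalized identity centers; far smaller than an
        # FC layer over millions of identities.
        self.centers = rng.standard_normal((pool_size, dim))
        self.centers /= np.linalg.norm(self.centers, axis=1, keepdims=True)

    def logits(self, feats):
        """Inner products between normalized batch features and pool centers,
        i.e. cosine similarities used as classification logits."""
        feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        return feats @ self.centers.T
```

For a positive sample, the logit at its identity's row in the pool would be maximized by the classification loss; for a negative sample (identity absent from the pool), all of its cosine similarities to the pool centers would be pushed down.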