Deep neural networks have rapidly become the mainstream method for face recognition. However, deploying models that contain an extremely large number of parameters on embedded devices or in application scenarios with a limited memory footprint is challenging. In this work, we present an extremely lightweight and accurate face recognition solution. We utilize neural architecture search to develop a new family of face recognition models, namely PocketNet. We also propose to enhance the verification performance of the compact model through a novel training paradigm based on knowledge distillation, namely multi-step knowledge distillation. We present an extensive experimental evaluation and comparisons with recent compact face recognition models on nine different benchmarks, including large-scale evaluation benchmarks such as IJB-B, IJB-C, and MegaFace. PocketNets have consistently advanced the state-of-the-art (SOTA) face recognition performance on the nine mainstream benchmarks when considering the same level of model compactness. With 0.92M parameters, our smallest network, PocketNetS-128, achieved highly competitive results compared to recent SOTA compact models that contain more than 4M parameters. Training code and pre-trained models are publicly released at https://github.com/fdbtrs/PocketNet.
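To make the knowledge-distillation component of the training paradigm concrete, the following is a minimal sketch of a distillation objective for face embeddings. It assumes a standard setup in which a compact student is guided by a larger teacher; all function names, the embedding-distance formulation, and the loss weighting are illustrative assumptions, not the exact formulation used for PocketNet, and in the multi-step variant described above the teacher signal would additionally be taken at multiple stages of the teacher's training rather than only from the fully trained teacher.

```python
import torch
import torch.nn.functional as F

def kd_embedding_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    """Distance between L2-normalized student and teacher face embeddings.
    Illustrative distillation term; the exact loss is defined in the paper."""
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    return ((s - t) ** 2).sum(dim=1).mean()

def total_loss(student_emb: torch.Tensor,
               teacher_emb: torch.Tensor,
               logits: torch.Tensor,
               labels: torch.Tensor,
               lambda_kd: float = 1.0) -> torch.Tensor:
    # Identity-classification loss on the student plus a distillation term.
    # lambda_kd is a hypothetical weighting factor; in multi-step knowledge
    # distillation, teacher_emb would come from intermediate teacher
    # checkpoints during training, not only the final teacher model.
    cls = F.cross_entropy(logits, labels)
    kd = kd_embedding_loss(student_emb, teacher_emb.detach())
    return cls + lambda_kd * kd
```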