In this paper, we present MixFaceNets, a set of extremely efficient, high-throughput models for accurate face verification, inspired by mixed depthwise convolutional kernels. Extensive experimental evaluations on Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and the IARPA Janus Benchmarks IJB-B and IJB-C show the effectiveness of our MixFaceNets for applications requiring extremely low computational complexity. At the same level of computational complexity (< 500M FLOPs), our MixFaceNets outperform MobileFaceNets on all evaluated datasets, achieving 99.60% accuracy on LFW, 97.05% accuracy on AgeDB-30, 93.60 TAR (at FAR=1e-6) on MegaFace, 90.94 TAR (at FAR=1e-4) on IJB-B, and 93.08 TAR (at FAR=1e-4) on IJB-C. With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve results comparable to the top-ranked models while using significantly fewer FLOPs and less computational overhead, which demonstrates the practical value of the proposed MixFaceNets. All training code, pre-trained models, and training logs are publicly available at https://github.com/fdbtrs/mixfacenets.
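To make the notion of mixed depthwise convolutional kernels concrete, the following is a minimal sketch, assuming PyTorch: the channels of a feature map are split into groups and each group is processed by a depthwise convolution with a different kernel size, then the outputs are concatenated. The class name, channel split, and kernel sizes (3, 5, 7) are illustrative assumptions, not the exact MixFaceNets configuration.

```python
# Minimal MixConv-style block (illustrative sketch, not the official MixFaceNets code).
import torch
import torch.nn as nn

class MixConv(nn.Module):
    """Split channels into groups; apply a different depthwise kernel size per group."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Divide the channel dimension as evenly as possible across kernel sizes.
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c, bias=False)  # depthwise conv
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(chunk) for conv, chunk in zip(self.convs, chunks)], dim=1)

# Usage example on a dummy 64-channel feature map.
if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    print(MixConv(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Because each group uses a depthwise (per-channel) convolution, the block mixes receptive-field sizes at roughly the cost of a single depthwise layer, which is what keeps the FLOP budget low.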