Many applications require neural networks with high accuracy, low latency, and user data privacy guarantees. Face anti-spoofing is one such task. However, a single model might not give the best results across different device performance categories, while training multiple models is time-consuming. In this work we present the Post-Train Adaptive (PTA) block. Such a block is simple in structure and offers a drop-in replacement for the MobileNetV2 Inverted Residual block. The PTA block has multiple branches with different computation costs. The branch to execute can be selected on demand and at runtime, thus offering different inference times and configuration capability for multiple device tiers. Crucially, the model is trained once and can easily be reconfigured after training, even directly on a mobile device. In addition, the proposed approach shows substantially better overall performance than the original MobileNetV2, as tested on the CelebA-Spoof dataset. Different PTA block configurations are sampled at training time, which also decreases the overall wall-clock time needed to train the model. While we present computational results for the anti-spoofing problem, MobileNetV2 with PTA blocks is applicable to any problem solvable with convolutional neural networks, which makes the presented results practically significant.
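The core mechanism described above (a block with several branches of different computational cost, with the active branch selectable at runtime without retraining) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, the `set_branch` API, and the toy branch functions are assumptions, and real branches would be convolutional sub-networks of differing depth and width.

```python
# Minimal sketch of the runtime-switchable branch idea behind a
# Post-Train Adaptive (PTA) block. Names and API are illustrative
# assumptions; actual branches would be conv sub-networks.

class PTABlock:
    def __init__(self, branches):
        # branches: callables ordered from cheapest to most expensive
        self.branches = branches
        self.active = 0  # default to the cheapest branch

    def set_branch(self, index):
        # Reconfigure after training, e.g. based on the device tier.
        self.active = index

    def __call__(self, x):
        # Only the selected branch is executed at inference time.
        return self.branches[self.active](x)


# Toy stand-ins for sub-networks of different cost.
cheap = lambda x: x + 1
costly = lambda x: (x + 1) * 2

block = PTABlock([cheap, costly])
low_tier = block(3)    # executes the cheap branch
block.set_branch(1)    # switch on demand, no retraining
high_tier = block(3)   # executes the costly branch
```

During training, the paper samples different PTA block configurations; in this sketch that would correspond to randomly calling `set_branch` per training step so every branch receives gradient updates.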