Quantized neural networks (NNs) are the common standard for efficiently deploying deep learning models on tiny hardware platforms. However, we observe that quantized NNs are as vulnerable to adversarial attacks as their full-precision counterparts. With the proliferation of neural networks on the small devices that we carry or that surround us, there is a need for efficient models that do not sacrifice trust in the prediction in the presence of malicious perturbations. Current mitigation approaches often require adversarial training or are bypassed when the strength of adversarial examples is increased. In this work, we investigate how a probabilistic framework can help overcome these limitations for quantized deep learning models. We explore Stochastic-Shield: a flexible defense mechanism that leverages input filtering and a probabilistic deep learning approach materialized via Monte Carlo Dropout. We show that it is possible to jointly achieve efficiency and robustness by appropriately enabling each module, without the burden of re-training or ad hoc fine-tuning.
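As a rough illustration of the Monte Carlo Dropout component mentioned above, the sketch below shows how multiple stochastic forward passes can be aggregated into a mean prediction and an uncertainty estimate. This is a minimal sketch assuming PyTorch; the model architecture, dropout rate, and number of samples are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal Monte Carlo Dropout inference sketch (assumption: PyTorch;
# SmallNet, p_drop, and num_samples are hypothetical choices for illustration).
import torch
import torch.nn as nn


class SmallNet(nn.Module):
    def __init__(self, num_classes=10, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Dropout(p=p_drop),  # kept stochastic at inference for MC Dropout
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


def mc_dropout_predict(model, x, num_samples=30):
    """Run several stochastic forward passes with dropout active and return
    the mean softmax probabilities and their variance as an uncertainty signal."""
    model.train()  # keeps dropout layers sampling; no gradient updates are made
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )
    return probs.mean(dim=0), probs.var(dim=0)


if __name__ == "__main__":
    model = SmallNet()
    x = torch.randn(4, 1, 28, 28)  # dummy batch of MNIST-sized inputs
    mean_probs, var_probs = mc_dropout_predict(model, x)
    print(mean_probs.shape, var_probs.shape)  # torch.Size([4, 10]) for both
```

A high predictive variance across the sampled passes can then be used to flag inputs whose predictions should not be trusted, which is the role such a module would play alongside input filtering in a defense of this kind.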