Securing neural networks (NNs) against model extraction and parameter exfiltration attacks is an important problem, primarily because modern NNs are expensive to design and train. We observe that no countermeasures (CMs) exist against recently proposed attacks on sparse NNs, and that no single CM effectively protects against all known attack types for both sparse and dense NNs. In this paper, we propose SparseLock, a comprehensive CM that protects against all types of attacks, including some very recently proposed ones for which no CM exists today. SparseLock relies on a novel compression algorithm and binning strategy; its security guarantees rest on the inherent hardness of the bin packing and inverse bin packing problems. A battery of statistical and information-theoretic tests shows that our architecture leaks very little information and that its side channels behave like random sources. In addition, SparseLock delivers a 47.13% performance improvement over the nearest competing secure architecture.