Modern machine learning (ML) models are expensive intellectual property (IP), and business competitiveness often depends on keeping this IP confidential. This in turn restricts how these models are deployed; for example, it is unclear how to deploy a model on-device without inevitably leaking the underlying model. At the same time, confidential computing technologies such as multi-party computation or homomorphic encryption remain impractical for wide adoption. In this paper, we take a different approach and investigate the feasibility of ML-specific mechanisms that deter unauthorized model use by restricting the model to only be usable on specific hardware, making adoption on unauthorized hardware inconvenient. That way, even if the IP is compromised, it cannot be used trivially without specialised hardware or major model adjustment. In a sense, we seek to enable cheap \emph{locking of machine learning models into specific hardware}. We demonstrate that \emph{locking} mechanisms are feasible either by targeting the efficiency of model representations, making such models incompatible with quantization, or by tying the model's operation to specific characteristics of hardware, such as the number of clock cycles for arithmetic operations. We show that locking comes with negligible overheads, while significantly restricting the usability of the resulting model on unauthorized hardware.
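To make the quantization-incompatibility idea concrete, the following is a minimal, illustrative sketch (not the paper's actual mechanism): if the information a model relies on is encoded in weight differences smaller than the step size of a coarse quantization grid, rounding the weights destroys that information. The function names, the choice of symmetric uniform 4-bit quantization, and the toy weight layout are all assumptions made purely for illustration.

\begin{verbatim}
import numpy as np

def quantize_uniform(w, n_bits=4):
    # Symmetric uniform quantization to n_bits (illustrative, not the
    # paper's scheme): round weights onto a coarse, evenly spaced grid.
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

# Toy "locked" weights: the useful signal lives in tiny differences
# (delta) between otherwise identical weights, far below the grid step.
delta = 1e-3
w_locked = np.array([1.0, 1.0 + delta, -1.0, -1.0 - delta])

# An input that probes exactly those differences.
x = np.array([1.0, -1.0, -1.0, 1.0])

full_precision_out = np.dot(w_locked, x)                    # = -2*delta, non-zero
quantized_out = np.dot(quantize_uniform(w_locked), x)        # differences rounded away -> 0

print(f"full precision: {full_precision_out:+.6f}")   # carries a usable sign bit
print(f"4-bit quantized: {quantized_out:+.6f}")        # signal destroyed by rounding
\end{verbatim}

In this toy example the full-precision output still carries information (its sign), while the quantized model outputs exactly zero; the same intuition, applied at scale, makes a locked model unusable after the quantization typically needed to run it on other hardware.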