Model compression, such as pruning and quantization, has been widely applied to optimize neural networks on resource-limited classical devices. Recently, there has been growing interest in variational quantum circuits (VQCs), a type of neural network for quantum computers (a.k.a. quantum neural networks, QNNs). It is well known that near-term quantum devices suffer from high noise and limited resources (i.e., quantum bits, or qubits); yet, how to compress quantum neural networks has not been thoroughly studied. One might think it is straightforward to apply classical compression techniques to the quantum scenario. However, this paper reveals that there are differences between the compression of quantum and classical neural networks. Based on our observations, we claim that compilation/transpilation has to be involved in the compression process. On top of this, we propose the first systematic framework, namely CompVQC, to compress QNNs. The key component of CompVQC is a novel compression algorithm based on the alternating direction method of multipliers (ADMM) approach. Experiments demonstrate the advantage of CompVQC, reducing the circuit depth (by almost 2.5×) with a negligible accuracy drop (<1%), outperforming other competitors. Moreover, CompVQC can indeed improve the robustness of QNNs on near-term noisy quantum devices.
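The abstract references an ADMM-based compression algorithm. As a rough sketch of the generic ADMM pruning template (not CompVQC's actual formulation, which involves compilation/transpilation), the toy Python example below prunes a vector of VQC rotation angles under a hard sparsity constraint; the quadratic loss, the helper names (`loss_grad`, `project_sparse`, `admm_prune`), and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: fit VQC rotation angles `theta` to a target vector.
# In a real QNN, f(theta) would be the task loss of the quantum circuit.
rng = np.random.default_rng(0)
target = rng.normal(size=12)

def loss_grad(theta):
    # Gradient of the toy loss f(theta) = 0.5 * ||theta - target||^2.
    return theta - target

def project_sparse(v, k):
    # Keep the k largest-magnitude angles; zero the rest (pruned gates).
    z = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    z[idx] = v[idx]
    return z

def admm_prune(theta, k, rho=1.0, lr=0.1, steps=200):
    # ADMM template: min f(theta) + g(z) s.t. theta = z,
    # where g is the indicator function of the k-sparse set.
    z = project_sparse(theta, k)
    u = np.zeros_like(theta)  # scaled dual variable
    for _ in range(steps):
        # theta-update: gradient step on f(theta) + (rho/2)||theta - z + u||^2
        theta = theta - lr * (loss_grad(theta) + rho * (theta - z + u))
        z = project_sparse(theta + u, k)  # z-update: Euclidean projection
        u = u + theta - z                 # dual update
    return project_sparse(theta, k)       # final hard pruning

theta0 = rng.normal(size=12)
pruned = admm_prune(theta0, k=6)
print("nonzero angles:", np.count_nonzero(pruned))
```

Zeroed angles correspond to removable rotation gates, which is how sparsity in the parameters can translate into a shallower compiled circuit.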