The Quantum Convolutional Neural Network (QCNN) is a quantum circuit model inspired by the architecture of Convolutional Neural Networks (CNNs). CNNs are successful because they do not need manual feature design and can learn high-level features from raw data. Neural Architecture Search (NAS) builds on this success by additionally learning the network architecture, achieving state-of-the-art performance. However, NAS requires the design of a search space, which is currently not possible for QCNNs because no formal framework exists to capture their design elements. In this work, we provide such a framework by using techniques from NAS to create a hierarchical representation for QCNN architectures. Using this framework, we generate a family of popular QCNNs, those resembling reverse binary trees. We then evaluate this family of models on a music genre classification dataset, GTZAN, showing that alternating the architecture has a greater impact on model performance than other modelling components, such as the choice of unitary ansatz and data encoding. Our framework provides a way to improve model performance without increasing complexity and to jump around the cost landscape to avoid barren plateaus. Finally, we implement the framework as an open-source Python package to enable dynamic QCNN creation and facilitate QCNN search space design for NAS.
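To make the reverse-binary-tree structure concrete, the following is a minimal, illustrative sketch (not the paper's package or its API) of such a QCNN written with PennyLane: qubits are paired for a two-qubit "convolution" unitary, half of the qubits are "pooled" away in each layer, and the circuit is read out on the single remaining qubit. The specific two-qubit ansatze, the angle encoding, and the parameter shapes are assumptions made for illustration only.

```python
# Illustrative reverse-binary-tree QCNN sketch (assumed ansatze, not the paper's).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

def conv(params, wires):
    # Two-qubit "convolution" ansatz applied to a pair of qubits.
    qml.RY(params[0], wires=wires[0])
    qml.RY(params[1], wires=wires[1])
    qml.CNOT(wires=wires)

def pool(param, wires):
    # "Pooling" ansatz: a controlled rotation; the control qubit is then
    # simply left idle in later layers, mimicking pooling without
    # mid-circuit measurement.
    qml.CRZ(param, wires=wires)

@qml.qnode(dev)
def qcnn(features, params):
    qml.AngleEmbedding(features, wires=range(n_qubits))  # data encoding
    active = list(range(n_qubits))
    layer = 0
    while len(active) > 1:  # halve the active qubits each layer
        pairs = [(active[i], active[i + 1]) for i in range(0, len(active) - 1, 2)]
        for (a, b) in pairs:
            conv(params[layer, :2], wires=[a, b])
            pool(params[layer, 2], wires=[a, b])
        active = [b for (_, b) in pairs]  # keep one qubit per pair
        layer += 1
    return qml.expval(qml.PauliZ(active[0]))  # single-qubit readout

features = np.random.uniform(0, np.pi, n_qubits)
params = np.random.uniform(0, 2 * np.pi, (int(np.log2(n_qubits)), 3))
print(qcnn(features, params))
```

With eight qubits the sketch has log2(8) = 3 alternating convolution-pooling layers; in the representation described above it is this layer pattern, rather than the particular two-qubit unitaries, that constitutes the architectural choice being varied.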