In the era of noisy intermediate-scale quantum (NISQ) devices, variational quantum circuits (VQCs) are one of the main strategies for building quantum machine learning models. These models consist of a quantum part and a classical part. The quantum part is a parametrization $U$, in general obtained as a product of different quantum gates. In turn, the classical part is an optimizer that updates the parameters of $U$ in order to minimize a cost function $C$. However, despite the many applications of VQCs, open questions remain, for example: What is the best sequence of gates to use? How should their parameters be optimized? Which cost function should be used? How does the architecture of the quantum chip influence the final results? In this article, we focus on the last question. We show that, in general, the cost function tends toward a typical average value the closer the parametrization is to a $2$-design. Therefore, the closer the parametrization is to a $2$-design, the less the result of the quantum neural network model depends on the particular parametrization. As a consequence, we can use the native architecture of the quantum chip itself to define the VQC parametrization, avoiding additional SWAP gates and thus reducing the VQC depth and the associated errors.
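The concentration effect described above can be illustrated numerically. Haar-random unitaries form an exact $2$-design, so sampling them gives the limiting behavior a deep VQC approaches: the cost $C = \langle 0|U^\dagger H U|0\rangle$ clusters ever more tightly around its average as the system grows. The sketch below is illustrative only (not the paper's method): it draws Haar-random unitaries with NumPy via the standard QR-decomposition construction and measures a single-qubit Pauli-$Z$ cost; the function names are hypothetical.

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR decomposition of a complex
    # Gaussian matrix, with phases fixed so the measure is exact.
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # rescale each column's phase

def cost_samples(n_qubits, n_samples=200, seed=0):
    """Cost C = <psi|Z_1|psi> for |psi> = U|0...0>, U Haar-random."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    # Observable: Pauli-Z on the first qubit, identity elsewhere.
    obs = np.kron(np.diag([1.0, -1.0]), np.eye(dim // 2))
    psi0 = np.zeros(dim)
    psi0[0] = 1.0
    costs = np.empty(n_samples)
    for k in range(n_samples):
        psi = haar_unitary(dim, rng) @ psi0
        costs[k] = np.real(np.vdot(psi, obs @ psi))
    return costs

for n in (2, 4, 6):
    c = cost_samples(n)
    print(f"{n} qubits: mean = {c.mean():+.3f}, variance = {c.var():.4f}")
```

For this traceless observable the average cost is $0$ and the variance over Haar-random states is $1/(2^n + 1)$, so the sample variance should shrink roughly fourfold with every two added qubits, mirroring the claim that a $2$-design parametrization makes the cost nearly independent of the chosen parameters.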