Quantum Machine Learning (QML) models are aimed at learning from data encoded in quantum states. Recently, it has been shown that models with little to no inductive biases (i.e., with no assumptions about the problem embedded in the model) are likely to have trainability and generalization issues, especially for large problem sizes. As such, it is fundamental to develop schemes that encode as much information as is available about the problem at hand. In this work we present a simple, yet powerful, framework where the underlying invariances in the data are used to build QML models that, by construction, respect those symmetries. These so-called group-invariant models produce outputs that remain invariant under the action of any element of the symmetry group $\mathfrak{G}$ associated with the dataset. We present theoretical results underpinning the design of $\mathfrak{G}$-invariant models, and exemplify their application through several paradigmatic QML classification tasks, including cases where $\mathfrak{G}$ is a continuous Lie group and also where it is a discrete symmetry group. Notably, our framework allows us to recover, in an elegant way, several well-known algorithms from the literature, as well as to discover new ones. Taken together, we expect that our results will help pave the way towards a more geometric and group-theoretic approach to QML model design.
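To make the notion of a $\mathfrak{G}$-invariant output concrete, the following is a minimal numerical sketch (not the paper's construction) for the discrete case $\mathfrak{G} = S_2$ on two qubits: an observable that commutes with the SWAP representation of the group yields expectation values that are unchanged when the qubits are permuted.

```python
import numpy as np

# Single-qubit operators
I = np.eye(2)
Z = np.diag([1.0, -1.0])

# SWAP gate: the representation of the S_2 permutation group on two qubits
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# A symmetrized observable O = Z(x)I + I(x)Z commutes with SWAP, so the
# model output <psi|O|psi> is invariant under the group action by construction.
O = np.kron(Z, I) + np.kron(I, Z)
assert np.allclose(SWAP @ O @ SWAP.conj().T, O)

# Check invariance on a random (normalized) two-qubit state
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

val = np.real(psi.conj() @ O @ psi)
val_permuted = np.real((SWAP @ psi).conj() @ O @ (SWAP @ psi))
assert np.isclose(val, val_permuted)  # same output for the permuted state
```

An observable that does *not* commute with the group representation (e.g., $Z \otimes I$ alone) would generally give different outputs for $|\psi\rangle$ and $\mathrm{SWAP}\,|\psi\rangle$, which is the failure mode the invariant construction rules out.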