Generative modeling using samples drawn from a probability distribution constitutes a powerful approach for unsupervised machine learning. Quantum mechanical systems can produce probability distributions that exhibit quantum correlations which are difficult to capture using classical models. We show theoretically that such quantum correlations provide a powerful resource for generative modeling. In particular, we provide an unconditional proof of separation in expressive power between a class of widely used generative models, known as Bayesian networks, and its minimal quantum extension. We show that this expressivity advantage is associated with quantum nonlocality and quantum contextuality. Furthermore, we numerically test this separation on standard machine learning data sets and show that it holds for practical problems. The possibility of quantum advantage demonstrated in this work not only sheds light on the design of useful quantum machine learning protocols but also provides inspiration for drawing on ideas from quantum foundations to improve purely classical algorithms.
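To make the classical baseline concrete, the following is a minimal sketch of a Bayesian network as a generative model: a directed acyclic graph whose joint distribution factorizes into conditional probability tables, sampled by drawing parents before children (ancestral sampling). The two-node network and its probability values are hypothetical illustrations, not taken from the paper.

```python
import random

# Toy Bayesian network with two binary nodes, A -> B.
# Conditional probability tables (hypothetical values for illustration).
p_a = 0.6                       # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.9}  # P(B = 1 | A = a)

def sample():
    """Ancestral sampling: draw each node after its parents."""
    a = int(random.random() < p_a)
    b = int(random.random() < p_b_given_a[a])
    return a, b

def joint(a, b):
    """Exact joint probability P(A=a, B=b) = P(A=a) * P(B=b | A=a)."""
    pa = p_a if a == 1 else 1 - p_a
    pb = p_b_given_a[a] if b == 1 else 1 - p_b_given_a[a]
    return pa * pb

# The factorized joint is a normalized distribution over all outcomes.
total = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
print(round(total, 10))  # -> 1.0
```

The quantum extension studied in the paper replaces such classical conditional distributions with measurements on quantum states, yielding correlations (nonlocal, contextual) that no factorization of this classical form can reproduce.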