The field of artificial neural networks is expected to benefit strongly from recent developments in quantum computing. In particular, quantum machine learning, a class of quantum algorithms that exploit qubits to create trainable neural networks, promises more power to solve problems such as pattern recognition, clustering and machine learning in general. The building block of feed-forward neural networks consists of one layer of neurons connected to an output neuron that is activated according to an arbitrary activation function. The corresponding learning algorithm goes under the name of the Rosenblatt perceptron. Quantum perceptrons with specific activation functions are known, but a general method to realize arbitrary activation functions on a quantum computer is still lacking. Here we fill this gap with a quantum algorithm that is capable of approximating any analytic activation function to any given order of its power series. Unlike previous proposals, which provide irreversible measurement-based or simplified activation functions, here we show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information. Thanks to the generality of this construction, any feed-forward neural network may acquire the universal approximation property according to Hornik's theorem. Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
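The core idea behind the construction — approximating an analytic activation function by truncating its power series, with accuracy controlled by the truncation order — can be sketched classically. The following is a hypothetical illustration (not the paper's quantum circuit), using the Maclaurin expansion of the logistic sigmoid:

```python
import math

# Maclaurin coefficients of the logistic sigmoid sigma(x) = 1/(1 + e^{-x}),
# for orders 0 through 7 (odd function about 1/2, so even orders > 0 vanish).
SIGMOID_COEFFS = [1/2, 1/4, 0, -1/48, 0, 1/480, 0, -17/80640]

def truncated_sigmoid(x, order):
    """Sum the power series of the sigmoid up to the given order."""
    return sum(c * x**k for k, c in enumerate(SIGMOID_COEFFS[:order + 1]))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The approximation error shrinks as the truncation order grows,
# mirroring the order-by-order accuracy of the quantum construction.
for order in (1, 3, 7):
    err = abs(truncated_sigmoid(0.5, order) - sigmoid(0.5))
    print(f"order {order}: |error| = {err:.2e}")
```

In the quantum setting the same truncated series is realized reversibly on the amplitudes, rather than by evaluating the polynomial on a measured value.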