We introduce two quantum algorithms for solving structured prediction problems. We first show that a stochastic gradient descent algorithm that uses the quantum minimum finding algorithm, and accounts for its probabilistic failures, solves the structured prediction problem with a runtime that scales with the square root of the size of the label space, and as $\widetilde O\left(1/\epsilon\right)$ with respect to the precision, $\epsilon$, of the solution. Motivated by robust inference techniques in machine learning, we then introduce a second quantum algorithm that solves a smooth approximation of the structured prediction problem with a similar quantum speedup in the size of the label space and a similar scaling in the precision parameter. In doing so, we analyze a variant of stochastic gradient descent for convex optimization in the presence of an additive error in the calculation of the gradients, and show that its convergence rate does not deteriorate if the additive errors are of order $O(\sqrt{\epsilon})$. This algorithm uses quantum Gibbs sampling at temperature $\Omega(\epsilon)$ as a subroutine. Based on these theoretical observations, we propose a method for using quantum Gibbs samplers to combine feedforward neural networks with probabilistic graphical models for quantum machine learning. Our numerical results, based on Monte Carlo simulations of an image tagging task, demonstrate the benefits of the approach.
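To make the smoothing step concrete, the following is a minimal classical sketch (not the paper's quantum subroutine) of the standard log-sum-exp relaxation that underlies Gibbs-sampling-based smoothing: the hard maximum over labels is replaced by a free energy at inverse temperature $\beta = O(1/\epsilon)$, and the gradient weights form the corresponding Gibbs distribution, which a quantum Gibbs sampler would be used to estimate. All variable names and constants here are illustrative assumptions.

```python
import numpy as np

# Sketch of the log-sum-exp smoothing of a maximum over a finite label space.
# The Gibbs distribution computed below is the object a quantum Gibbs sampler
# at temperature O(eps) would sample from; here it is computed exactly.

rng = np.random.default_rng(1)

scores = rng.standard_normal(100)   # illustrative score f(y) for each label y
eps = 1e-2                          # target precision
beta = 1.0 / eps                    # temperature O(eps) <=> inverse temperature O(1/eps)

# Smooth max: (1/beta) * log sum_y exp(beta * f(y)), computed stably
# by shifting with the max score before exponentiating.
m = scores.max()
smooth_max = m + np.log(np.exp(beta * (scores - m)).sum()) / beta

# Gibbs distribution over labels: the softmax weights at inverse temperature beta.
gibbs = np.exp(beta * (scores - m))
gibbs /= gibbs.sum()

print(f"hard max = {scores.max():.4f}, smooth max = {smooth_max:.4f}")
# The smoothing error is bounded by log|Y| / beta, i.e., O(eps * log|Y|):
print(f"gap bound log|Y|/beta = {np.log(scores.size) / beta:.4f}")
```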
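The robustness claim for inexact gradients can likewise be illustrated with a small experiment. The sketch below (again illustrative, not the paper's algorithm) runs projected-style SGD on a strongly convex quadratic while corrupting each gradient with an adversarial-scale additive error of norm $\sqrt{\epsilon}$, and checks that the final suboptimality still reaches order $\epsilon$.

```python
import numpy as np

# Sketch: SGD on a strongly convex quadratic f(w) = 0.5 * w^T H w with
# additive gradient errors of norm sqrt(eps). The minimizer is w* = 0,
# so f(w_T) directly measures the suboptimality.

rng = np.random.default_rng(0)

d = 10
A = rng.standard_normal((d, d))
H = A.T @ A / d + np.eye(d)         # positive definite, min eigenvalue >= 1

eps = 1e-4                          # target precision
err_norm = np.sqrt(eps)             # additive gradient error of order O(sqrt(eps))

w = rng.standard_normal(d)
T = 20_000
for t in range(1, T + 1):
    grad = H @ w                    # exact gradient of f
    noise = rng.standard_normal(d)
    noise *= err_norm / np.linalg.norm(noise)  # error of norm exactly sqrt(eps)
    eta = 1.0 / (t + 10)            # standard O(1/t) step size for strong convexity
    w = w - eta * (grad + noise)

f_val = 0.5 * w @ H @ w
print(f"f(w_T) - f(w*) = {f_val:.2e} (target eps = {eps:.0e})")
```

Intuitively, errors of norm $\sqrt{\epsilon}$ confine the iterates to an $O(\sqrt{\epsilon})$ neighborhood of the minimizer, and the quadratic growth of the objective translates this into an $O(\epsilon)$ suboptimality, matching the scaling stated in the abstract.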