We study supervised learning algorithms in which a quantum device is used to perform a computational subroutine: either for prediction via probability estimation, or to compute a kernel via estimation of quantum state overlaps. We design implementations of these quantum subroutines using Boson Sampling architectures in linear optics, supplemented by adaptive measurements. We then challenge these quantum algorithms by deriving classical simulation algorithms for the tasks of output probability estimation and overlap estimation. We obtain different classical simulability regimes for these two computational tasks in terms of the number of adaptive measurements and input photons. In both cases, our results set explicit limits on the range of parameters for which a quantum advantage can be envisaged with adaptive linear optics compared to classical machine learning algorithms: we show that the number of input photons and the number of adaptive measurements cannot both be small compared to the number of modes. Interestingly, our analysis leaves open the possibility of a near-term quantum advantage with a single adaptive measurement.
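To make the kernel use case concrete, below is a minimal classical sketch (not the paper's implementation): a support vector classifier whose kernel entries are state overlaps |⟨φ(x)|φ(x′)⟩|², the quantity the adaptive linear-optics subroutine would estimate on hardware. The feature map `feature_state` is a hypothetical classical stand-in introduced here purely for illustration.

```python
# Minimal sketch of a kernel method built on state-overlap estimates.
# On a quantum device, overlap_kernel would be replaced by overlap
# estimation on the photonic architecture; here a classical stand-in
# feature map is used so the example runs end to end.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def feature_state(x, dim=8):
    """Hypothetical feature map: encode x as a normalized complex state vector."""
    phases = np.exp(1j * np.outer(x, np.arange(dim)).sum(axis=0))
    return phases / np.linalg.norm(phases)

def overlap_kernel(X1, X2):
    """Kernel matrix K[i, j] = |<phi(x_i)|phi(x_j)>|^2 (squared state overlap)."""
    S1 = np.array([feature_state(x) for x in X1])
    S2 = np.array([feature_state(x) for x in X2])
    return np.abs(S1.conj() @ S2.T) ** 2

# Toy data: two noisy classes in 2D.
X = rng.normal(size=(40, 2)) + np.repeat([[0.0, 0.0], [2.0, 2.0]], 20, axis=0)
y = np.repeat([0, 1], 20)

# Train an SVM on the precomputed overlap kernel.
clf = SVC(kernel="precomputed").fit(overlap_kernel(X, X), y)
print("train accuracy:", clf.score(overlap_kernel(X, X), y))
```

The squared overlap is a valid positive-semidefinite kernel, so any kernel machine accepting a precomputed Gram matrix can consume the quantum estimates directly.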