Variational quantum algorithms are expected to demonstrate the advantage of quantum computing on near-term noisy quantum computers. However, training such variational quantum algorithms suffers from vanishing gradients as the size of the algorithm increases. Previous work cannot handle the gradient vanishing induced by the unavoidable noise on realistic quantum hardware. In this paper, we propose a novel training scheme to mitigate such noise-induced gradient vanishing. We first introduce a new cost function whose gradients are significantly augmented by employing traceless observables in a truncated subspace. We then prove that the same minimum of the original cost function can be reached by optimizing with the gradients of the new cost function. Experiments show that our new training scheme is highly effective on major variational quantum algorithms across a variety of tasks.