Adversarial robustness and generalization are both crucial properties of reliable machine learning models. In this paper, we study these properties in the context of quantum machine learning based on Lipschitz bounds. We derive parameter-dependent Lipschitz bounds for quantum models with trainable encoding, showing that the norm of the data encoding has a crucial impact on the robustness against data perturbations. Further, we derive a bound on the generalization error which explicitly involves the parameters of the data encoding. Our theoretical findings give rise to a practical strategy for training robust and generalizable quantum models by regularizing the Lipschitz bound in the cost function. Moreover, we show that, for fixed and non-trainable encodings, such as those frequently employed in quantum machine learning, the Lipschitz bound cannot be influenced by tuning the parameters. Thus, trainable encodings are crucial for systematically adapting robustness and generalization during training. The practical implications of our theoretical findings are illustrated with numerical results.
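To give a rough, hedged illustration of the regularization strategy summarized above, the following minimal sketch trains a toy single-qubit model with a trainable encoding weight and adds a penalty on that weight's magnitude, used here as a stand-in for a parameter-dependent Lipschitz bound, to the training cost. The PennyLane-style circuit, the specific penalty term, and the hyperparameter `lam` are illustrative assumptions, not the exact model or bound from the paper.

```python
# Minimal, illustrative sketch (not the paper's exact model or bound):
# a single-qubit quantum model with a trainable encoding, trained with
# a penalty on the encoding weight as a proxy for the Lipschitz bound.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def model(x, params):
    w, b, theta = params[0], params[1], params[2]
    # Trainable encoding: the data x enters through w * x + b, so the
    # model's sensitivity to input perturbations scales with |w|.
    qml.RZ(w * x + b, wires=0)
    # Trainable processing rotation (independent of the data).
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(params, X, Y, lam=0.1):
    # Mean-squared error plus a Lipschitz-bound-style regularizer ~ |w|.
    mse = 0.0
    for x, y in zip(X, Y):
        mse = mse + (model(x, params) - y) ** 2
    return mse / len(X) + lam * np.abs(params[0])

# Toy regression data and a few gradient-descent steps.
X = np.linspace(-1.0, 1.0, 10, requires_grad=False)
Y = np.sin(np.pi * X)
params = np.array([1.0, 0.0, 0.1], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(50):
    params = opt.step(lambda p: cost(p, X, Y), params)
```

Increasing `lam` shrinks the encoding weight and hence the model's effective Lipschitz constant, trading training fit for robustness to input perturbations; with a fixed, non-trainable encoding the penalty term would be constant and could not influence training, mirroring the observation in the abstract.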