Quantum machine learning (QML) can complement the growing trend of using learned models for a myriad of classification tasks, from image recognition to natural speech processing. A quantum advantage arises due to the intractability of quantum operations on a classical computer. Many datasets used in machine learning are crowdsourced or contain some private information. To the best of our knowledge, no current QML models are equipped with privacy-preserving features, which raises concerns as it is paramount that models do not expose sensitive information. Thus, privacy-preserving algorithms need to be implemented with QML. One solution is to make the machine learning algorithm differentially private, meaning the effect of any single data point in the training dataset on the trained model is minimized. Differentially private machine learning models have been investigated, but differential privacy has yet to be studied in the context of QML. In this study, we develop a hybrid quantum-classical model that is trained to preserve privacy using a differentially private optimization algorithm. This marks the first proof-of-principle demonstration of privacy-preserving QML. The experiments demonstrate that differentially private QML can protect user-sensitive information without diminishing model accuracy. Although the quantum model is simulated and tested on a classical computer, it demonstrates potential to be efficiently implemented on near-term noisy intermediate-scale quantum (NISQ) devices. The success of the approach is illustrated via the classification of spatially separated two-dimensional datasets and a binary MNIST classification. This implementation of privacy-preserving QML will ensure confidentiality and accurate learning on NISQ technology.
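To make the training procedure concrete, the following is a minimal sketch (not the authors' implementation) of a differentially private optimization step for a hybrid quantum-classical classifier, assuming the PennyLane library is available. It follows the standard DP-SGD recipe of per-example gradient clipping plus calibrated Gaussian noise; the circuit layout, clipping norm `clip`, noise multiplier `sigma`, and the toy dataset are illustrative assumptions, not values from the paper.

```python
# Sketch: DP-SGD training of a small variational quantum classifier.
# Assumes PennyLane; all hyperparameters below are illustrative.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode the 2-D input as rotation angles, then apply one
    # trainable entangling layer; read out a Pauli-Z expectation.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def loss(weights, x, y):
    # Squared error between the expectation value and the +/-1 label.
    return (circuit(weights, x) - y) ** 2

def dp_sgd_step(weights, batch_x, batch_y, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD step: clip each per-example gradient to norm `clip`,
    add Gaussian noise with std `sigma * clip`, then average."""
    grad_fn = qml.grad(loss, argnum=0)
    summed = np.zeros_like(weights)
    for x, y in zip(batch_x, batch_y):
        g = grad_fn(weights, x, y)
        scale = np.maximum(1.0, np.linalg.norm(g) / clip)
        summed = summed + g / scale          # per-example clipping
    noise = sigma * clip * np.random.normal(size=weights.shape)
    return weights - lr * (summed + noise) / len(batch_x)

# Toy usage on a two-class dataset with spatially separated classes.
np.random.seed(0)
X = np.random.normal(size=(32, n_qubits), requires_grad=False)
Y = np.where(X[:, 0] > 0, 1.0, -1.0)         # label: sign of first feature
weights = np.random.normal(size=(1, n_qubits), requires_grad=True)
for _ in range(20):
    weights = dp_sgd_step(weights, X, Y)
```

The privacy guarantee comes entirely from the clip-then-noise step on the classical gradients, so the same update rule applies whether the circuit is simulated classically, as in this study, or executed on NISQ hardware.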