The remarkable development of deep learning in the medicine and healthcare domains raises obvious privacy concerns when deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, and biomedical images. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief network (CDBN) under differential privacy. Our main idea for enforcing epsilon-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is the use of Chebyshev expansion to derive approximate polynomial representations of the objective functions. Our theoretical analysis shows that we can further derive the sensitivity and error bounds of the approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model to a health social network, i.e., the YesiWell data, and to a handwritten digit dataset, i.e., the MNIST data, for human behavior prediction, human behavior classification, and handwritten digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective and significantly outperforms existing solutions.
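To make the abstract's recipe concrete, the following is a minimal sketch of the two steps it describes: approximating a nonlinear term of an energy-based objective by a low-degree Chebyshev polynomial, and then perturbing the polynomial coefficients with Laplace noise in the spirit of the functional mechanism. This is an illustration only; the softplus term, the degree, and the `sensitivity` value are placeholders rather than the bounds derived in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)

# Softplus appears in the free energy of (C)RBM-style models; it stands in
# here for the nonlinear term of the energy-based objective.
def softplus(x):
    return np.log1p(np.exp(x))

# 1) Chebyshev approximation of the nonlinear term on a bounded domain.
#    Inputs are assumed to be normalized into [-1, 1].
xs = np.cos(np.pi * (np.arange(64) + 0.5) / 64)   # Chebyshev nodes on [-1, 1]
coeffs = C.chebfit(xs, softplus(xs), deg=3)       # low-degree polynomial surrogate

# 2) Functional-mechanism-style perturbation: add Laplace noise to the
#    polynomial coefficients. `sensitivity` is a placeholder; the paper
#    derives the actual sensitivity bound for the approximate objective.
epsilon = 1.0
sensitivity = 1.0
noisy_coeffs = coeffs + rng.laplace(scale=sensitivity / epsilon, size=coeffs.shape)

# The perturbed polynomial is then optimized in place of the original objective.
x = np.linspace(-1, 1, 5)
print("approx:", C.chebval(x, coeffs))
print("noisy :", C.chebval(x, noisy_coeffs))
```

Because the noise is injected into the coefficients of the surrogate objective rather than into the trained model's outputs, any model trained on the perturbed objective inherits the privacy guarantee, which is the point of perturbing objective functions "rather than their results."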