Addressing the healthcare challenges of COVID-19 requires frequent sharing of health data, knowledge, and resources at a global scale. In this digital age, however, data privacy is a major concern, and privacy assurance must be securely embedded into the design of every technological solution that uses health data. In this paper, we introduce the differential privacy by design (dPbD) framework and discuss its embedding into a federated machine learning system. To limit the scope of the paper, we focus on the problem scenario of COVID-19 imaging data privacy for disease diagnosis with computer vision and deep learning approaches. We discuss the evaluation of the proposed federated machine learning system design and show how the dPbD framework can enhance data privacy in federated learning systems with scalability and robustness. We argue that a scalable, differentially private federated learning design is a promising solution for building the secure, private, and collaborative machine learning models needed to combat the COVID-19 challenge.
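To make the core idea concrete, the sketch below shows one round of differentially private federated averaging: each client trains locally, clips its model update, and adds Gaussian noise before the server aggregates. This is a minimal illustration under assumed settings (a linear model, three hypothetical clients, arbitrary clipping and noise parameters), not the dPbD framework or the system design evaluated in this paper.

```python
# Minimal sketch of one round of differentially private federated averaging.
# Illustrative only: model, clients, and noise scale are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def client_update(global_w, X, y, lr=0.1, epochs=5):
    """Local least-squares gradient steps; returns the model delta."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w - global_w

def privatize(delta, clip_norm=1.0, noise_multiplier=1.0):
    """Clip the client's update and add Gaussian noise (Gaussian mechanism)."""
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=delta.shape)
    return clipped + noise

# Hypothetical setup: 3 clients (e.g., hospitals), 10 features each.
d = 10
global_w = np.zeros(d)
clients = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(3)]

# One federated round: each client trains locally, privatizes its update,
# and the server averages the noisy deltas into the global model.
deltas = [privatize(client_update(global_w, X, y)) for X, y in clients]
global_w += np.mean(deltas, axis=0)
print("updated global weights:", np.round(global_w, 3))
```

In a deployment, the noise multiplier and clipping norm would be chosen from the target privacy budget via a privacy accountant; here they are fixed constants purely for illustration.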