Multi-party machine learning is a paradigm in which multiple participants collaboratively train a machine learning model toward a common learning objective without sharing their privately owned data. The paradigm has recently received considerable attention from the research community, much of it aimed at addressing its associated privacy concerns. In this work, we address the concerns of data privacy, model privacy, and data quality in privacy-preserving multi-party machine learning: we present a privacy-preserving collaborative learning scheme that checks participants' data quality while guaranteeing data and model privacy. In particular, we propose a novel metric called weight similarity, which is securely computed and used to determine whether a participant holds good-quality data and can therefore be categorized as reliable. Model and data privacy are protected by integrating homomorphic encryption into our scheme and having participants upload encrypted weights, which prevents leakage to the server and to malicious participants, respectively. Analytical and experimental evaluations demonstrate that our scheme is accurate and preserves both data and model privacy.
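As a rough illustration of how a weight-similarity check could gate participants, consider the minimal sketch below. It assumes the metric is a cosine similarity between a participant's weight update and a reference weight vector (e.g., the current global model) compared against a fixed threshold; the metric definition, the reference choice, and the threshold are illustrative assumptions, and unlike the actual scheme the computation here is shown in plaintext rather than over homomorphically encrypted weights.

```python
# Illustrative sketch of a weight-similarity reliability check.
# Assumptions (not from the abstract): cosine similarity as the metric,
# the global model as the reference, a fixed threshold, and plaintext
# computation; the actual scheme evaluates this securely on encrypted weights.
import numpy as np

def weight_similarity(participant_weights: np.ndarray,
                      reference_weights: np.ndarray) -> float:
    """Cosine similarity between two flattened weight vectors."""
    p = participant_weights.ravel()
    r = reference_weights.ravel()
    return float(np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r) + 1e-12))

def is_reliable(participant_weights: np.ndarray,
                reference_weights: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Categorize a participant as reliable if its update aligns with the reference."""
    return weight_similarity(participant_weights, reference_weights) >= threshold
```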