Speech data is expensive to collect and highly sensitive to its sources. Organizations often independently collect small datasets for their own use, but these are frequently insufficient for the demands of machine learning. Organizations could pool these datasets and jointly build a strong ASR system; sharing data in the clear, however, carries tremendous risk, both in loss of intellectual property and in loss of privacy for the individuals represented in the dataset. In this paper, we offer a potential solution for training an ML model across multiple organizations with mathematical guarantees that bound privacy loss. We use a Federated Learning approach built on a strong foundation of Differential Privacy techniques. We apply these to a senone classification prototype and demonstrate that the model improves with the addition of private data while still respecting privacy.