Good training data is a prerequisite for developing useful ML applications. However, in many domains existing data sets cannot be shared due to privacy regulations (e.g., data from medical studies). This work investigates a simple yet unconventional approach to anonymized data synthesis that enables third parties to benefit from such private data. We explore the feasibility of learning implicitly from unrealistic, task-relevant stimuli, which are synthesized by exciting the neurons of a trained deep neural network (DNN). As such, neuronal excitation serves as a pseudo-generative model. The stimuli data are used to train new classification models. Furthermore, we extend this framework to inhibit representations that are associated with specific individuals. We use sleep monitoring data from both an open and a large closed clinical study and evaluate whether (1) end-users can create and successfully use customized classification models for sleep apnea detection, and (2) the identity of the study participants is protected. An extensive comparative empirical investigation shows that different algorithms trained on the stimuli are able to generalize successfully on the same task as the original model. However, architectural and algorithmic similarity between the new and original models plays an important role in performance. For similar architectures, the performance is close to that of using the true data (e.g., an accuracy difference of 0.56\% and a Kappa coefficient difference of 0.03-0.04). Further experiments show that the stimuli can, to a large extent, successfully anonymize participants of the clinical studies.
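To make the core idea concrete, the following is a minimal sketch of how "neuronal excitation" could act as a pseudo-generative model: an input is optimized by gradient ascent so that it strongly activates a chosen output neuron of the trained DNN, and the resulting (stimulus, label) pairs serve as surrogate training data for new classifiers. The function and parameter names (model, target_class, input_shape) are illustrative assumptions, not the paper's actual implementation.

```python
import torch


def synthesize_stimuli(model, target_class, input_shape, steps=200, lr=0.1):
    """Synthesize a task-relevant stimulus by exciting one output neuron.

    `model` is an already-trained classifier; the returned tensor is an
    unrealistic input optimized to maximize the logit of `target_class`.
    """
    model.eval()
    x = torch.randn(1, *input_shape, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]  # ascend on the target neuron's activation
        loss.backward()
        optimizer.step()
    return x.detach(), target_class


# The collected (stimulus, label) pairs form an anonymized surrogate data set
# on which a new classification model can then be trained by a third party.
```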