The increasing complexity of algorithms for analyzing medical data, including those used for de-identification, raises the possibility that such algorithms learn not only a general representation of the task but also specifics of individual patients in the data. Modern legal frameworks explicitly prohibit the intentional or accidental distribution of patient data, but they have not yet addressed this potential avenue for leakage of protected health information. Modern deep learning algorithms carry the highest risk of such leakage because of the complexity of the models. Recent research has highlighted these issues for non-medical data, but any such analysis is likely to be specific to the data and algorithm involved. We therefore analyzed a state-of-the-art LSTM (Long Short-Term Memory) based free-text de-identification algorithm for its potential to encode individual records in the training set. Using the i2b2 Challenge Data, we trained the model and then assessed whether its output, taken before the final classification (compression) layer, could be used to infer membership in the training data. We further attacked the model with several methods, including a membership inference attack. The attacks could not distinguish members of the training data from non-members based on the model output. This indicates that the model does not provide strong evidence for identifying individuals in the training data set, and there is as yet no empirical evidence that distributing the model for general use is unsafe.
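To make the attack setting concrete, the following is a minimal sketch of the kind of membership inference attack described above, assuming the de-identification model's pre-classifier outputs have been extracted as fixed-length feature vectors, one per record. The feature arrays and variable names here are illustrative stand-ins, not the actual data or code from this study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the features used in the study: fixed-length
# vectors derived from the LSTM's output before the final classification
# (compression) layer, one vector per record.
rng = np.random.default_rng(0)
member_features = rng.normal(size=(500, 64))      # records seen during training
nonmember_features = rng.normal(size=(500, 64))   # held-out records

X = np.vstack([member_features, nonmember_features])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = member, 0 = non-member

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# The attack model: a simple classifier that tries to separate members
# from non-members using only the target model's outputs.
attack = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = attack.predict_proba(X_test)[:, 1]

# An AUC near 0.5 means the attack cannot distinguish members from
# non-members, which is the outcome reported in the abstract.
print("attack AUC:", roc_auc_score(y_test, scores))
```

A more faithful attack would replace the random stand-in features with the target model's actual outputs on member and non-member records, but the evaluation logic (train an attack classifier, measure how far its AUC rises above chance) is the same.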