TREs are widely and increasingly used to support statistical analysis of sensitive data across a range of sectors (e.g., health, police, tax and education), as they enable secure and transparent research whilst protecting data confidentiality. There is an increasing desire from academia and industry to train AI models in TREs. The field of AI is developing quickly, with applications including spotting human errors, streamlining processes, automating tasks and supporting decisions. These complex AI models require more information to describe and reproduce, increasing the possibility that sensitive personal data can be inferred from such descriptions. TREs do not currently have mature processes and controls against these risks. This is a complex topic, and it is unreasonable to expect all TREs to be aware of all risks, or to expect that TRE researchers will have addressed these risks in AI-specific training. GRAIMATTER has developed a draft set of usable recommendations for TREs to guard against the additional risks that arise when trained AI models are disclosed from TREs. The development of these recommendations was funded by the GRAIMATTER UKRI DARE UK sprint research project. This version of our recommendations was published at the end of the project in September 2022. During the course of the project, we identified many areas for future investigation to expand and test these recommendations in practice; we therefore expect this document to evolve over time.