Healthcare AI needs large, diverse datasets, yet strict privacy and governance constraints prevent raw data from being shared across institutions. Federated learning (FL) mitigates this by training where the data reside and exchanging only model updates, but practical deployments still face two core risks: (1) privacy leakage through gradients or updates (membership inference, gradient inversion) and (2) reliance on the aggregator, a single point of failure that can drop, alter, or inject contributions undetected. We present zkFL-Health, an architecture that combines FL with zero-knowledge proofs (ZKPs) and Trusted Execution Environments (TEEs) to deliver privacy-preserving, verifiably correct collaborative training for medical AI. Clients train locally and commit to their updates; the aggregator operates inside a TEE to compute the global update and produces a succinct ZK proof (via Halo2/Nova) that it used exactly the committed inputs and the correct aggregation rule, without revealing any client update to the host. Verifier nodes validate the proof and record cryptographic commitments on-chain, providing an immutable audit trail and removing the need to trust any single party. We outline a system and threat model tailored to healthcare, the zkFL-Health protocol, its security and privacy guarantees, and a performance evaluation plan spanning accuracy, privacy risk, latency, and cost. The framework enables multi-institutional medical AI with strong confidentiality, integrity, and auditability, properties that are key to clinical adoption and regulatory compliance.
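
To make the commit-aggregate-verify flow concrete, the sketch below walks through one round in plain Python. It is an illustration only, not the paper's implementation: SHA-256 hash commitments and a transcript of commitments stand in for the Pedersen-style commitments and the succinct Halo2/Nova proof, and all function and field names (`commit`, `aggregate_with_transcript`, `verify`) are hypothetical.

```python
# Minimal sketch of one zkFL-Health-style round: clients commit, the
# aggregator (the TEE in the real design) averages committed updates and
# emits a transcript, verifiers check the transcript against commitments.
# Hash commitments and the transcript are stand-ins for real commitments
# and the succinct ZK proof; names are illustrative, not from the paper.
import hashlib
import json
from typing import Dict, List


def commit(update: List[float]) -> str:
    """Client side: commit to a local model update before submitting it."""
    return hashlib.sha256(json.dumps(update).encode()).hexdigest()


def aggregate_with_transcript(updates: List[List[float]],
                              commitments: List[str]) -> Dict:
    """Aggregator side: check inputs against commitments, average them
    (FedAvg with equal weights), and bind inputs to the output.
    A real deployment would emit a ZK proof of this relation instead of
    exposing a transcript computed outside zero knowledge."""
    assert all(commit(u) == c for u, c in zip(updates, commitments)), \
        "an update does not match its commitment"
    n = len(updates)
    global_update = [sum(col) / n for col in zip(*updates)]
    return {
        "input_commitments": commitments,
        "global_update": global_update,           # broadcast to clients anyway
        "global_commitment": commit(global_update),
    }


def verify(transcript: Dict, onchain_commitments: List[str]) -> bool:
    """Verifier node: confirm the transcript references exactly the
    commitments recorded on-chain and that the published global update
    matches its own commitment. The real protocol would verify the ZK
    proof here rather than trust the transcript."""
    return (transcript["input_commitments"] == onchain_commitments
            and commit(transcript["global_update"]) == transcript["global_commitment"])


if __name__ == "__main__":
    client_updates = [[0.1, -0.2, 0.3], [0.3, 0.0, -0.1]]
    onchain = [commit(u) for u in client_updates]          # commitments recorded on-chain
    transcript = aggregate_with_transcript(client_updates, onchain)
    print("round verified:", verify(transcript, onchain))
```

In the architecture described above, the client updates never leave the TEE in the clear; the proof convinces verifiers that the published global update was computed from exactly the committed inputs, which is the property the hash-check in this sketch only approximates.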

