Federated learning has attracted considerable interest for collaborative machine learning in healthcare, as it leverages separate institutional datasets while maintaining patient privacy. However, additional challenges such as poor calibration and lack of interpretability may hamper the widespread deployment of federated models into clinical practice and lead to user distrust or misuse of ML tools in high-stakes clinical decision-making. In this paper, we propose to address these challenges by incorporating an adaptive conformal framework into federated learning, producing distribution-free prediction sets that provide coverage guarantees and uncertainty estimates without requiring any modifications to the model or additional assumptions. Empirical results on the MedMNIST medical imaging benchmark demonstrate that our federated method provides tighter coverage with lower average prediction-set cardinality than local conformal predictions across six different 2D and 3D multi-class classification datasets. Further, we correlate class entropy with prediction set size to assess task uncertainty with conformal methods.
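To make the conformal construction concrete, the following is a minimal NumPy sketch of adaptive prediction sets (APS, Romano et al., 2020) built from softmax outputs of a trained classifier. The function names and the simple split-calibration setup are illustrative assumptions, not the paper's exact adaptive variant or its federated aggregation scheme.

```python
import numpy as np

def aps_calibrate(cal_probs, cal_labels, alpha=0.1):
    """Compute the conformal quantile from calibration softmax scores.

    cal_probs: (n, K) softmax probabilities; cal_labels: (n,) true classes.
    Returns qhat such that prediction sets built with it achieve at least
    1 - alpha marginal coverage.
    """
    n = len(cal_labels)
    # Sort class probabilities in descending order per example.
    order = np.argsort(-cal_probs, axis=1)
    sorted_probs = np.take_along_axis(cal_probs, order, axis=1)
    cum = np.cumsum(sorted_probs, axis=1)
    # Position (rank) of the true label within each sorted row.
    true_rank = np.where(order == cal_labels[:, None])[1]
    # APS score: cumulative mass up to and including the true class.
    scores = cum[np.arange(n), true_rank]
    # Finite-sample-corrected quantile level.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def aps_predict(test_probs, qhat):
    """Return a boolean (m, K) mask: one prediction set per test example."""
    order = np.argsort(-test_probs, axis=1)
    sorted_probs = np.take_along_axis(test_probs, order, axis=1)
    cum = np.cumsum(sorted_probs, axis=1)
    # Include classes until the cumulative mass reaches qhat;
    # the top-probability class is always included.
    keep_sorted = cum - sorted_probs < qhat
    sets = np.zeros_like(keep_sorted)
    # Unsort the mask back to the original class ordering.
    np.put_along_axis(sets, order, keep_sorted, axis=1)
    return sets
```

In a federated setting, one natural option is for each institution to score its own held-out calibration split and share only scores or quantiles with the server; the specific aggregation rule that yields the reported federated coverage is the paper's contribution and is not reproduced in this sketch.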