Recently, the vision transformer (ViT) has started to outpace conventional CNNs in computer vision tasks. For privacy-preserving distributed learning with ViT, federated learning (FL) communicates entire models, which is ill-suited to ViT's large model size and computing costs. Split learning (SL) sidesteps this by communicating smashed data at a cut-layer, yet suffers from data privacy leakage and large communication costs caused by the high similarity between ViT's smashed data and its input data. Motivated by this problem, we propose DP-CutMixSL, a differentially private (DP) SL framework built on DP patch-level randomized CutMix (DP-CutMix), a novel privacy-preserving inter-client interpolation scheme that replaces randomly selected patches in smashed data. Experimentally, we show that DP-CutMixSL not only boosts privacy guarantees and communication efficiency, but also achieves higher accuracy than its Vanilla SL counterpart. Theoretically, we show that DP-CutMix amplifies Rényi DP (RDP), with a guarantee upper-bounded by that of its Vanilla Mixup counterpart.
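To make the patch-level interpolation concrete, the following is a minimal sketch of the core CutMix step the abstract describes: replacing a random subset of patches (ViT tokens) in one client's smashed data with the corresponding patches from another client's. The function name, the mixing ratio, and the array layout are our own illustrative assumptions, and the sketch omits the calibrated randomization that yields the actual DP guarantee.

```python
import numpy as np

def patch_cutmix(smashed_a, smashed_b, ratio=0.5, rng=None):
    """Illustrative patch-level CutMix on cut-layer activations.

    smashed_a, smashed_b: arrays of shape (num_patches, embed_dim),
    one per client. A fraction `ratio` of A's patches is replaced
    by B's patches at the same positions. (Hypothetical sketch;
    the paper's DP mechanism involves more than this swap.)
    """
    rng = np.random.default_rng() if rng is None else rng
    num_patches = smashed_a.shape[0]
    k = int(round(ratio * num_patches))
    # Randomly pick k patch positions without replacement.
    idx = rng.choice(num_patches, size=k, replace=False)
    mixed = smashed_a.copy()
    mixed[idx] = smashed_b[idx]  # swap in client B's patches
    return mixed, idx

# Toy example: 16 patches with 8-dim embeddings per client.
a = np.zeros((16, 8))
b = np.ones((16, 8))
mixed, idx = patch_cutmix(a, b, ratio=0.25, rng=np.random.default_rng(0))
```

With `ratio=0.25` and 16 patches, exactly 4 patch rows of the mixed output come from client B; the rest remain client A's.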