In this paper, to balance the data/model privacy of model owners against user needs, we propose a new setting called Back-Propagated Black-Box Adaptation (BPBA), in which users train their private models under the guidance of back-propagated results from a black-box foundation/source model. Our setting eases the use of foundation/source models while preventing their leakage and misuse. Moreover, we propose a new training strategy called Bootstrap The Original Latent (BTOL) to fully exploit foundation/source models. Our strategy consists of a domain adapter and a freeze-and-thaw schedule. We apply BTOL under both the BPBA and black-box UDA settings on three different datasets. Experiments show that our strategy is efficient and robust across settings without manual augmentations.