In this paper, to balance the data/model privacy of model owners against the needs of users, we propose a new setting called Back-Propagated Black-Box Adaptation (BPBA), in which users train their private models under the guidance of the back-propagated results of a black-box foundation/source model. Our setting eases the use of foundation/source models while preventing their leakage and misuse. Moreover, we propose a new training strategy called Bootstrap The Original Latent (BTOL) to fully utilize the foundation/source models. Our strategy consists of a domain adapter and a freeze-and-thaw strategy. We apply BTOL under both the BPBA and black-box UDA settings on three different datasets. Experiments show that our strategy is efficient and robust across settings without manual augmentations.
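To make the setting concrete, below is a minimal sketch of a freeze-and-thaw loop with a domain adapter, assuming a PyTorch-style setup. The module definitions, the entropy objective, and the alternation schedule are illustrative assumptions, not the paper's exact implementation; in BPBA the black-box model would be queried remotely and only its back-propagated gradients returned to the user, which the frozen local stand-in below simulates.

```python
# Hedged sketch of a freeze-and-thaw loop with a domain adapter (assumptions noted inline).
import torch
import torch.nn as nn

class DomainAdapter(nn.Module):
    """Hypothetical adapter mapping user-domain features toward the source domain."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

user_model = nn.Linear(32, 64)       # stand-in for the user's private model
adapter = DomainAdapter(dim=64)
black_box = nn.Linear(64, 10)        # stand-in for the black-box foundation/source model
set_requires_grad(black_box, False)  # its weights are never updated or exposed

opt_user = torch.optim.Adam(user_model.parameters(), lr=1e-3)
opt_adapter = torch.optim.Adam(adapter.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(8, 32)                   # unlabeled user-domain batch (dummy data)
    train_adapter = (step // 10) % 2 == 1    # assumed alternation schedule

    # Freeze one component, thaw the other (the "freeze-and-thaw" alternation).
    set_requires_grad(user_model, not train_adapter)
    set_requires_grad(adapter, train_adapter)

    feats = adapter(user_model(x))
    logits = black_box(feats)  # gradients flow back through the frozen black-box weights
    # Placeholder unsupervised objective: entropy minimization on black-box predictions.
    loss = -(logits.softmax(-1) * logits.log_softmax(-1)).sum(-1).mean()

    opt = opt_adapter if train_adapter else opt_user
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point this sketch captures is that the black-box model only relays back-propagated signals: its parameters are never trained or revealed, while the adapter and the user's private model take turns updating against its feedback.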