We develop an algorithm for adapting a semantic segmentation model, trained on a labeled source domain, to generalize well in an unlabeled target domain. A similar problem has been studied extensively in the unsupervised domain adaptation (UDA) literature, but existing UDA algorithms require access to both the labeled source data and the unlabeled target data in order to train a domain-agnostic semantic segmentation model. Relaxing this constraint enables a user to adapt a pretrained model to generalize in a target domain without requiring access to the source data. To this end, we learn a prototypical distribution for the source domain in an intermediate embedding space. This distribution encodes the abstract knowledge learned from the source domain. We then use this distribution to align the target domain distribution with the source domain distribution in the embedding space. We provide a theoretical analysis and explain the conditions under which our algorithm is effective. Experiments on benchmark adaptation tasks demonstrate that our method achieves competitive performance, even compared with joint UDA approaches.
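The core idea sketched in the abstract (a learned prototypical distribution per class in the embedding space, then aligning target features to it without touching source data) can be illustrated with a minimal numerical sketch. The helper names (`fit_class_prototypes`, `alignment_loss`), the per-class diagonal-Gaussian prototype, and the pseudo-label-based alignment loss are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np


def fit_class_prototypes(embeddings, labels, num_classes):
    """Estimate a per-class diagonal Gaussian (mean, variance) over source
    embeddings. Only these prototypes, not the source data, are kept for
    adaptation. (Illustrative stand-in for the paper's prototypical
    distribution.)"""
    protos = {}
    for c in range(num_classes):
        feats = embeddings[labels == c]
        # Small epsilon keeps the variance strictly positive.
        protos[c] = (feats.mean(axis=0), feats.var(axis=0) + 1e-6)
    return protos


def alignment_loss(target_embeddings, pseudo_labels, protos):
    """Mahalanobis-style distance of target embeddings to the prototype of
    their (pseudo-labeled) class; minimizing it pulls the target feature
    distribution toward the source prototypes."""
    total = 0.0
    for feat, c in zip(target_embeddings, pseudo_labels):
        mean, var = protos[c]
        total += np.mean((feat - mean) ** 2 / var)
    return total / len(target_embeddings)
```

In a full pipeline, `alignment_loss` would be differentiated with respect to the feature extractor's parameters; here plain NumPy is used only to show that well-aligned target features score lower than shifted ones.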