A thriving trend in domain adaptive segmentation is to generate high-quality pseudo labels for the target domain and retrain the segmentor on them. Under this self-training paradigm, some competitive methods have resorted to latent-space information, establishing feature centroids (a.k.a. prototypes) of the semantic classes and determining pseudo-label candidates by their distances from these centroids. In this paper, we argue that the latent space contains more information to be exploited, and we take a step further to capitalize on it. Firstly, instead of merely using source-domain prototypes to determine the target pseudo labels, as most traditional methods do, we bidirectionally produce target-domain prototypes to degrade those source features which might be too hard or disturbed for adaptation. Secondly, existing attempts simply model each category as a single, isotropic prototype while ignoring the variance of the feature distribution, which can lead to confusion between similar categories. To cope with this issue, we propose to represent each category with multiple, anisotropic prototypes via a Gaussian Mixture Model, so as to fit the de facto distribution of the source domain and estimate the likelihood of target samples based on the probability density. We apply our method to the GTA5->Cityscapes and Synthia->Cityscapes tasks and achieve 61.2 and 62.8 mean IoU respectively, substantially outperforming other competitive self-training methods. Notably, on categories which severely suffer from categorical confusion, such as "truck" and "bus", our method achieves 56.4 and 68.8 respectively, which further demonstrates the effectiveness of our design.
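To make the core idea concrete, the sketch below illustrates, with scikit-learn, how one might model each class with multiple anisotropic prototypes (a per-class GMM fitted on source features) and assign target pseudo labels by likelihood. It is a minimal illustration, not the authors' implementation; the function names, the number of components, and the confidence margin are illustrative assumptions.

```python
# Minimal sketch of multi-prototype, likelihood-based pseudo-labeling:
# one full-covariance GMM per class on source features; target samples
# are labeled by the class giving the highest log-likelihood, and
# ambiguous samples are left unlabeled (ignore index 255).
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_class_gmms(source_feats, source_labels, num_classes, n_components=3):
    """Fit one multi-component, anisotropic (full-covariance) GMM per class."""
    gmms = {}
    for c in range(num_classes):
        feats_c = source_feats[source_labels == c]
        if len(feats_c) < n_components:
            continue  # too few source samples to estimate this class
        gmms[c] = GaussianMixture(
            n_components=n_components, covariance_type="full", random_state=0
        ).fit(feats_c)
    return gmms


def assign_pseudo_labels(target_feats, gmms, num_classes, conf_margin=1.0):
    """Pseudo-label target features by maximum log-likelihood over class GMMs.

    Samples whose best and second-best class log-likelihoods differ by less
    than `conf_margin` (an assumed threshold) are marked as ignore (255).
    """
    n = len(target_feats)
    log_liks = np.full((n, num_classes), -np.inf)
    for c, gmm in gmms.items():
        log_liks[:, c] = gmm.score_samples(target_feats)  # log p(x | class c)
    order = np.argsort(log_liks, axis=1)
    best, second = order[:, -1], order[:, -2]
    margin = log_liks[np.arange(n), best] - log_liks[np.arange(n), second]
    return np.where(margin > conf_margin, best, 255)


# Toy usage with random 16-d features for 19 Cityscapes-style classes.
rng = np.random.default_rng(0)
src_f = rng.normal(size=(5000, 16)).astype(np.float32)
src_y = rng.integers(0, 19, size=5000)
tgt_f = rng.normal(size=(1000, 16)).astype(np.float32)
gmms = fit_class_gmms(src_f, src_y, num_classes=19)
print(assign_pseudo_labels(tgt_f, gmms, num_classes=19)[:10])
```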