There has been growing interest in unsupervised domain adaptation (UDA) to alleviate the data scalability issue, but existing works usually focus on classifying independent, discrete labels. However, in many tasks (e.g., medical diagnosis), the labels are discrete yet ordered. UDA for ordinal classification requires inducing a non-trivial ordinal distribution prior on the latent space. To this end, a partially ordered set (poset) is defined to constrain the latent vector. Instead of the typical i.i.d. Gaussian latent prior, this work proposes a recursively conditional Gaussian (RCG) set for ordered constraint modeling, which admits a tractable joint distribution prior. Furthermore, the density of content vectors that violate the poset constraint can be controlled by a simple "three-sigma rule". We explicitly disentangle the cross-domain images into a shared ordinal content space, induced by the ordinal prior, and two separate source/target ordinal-unrelated spaces; self-training operates exclusively on the shared space for ordinal-aware domain alignment. Extensive experiments on UDA medical diagnosis and facial age estimation demonstrate its effectiveness.
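The recursive conditioning idea can be illustrated with a minimal sketch. Here each component of the latent chain is a Gaussian whose mean is the previous sample shifted by a positive gap of 3σ, so a sampled component falls below its predecessor (a poset violation) with probability Φ(−3) ≈ 0.13% per step, which is the "three-sigma rule". All function names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def sample_rcg(num_levels=5, sigma=1.0, rng=None):
    """Sample one ordered latent chain from a recursively conditional
    Gaussian (RCG) prior sketch.

    Each z[k] is Gaussian-conditioned on z[k-1], with the conditional
    mean shifted by 3*sigma so that order violations (z[k] < z[k-1])
    occur with probability ~0.13% per step ("three-sigma rule").

    NOTE: illustrative sketch only; the paper's RCG is defined over
    latent vectors, and the 3*sigma gap here is an assumed setting.
    """
    rng = np.random.default_rng(rng)
    z = np.empty(num_levels)
    z[0] = rng.normal(0.0, sigma)
    for k in range(1, num_levels):
        # Conditional mean = previous sample + 3*sigma gap; the chain
        # therefore stays (almost surely) monotonically increasing.
        z[k] = rng.normal(z[k - 1] + 3.0 * sigma, sigma)
    return z


# Empirically check that the per-step violation rate is tiny.
rng = np.random.default_rng(0)
samples = np.array([sample_rcg(5, 1.0, rng) for _ in range(10000)])
viol = np.mean(np.diff(samples, axis=1) < 0)
```

Because consecutive differences are distributed as N(3σ, σ²), the joint distribution over the whole chain stays Gaussian and hence tractable, while the ordering constraint is satisfied with high probability rather than enforced by rejection sampling.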