Unsupervised domain adaptation (UDA) has been widely adopted to alleviate the data scalability issue, while existing works usually focus on classifying independent, discrete labels. However, in many tasks (e.g., medical diagnosis), the labels are discrete yet successively distributed. UDA for ordinal classification requires inducing a non-trivial ordinal distribution prior on the latent space. To this end, a partially ordered set (poset) is defined to constrain the latent vectors. Instead of the typical i.i.d. Gaussian latent prior, in this work, a recursively conditional Gaussian (RCG) set is adopted to model the ordered constraint, which admits a tractable joint distribution prior. Furthermore, we are able to control the density of content vectors that violate the poset constraints by a simple "three-sigma rule". We explicitly disentangle the cross-domain images into a shared ordinal content space, induced by the ordinal prior, and two separate source/target ordinal-unrelated spaces, and self-training is performed on the shared space exclusively for ordinal-aware domain alignment. Extensive experiments on UDA medical diagnosis and facial age estimation demonstrate its effectiveness.
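To make the RCG prior and the "three-sigma rule" concrete, below is a minimal 1-D NumPy sketch under our own assumptions: each class mean is recursively conditioned on the previous one by a positive gap (so the poset order holds by construction), and the per-class standard deviation is set to one sixth of the gap so that a sample stays within three sigma of its mean and adjacent classes rarely overlap. The function names, the 1-D setting, and the specific `gap / 6` choice are illustrative, not the paper's exact formulation.

```python
import numpy as np

def rcg_prior_means(num_classes, gap=6.0, mu0=0.0):
    """Recursively conditional means: each class mean equals the
    previous mean plus a positive gap, so mu_1 < mu_2 < ... < mu_K
    (the ordinal poset constraint) holds by construction."""
    return mu0 + gap * np.arange(num_classes)

def sample_ordinal_latents(num_classes, gap=6.0, seed=None):
    """Draw one latent per ordinal class from N(mu_k, sigma^2).
    Illustrative three-sigma rule: sigma = gap / 6, so ~99.7% of
    samples fall within 3*sigma of their mean, keeping the density
    of order-violating latents negligibly small."""
    rng = np.random.default_rng(seed)
    mus = rcg_prior_means(num_classes, gap=gap)
    sigma = gap / 6.0
    z = rng.normal(loc=mus, scale=sigma)
    return z, mus, sigma
```

In this toy reading, shrinking `sigma` relative to the gap tightens the ordering (fewer violations) at the cost of a more concentrated prior; the paper's RCG construction makes the analogous trade-off tractable in the full latent space.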