For the Domain Generalization (DG) problem, where each hypothesis is composed of a common representation function followed by a labeling function, we point out a shortcoming of existing approaches: they fail to explicitly optimize a term, appearing in a well-known and widely adopted upper bound on the risk of the unseen domain, that depends on the representation to be learned. To address this, we first derive a novel upper bound on the prediction risk. We show that imposing a mild assumption on the representation to be learned, namely manifold restricted invertibility, is sufficient to deal with this issue. Further, unlike existing approaches, our novel upper bound does not require the loss function to be Lipschitz. In addition, the distributional discrepancy in the representation space is handled via the Wasserstein-2 barycenter cost. In this context, we creatively leverage old and recent transport inequalities, which link various optimal transport metrics, in particular the $L^1$ distance (also known as the total variation distance) and the Wasserstein-2 distance, with the Kullback-Leibler divergence. These analyses and insights motivate a new representation learning cost for DG that additively balances three competing objectives: 1) minimizing the classification error across the seen domains via cross-entropy, 2) enforcing domain invariance in the representation space via the Wasserstein-2 barycenter cost, and 3) promoting non-degenerate, nearly invertible representations via one of two mechanisms, viz., an autoencoder-based reconstruction loss or a mutual information loss. Notably, the proposed algorithms completely bypass the adversarial training mechanisms typical of many current domain generalization approaches. Simulation results on several standard datasets demonstrate superior performance compared to several well-known DG algorithms.
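The additive three-term objective can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the weights `lam` and `gamma`, and the diagonal-Gaussian moment-matching approximation of the Wasserstein-2 barycenter cost are all assumptions introduced here for concreteness.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class (objective 1).
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def w2_sq_diag(mu1, var1, mu2, var2):
    # Squared Wasserstein-2 distance between two diagonal Gaussians:
    # ||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2.
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

def barycenter_cost(features_per_domain):
    # Objective 2 (illustrative approximation): moment-match each seen
    # domain's representation with a diagonal Gaussian, form the uniform
    # W2 barycenter (for 1-D Gaussians: average of the means and of the
    # standard deviations, coordinate-wise), and sum the squared W2
    # distances from each domain to the barycenter.
    mus = [f.mean(axis=0) for f in features_per_domain]
    varis = [f.var(axis=0) for f in features_per_domain]
    mu_b = np.mean(mus, axis=0)
    std_b = np.mean([np.sqrt(v) for v in varis], axis=0)
    return sum(w2_sq_diag(m, v, mu_b, std_b ** 2) for m, v in zip(mus, varis))

def dg_loss(probs, labels, features_per_domain, recon, inputs,
            lam=1.0, gamma=1.0):
    # Additive balance of the three competing objectives; here the
    # near-invertibility term (objective 3) is the autoencoder-based
    # reconstruction loss variant, taken as a mean squared error.
    ce = cross_entropy(probs, labels)
    bc = barycenter_cost(features_per_domain)
    rec = np.mean((recon - inputs) ** 2)
    return ce + lam * bc + gamma * rec
```

By construction, when the seen domains induce identical representation distributions the barycenter term vanishes, and when the decoder reconstructs the inputs exactly the loss reduces to the cross-entropy alone.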