The ability to generalize out-of-domain (OOD) is an important goal for deep neural network development, and researchers have proposed many high-performing OOD generalization methods built on diverse foundations. While many OOD algorithms perform well in various scenarios, these systems are typically evaluated as ``black boxes''. Instead, we propose a flexible framework that evaluates OOD systems with finer granularity, using a probing module that predicts the originating domain from intermediate representations. We find that representations always encode some information about the domain. While the layerwise encoding patterns remain largely stable across different OOD algorithms, they vary across datasets. For example, information about rotation (on RotatedMNIST) is most visible in the lower layers, while information about style (on VLCS and PACS) is most visible in the middle layers. In addition, high probing accuracies correlate with domain generalization performance, suggesting further directions for developing OOD generalization systems.
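To make the probing idea concrete, the following is a minimal sketch of how a domain probe on intermediate representations could look. It assumes a frozen torchvision ResNet-18 standing in for a trained OOD model and synthetic domain labels; all names (the hooked layer, `num_domains`, the toy data) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: train a linear probe to predict the originating domain
# from an intermediate representation of a frozen backbone.
import torch
import torch.nn as nn
from torchvision.models import resnet18

num_domains = 4                      # e.g. rotation angles or dataset styles (assumed)
backbone = resnet18(weights=None)    # stand-in for a trained OOD model
backbone.eval()                      # the backbone stays frozen; only the probe is trained

# Capture an intermediate representation with a forward hook.
features = {}
def hook(_module, _inp, out):
    features["h"] = out.detach()
backbone.layer2.register_forward_hook(hook)   # any intermediate block can be probed

# Linear probe: pooled intermediate features -> originating domain.
probe = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(128, num_domains))  # layer2 of ResNet-18 has 128 channels
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy training step on random tensors standing in for (image, domain_label) pairs.
x = torch.randn(8, 3, 224, 224)
domain_labels = torch.randint(0, num_domains, (8,))
with torch.no_grad():
    backbone(x)                       # fills features["h"] via the hook
logits = probe(features["h"])
loss = criterion(logits, domain_labels)
loss.backward()
optimizer.step()
# Probing accuracy on held-out data then measures how much domain
# information that layer's representation encodes.
```

Repeating this per layer yields the layerwise encoding patterns discussed above; the design choice of a simple linear probe keeps the measurement about the representation rather than the probe's own capacity.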