We propose a simple and scalable approach to causal representation learning for multitask learning. Our approach requires minimal modification to existing ML systems and improves robustness to target shift. The improvement comes from mitigating unobserved confounders that cause the targets but not the input; we refer to them as target-causing confounders. These confounders induce spurious dependencies between the input and the targets. This poses a problem for the conventional approach to multitask learning, which assumes that the targets are conditionally independent given the input. Our proposed approach instead accounts for the dependencies between the targets in order to alleviate target-causing confounding. All that is required beyond standard practice is to estimate the joint distribution of the targets, switch from discriminative to generative classification, and predict all targets jointly. Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to target shift.
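To make the recipe concrete, the following is a minimal sketch (not the authors' implementation; all names are hypothetical) of how per-task discriminative probabilities and an estimated joint target distribution could be combined into a joint, generative-style prediction via Bayes' rule. It assumes binary targets and enumerates joint label configurations, which is only feasible for a small number of tasks.

```python
import itertools
import numpy as np

def joint_generative_predict(per_task_probs, joint_prior, marginal_priors):
    """Predict all K binary targets jointly (illustrative sketch).

    per_task_probs:  array (K, 2), p_train(y_k | x) from a standard
                     discriminative multitask head.
    joint_prior:     dict mapping a K-tuple of labels to an estimate of
                     p(y_1, ..., y_K) on the deployment distribution.
    marginal_priors: array (K, 2), the per-task marginals p_train(y_k)
                     seen during training.

    Bayes' rule gives p(x | y_k) ∝ p(y_k | x) / p(y_k), so a joint
    posterior can be scored as
        [prod_k p(y_k | x) / p(y_k)] * p(y_1, ..., y_K),
    i.e. the conditionally-independent likelihood reweighted by the
    estimated joint target distribution.
    """
    K = per_task_probs.shape[0]
    best_y, best_score = None, -np.inf
    # Enumerate all joint label configurations (2^K of them).
    for y in itertools.product((0, 1), repeat=K):
        lik = np.prod([per_task_probs[k, y[k]] / marginal_priors[k, y[k]]
                       for k in range(K)])          # proportional to p(x | y)
        score = lik * joint_prior.get(y, 0.0)       # reweight by joint prior
        if score > best_score:
            best_y, best_score = y, score
    return best_y
```

Under this reading, robustness to target shift comes from the fact that only `joint_prior` needs to be re-estimated when the target distribution changes, while the input-conditional components are reused.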