Machine learning has achieved tremendous success across a variety of domains in recent years. However, many of these success stories occur in settings where the training and testing distributions are nearly identical. In everyday situations, when models are tested on data even slightly different from the data they were trained on, ML algorithms can fail spectacularly. This research attempts to formally define this problem: which assumptions about our data are reasonable to make, and what kinds of guarantees we can hope to obtain from them. We then focus on a particular class of out-of-distribution problems and their assumptions, and introduce simple algorithms, derived from these assumptions, that provide more reliable generalization. A central topic of the thesis is the strong link between discovering the causal structure of the data, finding features that are reliable predictors regardless of their context, and out-of-distribution generalization.