Questions in causality, control, and reinforcement learning go beyond the classical machine learning task of prediction under i.i.d. observations. Instead, these fields consider the problem of learning how to actively perturb a system to achieve a certain effect on a response variable. Arguably, they have complementary views on the problem: in control, one usually aims to first identify the system via excitation strategies and then apply model-based design techniques to control it. In (non-model-based) reinforcement learning, one directly optimizes a reward. In causality, one focus is on the identifiability of causal structure. We believe that combining these different views might create synergies, and this competition is meant as a first step toward them. The participants had access to observational and (offline) interventional data generated by dynamical systems. Track CHEM considers an open-loop problem, in which a single impulse at the beginning of the dynamics can be set, while Track ROBO considers a closed-loop problem, in which control variables can be set at each time step. The goal in both tracks is to infer controls that drive the system to a desired state. Code to reproduce the winning solutions and to facilitate trying out new methods on the competition tasks is open-sourced at https://github.com/LearningByDoingCompetition/learningbydoing-comp.
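The open-loop (Track CHEM) versus closed-loop (Track ROBO) distinction can be illustrated on a toy linear system. This is a minimal sketch under assumed dynamics; the matrices `A`, `B`, the target state, and the feedback gains are hypothetical and are not the competition's actual systems:

```python
import numpy as np

# Hypothetical toy system (NOT one of the competition's dynamical systems):
# x_{t+1} = A x_t + B u_t, to be driven toward a desired target state.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # assumed drift dynamics
B = np.array([[0.0],
              [0.1]])             # assumed control channel
target = np.array([1.0, 0.0])    # desired state

def open_loop(x0, u0, steps=200):
    """Track-CHEM-style open loop: a single impulse u0 at t = 0,
    after which the system evolves without further inputs."""
    x = A @ x0 + B @ u0
    for _ in range(steps - 1):
        x = A @ x                # no further control applied
    return x

def closed_loop(x0, steps=200, k1=5.0, k2=10.0):
    """Track-ROBO-style closed loop: a control is chosen at every
    time step from the current state (simple proportional feedback;
    gains k1, k2 are hand-picked for this toy system)."""
    x = x0.copy()
    for _ in range(steps):
        u = np.array([k1 * (target[0] - x[0]) - k2 * x[1]])
        x = A @ x + B @ u
    return x

# Feedback steers the state to the target; a single impulse generally
# cannot do so exactly for this choice of system and input.
x_cl = closed_loop(np.zeros(2))
x_ol = open_loop(np.zeros(2), np.array([1.0]))
```

In this sketch the closed-loop dynamics are stable (the feedback gains place the spectral radius below one), so the state converges to the target, whereas the open-loop impulse merely sets the system in motion and the final state depends entirely on the free dynamics.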