Meta-learning from learning curves is an important yet often neglected research area in the Machine Learning community. We introduce a series of Reinforcement Learning-based meta-learning challenges, in which an agent searches for the best-suited algorithm for a given dataset based on learning-curve feedback from the environment. The first round attracted participants from both academia and industry. This paper analyzes the results of the first round (accepted to the competition program of WCCI 2022) to draw insights into what makes a meta-learner successful at learning from learning curves. With the lessons learned from the first round and the feedback from participants, we have designed the second round of our challenge with a new protocol and a new meta-dataset. The second round has been accepted at AutoML-Conf 2022 and is currently ongoing.