This paper reports the results and post-challenge analyses of ChaLearn's AutoDL challenge series, which helped sort out a profusion of AutoML solutions for Deep Learning (DL) that had been introduced in a variety of settings but lacked fair comparisons. All input data modalities (time series, images, videos, text, tabular) were formatted as tensors, and all tasks were multi-label classification problems. Code submissions were executed on hidden tasks with limited time and computational resources, favoring solutions that obtain results quickly. In this setting, DL methods dominated, though the popular Neural Architecture Search (NAS) was impractical. Solutions relied on fine-tuned pre-trained networks, with architectures matched to the data modality. Post-challenge tests did not reveal improvements beyond the imposed time limit. While no single component is particularly original or novel, a high-level modular organization emerged, featuring a "meta-learner", "data ingestor", "model selector", "model/learner", and "evaluator". This modularity enabled ablation studies, which revealed the importance of (off-platform) meta-learning, ensembling, and efficient data management. Experiments on heterogeneous module combinations further confirm the (local) optimality of the winning solutions. Our challenge legacy includes an ever-lasting benchmark (http://autodl.chalearn.org), the open-sourced code of the winners, and a free "AutoDL self-service".