Machine learning research has advanced in multiple aspects, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has largely focused on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks---or similarly restrictive search spaces. Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms using only basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g., CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging. Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available. We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field.
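The core idea above, evolutionary search over programs built from basic mathematical operations, can be illustrated with a minimal sketch. The program representation, instruction set, memory layout, and toy regression task below are simplifying assumptions for illustration (the actual framework searches over a much richer space of scalar, vector, and matrix operations across setup, predict, and learn functions); only the regularized-evolution loop (tournament selection, single-instruction mutation, removal of the oldest individual) follows the general scheme the abstract describes.

```python
import random

# Basic operations the search composes (an assumed, much-reduced set for
# this sketch; the real search space is far larger).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

MEM_SIZE = 4  # scalar memory slots; slot 0 holds the input x

def random_instr(rng):
    """One instruction: (op, input slot a, input slot b, output slot)."""
    return (rng.choice(list(OPS)), rng.randrange(MEM_SIZE),
            rng.randrange(MEM_SIZE), rng.randrange(1, MEM_SIZE))

def run(program, x):
    """Execute a program (list of instructions) on input x."""
    mem = [x] + [1.0] * (MEM_SIZE - 1)
    for op, a, b, out in program:
        mem[out] = OPS[op](mem[a], mem[b])
    return mem[MEM_SIZE - 1]  # last slot holds the prediction

def fitness(program, data):
    """Negative mean squared error on a toy supervised task."""
    return -sum((run(program, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=50, tournament=10, steps=2000, seed=0):
    """Regularized evolution: tournament selection + point mutation,
    always discarding the oldest individual in the population."""
    rng = random.Random(seed)
    pop = [[random_instr(rng) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(steps):
        sample = rng.sample(pop, tournament)
        parent = max(sample, key=lambda p: fitness(p, data))
        child = list(parent)
        child[rng.randrange(len(child))] = random_instr(rng)  # mutate one instruction
        pop.pop(0)       # drop the oldest (aging, not worst-removal)
        pop.append(child)
    return max(pop, key=lambda p: fitness(p, data))

# Toy task: discover a program computing y = 2x + 1 from scratch.
data = [(x, 2.0 * x + 1.0) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]
best = evolve(data)
```

Even in this tiny space, the target is reachable: for example, `add 0 0 -> 3` followed by `add 3 1 -> 3` computes `2x + 1` exactly, so the search only has to rediscover a two-instruction program, a toy analogue of rediscovering backpropagation-trained networks in the full space.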