We introduce ES-ENAS, a simple yet general evolutionary joint optimization procedure that combines continuous optimization via Evolutionary Strategies (ES) with combinatorial optimization via Efficient NAS (ENAS) in a highly scalable and intuitive way. Our main insight is that ES is already a highly distributed algorithm involving hundreds of forward passes, which can be used not only to train neural network weights but also to jointly train a NAS controller, both in a blackbox fashion. In doing so, we also bridge the gap between NAS research in supervised learning settings and the reinforcement learning scenario through this relatively simple marriage of two distinct yet common lines of research. We demonstrate the utility and effectiveness of our method over a large search space by training highly combinatorial neural network architectures for RL problems in continuous control, via edge pruning and quantization. We also incorporate a wide variety of popular techniques from the modern NAS literature, including multiobjective optimization and various controller methods, to showcase their promise in the RL field and discuss possible extensions.