The performance of algorithms for neural architecture search strongly depends on the parametrization of the search space. We use contrastive learning to identify networks across different initializations based on their data Jacobians, and automatically produce the first architecture embeddings that are independent of the parametrization of the search space. Using our contrastive embeddings, we show that traditional black-box optimization algorithms, without modification, can reach state-of-the-art performance in neural architecture search. As our method provides a unified embedding space, we perform transfer learning between search spaces for the first time. Finally, we show how the embeddings evolve during training, motivating future studies into using embeddings at different training stages to gain a deeper understanding of the networks in a search space.
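To make the mechanism in the second sentence concrete: each network is summarized by its data Jacobian (the derivatives of its outputs with respect to a fixed probe batch of inputs), and an encoder is trained so that the signatures of the same architecture under different random initializations map to nearby embeddings. The sketch below is illustrative only, not the authors' implementation: it assumes PyTorch, uses toy MLPs of varying width as stand-in "architectures", and substitutes a standard SimCLR-style NT-Xent loss (with an assumed temperature tau) for the unspecified contrastive objective.

```python
# Illustrative sketch only (assumed PyTorch setup; names are hypothetical).
import torch
import torch.nn.functional as F

def data_jacobian(net, x):
    """Flattened Jacobian of the network's outputs w.r.t. a fixed probe batch x."""
    x = x.clone().requires_grad_(True)
    y = net(x)                                        # (batch, num_outputs)
    rows = []
    for k in range(y.shape[1]):                       # one backward pass per output unit
        g, = torch.autograd.grad(y[:, k].sum(), x, retain_graph=True)
        rows.append(g.flatten(1))                     # (batch, input_dim)
    return torch.cat(rows, dim=1).flatten()           # one signature vector per network

def nt_xent(z1, z2, tau=0.1):
    """SimCLR-style loss: two initializations of the same architecture form a
    positive pair; every other pair in the batch is a negative."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)       # (2N, d), unit norm
    logits = (z @ z.t()) / tau                        # scaled cosine similarities
    eye = torch.eye(2 * n, dtype=torch.bool)
    logits = logits.masked_fill(eye, float('-inf'))   # a sample is not its own positive
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(logits, targets)

# Toy usage: widths stand in for architectures; seeds give two initializations.
def make_net(width, seed):
    torch.manual_seed(seed)
    return torch.nn.Sequential(torch.nn.Linear(8, width), torch.nn.ReLU(),
                               torch.nn.Linear(width, 4))

probe = torch.randn(16, 8)                            # fixed probe data batch
widths = [32, 64, 128]
sigs_a = torch.stack([data_jacobian(make_net(w, seed=0), probe) for w in widths])
sigs_b = torch.stack([data_jacobian(make_net(w, seed=1), probe) for w in widths])
encoder = torch.nn.Sequential(torch.nn.Linear(sigs_a.shape[1], 64),
                              torch.nn.ReLU(), torch.nn.Linear(64, 16))
loss = nt_xent(encoder(sigs_a), encoder(sigs_b))      # train the encoder, not the nets
loss.backward()
```

Minimizing this loss pushes the encoder to discard initialization noise in the Jacobian signatures and keep only architecture-dependent structure, which is what lets off-the-shelf black-box optimizers operate on the resulting embedding space.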