This paper proposes a novel cell-based neural architecture search (NAS) algorithm that completely alleviates the expensive cost of data labeling inherited from supervised learning. Our algorithm capitalizes on the effectiveness of self-supervised learning for image representations, an increasingly important topic in computer vision. First, using only a small amount of unlabeled training data under contrastive self-supervised learning allows us to search over a more extensive search space, discovering better neural architectures without increasing the computational cost. Second, we entirely eliminate the cost of labeled data in the search stage (via a contrastive loss) without compromising the architectures' final performance in the evaluation phase. Finally, we tackle the inherently discrete search space of the NAS problem with sequential model-based optimization via the tree-structured Parzen estimator (SMBO-TPE), enabling us to significantly reduce the computational expense of exploring the response surface. Extensive experiments empirically show that our search algorithm achieves state-of-the-art results, with better efficiency in data labeling cost and search time, and competitive accuracy in final validation.
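To make the label-free search signal concrete, the sketch below shows one standard instance of a contrastive self-supervised objective, the NT-Xent loss popularized by SimCLR. This is an illustrative assumption about the loss family, not the paper's exact formulation; the temperature value and PyTorch implementation are ours.

```python
# Minimal sketch of a contrastive (NT-Xent) loss computed on two augmented
# views of the same unlabeled images; no class labels are required.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine logits
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    # The positive for view i is its counterpart at index (i + N) mod 2N.
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)])
    return F.cross_entropy(sim, targets)
```

Minimizing this loss pulls the two views of each image together while pushing apart all other images in the batch, which is what lets the search stage score candidate architectures without any annotation cost.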
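The SMBO-TPE component can likewise be illustrated with a short sketch. Below we use the `hyperopt` library as one concrete TPE implementation over a discrete cell search space; the operation list, edge count, and the toy `proxy_score` objective are hypothetical stand-ins, not the paper's code.

```python
# Minimal sketch of SMBO-TPE over a discrete cell-based search space,
# using hyperopt's tree-structured Parzen estimator as the surrogate.
from hyperopt import fmin, tpe, hp, STATUS_OK

OPS = ['sep_conv_3x3', 'sep_conv_5x5', 'max_pool_3x3', 'skip_connect']

# Each cell edge selects one candidate operation -- an inherently discrete space.
space = {f'edge_{i}': hp.choice(f'edge_{i}', OPS) for i in range(4)}

def proxy_score(arch):
    # Toy surrogate so the sketch runs end to end; in real use this would
    # build the cell from `arch`, train it briefly with the contrastive
    # loss on unlabeled data, and return the self-supervised metric.
    return sum(op.startswith('sep') for op in arch.values())

def objective(arch):
    # TPE minimizes, so the score is negated.
    return {'loss': -proxy_score(arch), 'status': STATUS_OK}

best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
print(best)  # chosen index into OPS for each edge, per hp.choice
```

Because TPE models the response surface from past evaluations and proposes promising candidates, only a modest number of expensive architecture evaluations is needed, which is the source of the efficiency claim above.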