One of the most critical problems in weight-sharing neural architecture search is the evaluation of candidate models within a predefined search space. In practice, a one-shot supernet is trained to serve as the evaluator. A faithful ranking naturally leads to more accurate search results. However, current methods are prone to misjudgment. In this paper, we prove that their biased evaluation is due to inherent unfairness in supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. In particular, strict fairness ensures equal optimization opportunities for all choice blocks throughout training, so that the capacity of each block is neither overestimated nor underestimated. We demonstrate that this is crucial for improving the confidence of model rankings. Coupling the one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models; e.g., FairNAS-A attains 77.5% top-1 validation accuracy on ImageNet. The models and their evaluation code are publicly available at http://github.com/fairnas/FairNAS.
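To make the strict fairness constraint concrete, the sketch below shows one possible sampling scheme that satisfies it: at every training step, each layer independently shuffles its choice-block indices, the resulting columns define single-path models, and gradients from all paths are accumulated before a single parameter update, so every block is activated exactly once per step. This is a minimal illustration in Python; the callbacks train_one_path and apply_update are hypothetical stand-ins for the actual forward/backward pass and optimizer logic, not part of the released code.

    import random

    def strict_fair_step(num_layers, num_choices, train_one_path, apply_update):
        """One supernet training step under strict fairness (a minimal sketch).

        Each layer draws an independent permutation of its choice-block
        indices; column i across all layers defines the i-th single-path
        model. Since every block appears in exactly one path per step and
        one update is applied after all paths are trained, all blocks
        receive identical optimization opportunities.
        """
        # One independent permutation of block indices per layer.
        perms = [random.sample(range(num_choices), num_choices)
                 for _ in range(num_layers)]
        for i in range(num_choices):
            # The i-th single-path model: block perms[l][i] at layer l.
            path = [perms[l][i] for l in range(num_layers)]
            train_one_path(path)  # accumulate gradients for this path
        apply_update()            # one parameter update per step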