Neural Architecture Search (NAS) is a popular method for automatically designing optimized architectures for high-performance deep learning. In this approach, it is common to use bilevel optimization where one optimizes the model weights over the training data (lower-level problem) and various hyperparameters, such as the configuration of the architecture, over the validation data (upper-level problem). This paper explores the statistical aspects of such problems with train-validation splits. In practice, the lower-level problem is often overparameterized and can easily achieve zero loss. Thus, a priori, it seems impossible to distinguish the right hyperparameters based on training loss alone, which motivates a better understanding of the role of the train-validation split. To this aim, this work establishes the following results. (1) We show that refined properties of the validation loss, such as risk and hyper-gradients, are indicative of those of the true test loss. This reveals that the upper-level problem helps select the most generalizable model and prevent overfitting with a near-minimal validation sample size. Importantly, this is established for continuous search spaces, which are highly relevant for popular differentiable search schemes. (2) We establish generalization bounds for NAS problems with an emphasis on an activation search problem. When optimized with gradient descent, we show that the train-validation procedure returns the best (model, architecture) pair even if all architectures can perfectly fit the training data to achieve zero error. (3) Finally, we highlight rigorous connections between NAS, multiple kernel learning, and low-rank matrix learning. The latter leads to novel algorithmic insights where the solution of the upper-level problem can be accurately learned via efficient spectral methods to achieve near-minimal risk.
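For concreteness, the bilevel train-validation objective described above can be sketched as follows; the notation here is ours for illustration, with \(\alpha\) denoting the architecture hyperparameters over a (possibly continuous) search space \(\mathcal{A}\) and \(w\) denoting the model weights:

\[
\min_{\alpha \in \mathcal{A}} \; \mathcal{L}_{\mathrm{val}}\big(w^\star(\alpha), \alpha\big)
\quad \text{subject to} \quad
w^\star(\alpha) \in \operatorname*{arg\,min}_{w} \; \mathcal{L}_{\mathrm{train}}(w, \alpha).
\]

Because the lower-level problem is typically overparameterized, \(\mathcal{L}_{\mathrm{train}}(w^\star(\alpha), \alpha) \approx 0\) for essentially every \(\alpha\), so only the validation loss in the upper-level problem can discriminate between architectures.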