Neural Architecture Search (NAS) is a popular method for automatically designing optimized architectures for high-performance deep learning. In this approach, it is common to use bilevel optimization, where one optimizes the model weights over the training data (lower-level problem) and various hyperparameters, such as the configuration of the architecture, over the validation data (upper-level problem). This paper explores the statistical aspects of such problems with train-validation splits. In practice, the lower-level problem is often overparameterized and can easily achieve zero loss. Thus, a priori it seems impossible to distinguish the right hyperparameters based on training loss alone, which motivates a better understanding of the role of the train-validation split. To this aim, this work establishes the following results. (1) We show that refined properties of the validation loss, such as risk and hyper-gradients, are indicative of those of the true test loss. This reveals that the upper-level problem helps select the most generalizable model and prevent overfitting with a near-minimal validation sample size. Importantly, this is established for continuous search spaces, which are highly relevant for popular differentiable search schemes. (2) We establish generalization bounds for NAS problems with an emphasis on an activation search problem. When optimized with gradient descent, we show that the train-validation procedure returns the best (model, architecture) pair even if all architectures can perfectly fit the training data to achieve zero error. (3) Finally, we highlight rigorous connections between NAS, multiple kernel learning, and low-rank matrix learning. The latter leads to novel algorithmic insights where the solution of the upper problem can be accurately learned via efficient spectral methods to achieve near-minimal risk.
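The bilevel setup described above can be sketched with a toy example: alternating first-order updates where the model weights descend the training loss and the "architecture" parameters descend the validation loss, in the spirit of differentiable search schemes. This is only an illustrative sketch, not the paper's method; the linear model, the feature-gating role of `alpha`, and all step sizes are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a train-validation split (illustrative only).
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]) + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_val, y_val = X[:100], y[:100], X[100:], y[100:]

w = np.zeros(5)      # model weights         (lower-level variable)
alpha = np.ones(5)   # architecture "gates"  (upper-level variable)

def loss(w, alpha, X, y):
    """Mean squared error of the gated linear model x -> x @ (alpha * w)."""
    r = X @ (alpha * w) - y
    return 0.5 * np.mean(r ** 2)

lr_w, lr_a = 0.1, 0.05
for _ in range(500):
    # Lower level: gradient step on the TRAINING loss w.r.t. the weights w.
    r = X_tr @ (alpha * w) - y_tr
    w -= lr_w * alpha * (X_tr.T @ r) / len(y_tr)
    # Upper level: first-order hyper-gradient step on the VALIDATION loss
    # w.r.t. alpha, as in popular differentiable NAS approximations.
    r = X_val @ (alpha * w) - y_val
    alpha -= lr_a * w * (X_val.T @ r) / len(y_val)

print(f"train loss: {loss(w, alpha, X_tr, y_tr):.4f}, "
      f"val loss: {loss(w, alpha, X_val, y_val):.4f}")
```

The alternation makes the split's role concrete: the weights alone could drive the training loss to (near) zero for any fixed gating, so only the validation loss can distinguish good architecture parameters from bad ones.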