Neural Architecture Search (NAS) is a popular tool for automatically generating Neural Network (NN) architectures. Early NAS tools typically optimized NN architectures for a single metric, such as accuracy. However, in resource-constrained Machine Learning, a single metric is not enough to evaluate a NN architecture. For example, a NN model that achieves high accuracy is not useful if it does not fit within the flash memory of a given system. Therefore, recent work on NAS for resource-constrained systems has investigated various approaches to optimizing for multiple metrics. In this paper, we propose that, on top of these approaches, NAS optimization for resource-constrained systems could also benefit from considering input data granularity. We name such a system "Data Aware NAS", and we provide experimental evidence of its benefits by comparing it to traditional NAS.
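To make the idea concrete, the following is a minimal sketch of how a Data Aware NAS loop might jointly enumerate input data granularities and architecture hyperparameters under a memory budget. All names and values here (SAMPLE_RATES_HZ, estimate_model_size, train_and_evaluate, the flash budget) are illustrative assumptions, not the method or search space used in the paper.

```python
# Minimal sketch of a "Data Aware NAS" loop: the search space covers both
# input data granularity (here, audio sample rate) and NN hyperparameters,
# candidates that exceed the flash budget are discarded, and the remainder
# are ranked by accuracy. All names and numbers are hypothetical.

import itertools
import random

# Hypothetical search-space axes.
SAMPLE_RATES_HZ = [4000, 8000, 16000]   # input data granularity
NUM_FILTERS = [8, 16, 32]               # architecture width
NUM_LAYERS = [1, 2, 3]                  # architecture depth

FLASH_BUDGET_BYTES = 256 * 1024         # resource constraint of the target system


def estimate_model_size(filters: int, layers: int) -> int:
    """Rough stand-in for a real size estimator (e.g. parameter counting)."""
    return filters * layers * 1024


def train_and_evaluate(rate_hz: int, filters: int, layers: int) -> float:
    """Placeholder for training on data at the given granularity and
    returning validation accuracy; random here purely for illustration."""
    return random.random()


best = None
for rate_hz, filters, layers in itertools.product(
    SAMPLE_RATES_HZ, NUM_FILTERS, NUM_LAYERS
):
    if estimate_model_size(filters, layers) > FLASH_BUDGET_BYTES:
        continue  # discard candidates that cannot fit in flash
    accuracy = train_and_evaluate(rate_hz, filters, layers)
    if best is None or accuracy > best[0]:
        best = (accuracy, rate_hz, filters, layers)

print("Best candidate (accuracy, sample rate, filters, layers):", best)
```

The key difference from a traditional NAS loop is that the data granularity axis (SAMPLE_RATES_HZ above) is part of the search space itself, so the search can trade input fidelity against model size and accuracy rather than treating the input format as fixed.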