With the progress of AI democratization, neural networks are being deployed more frequently on edge devices for a wide range of applications. Fairness concerns gradually emerge in many of these applications, such as face recognition and mobile healthcare. One fundamental question arises: what is the fairest neural architecture for edge devices? By examining existing neural networks, we observe that larger networks are typically fairer. However, edge devices call for smaller neural architectures to meet hardware specifications. To address this challenge, this work proposes a novel Fairness- and Hardware-aware Neural architecture search framework, namely FaHaNa. Coupled with a model freezing approach, FaHaNa can efficiently search for neural networks with balanced fairness and accuracy, while guaranteeing that hardware specifications are met. Results show that FaHaNa identifies a series of neural networks with higher fairness and accuracy on a dermatology dataset. Targeting edge devices, FaHaNa finds a neural architecture with slightly higher accuracy, 5.28x smaller size, and a 15.14% higher fairness score compared with MobileNetV2; meanwhile, on Raspberry Pi and Odroid XU-4, it achieves 5.75x and 5.79x speedups, respectively.