Deep learning (DL) has become a driving force and has been widely adopted in many domains and applications, achieving competitive performance. In practice, to solve nontrivial and complicated real-world tasks, DL is often not used standalone; instead, it serves as one component of a larger, complex AI system. Although there is a fast-growing trend of studying the quality issues of deep neural networks (DNNs) at the model level, few studies have investigated the quality of DNNs at the unit level or their potential impacts at the system level. More importantly, a systematic investigation of how to perform risk assessment for AI systems from the unit level to the system level is still lacking. To bridge this gap, this paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles. We propose a general framework and conduct an exploratory study for analyzing AI systems with it. After large-scale experiments (700+ experimental configurations and 5,000+ GPU hours) and in-depth investigation, we reached several interesting key findings that highlight the practical need for, and opportunities of, more in-depth investigations into AI systems.