Millions of cameras are being deployed at the edge to power a variety of deep learning applications. However, the frames captured by these cameras are not always pristine: they can be distorted by lighting issues, sensor noise, compression, and other factors. Such distortions not only degrade visual quality but also hurt the accuracy of the deep learning applications that process these video streams. In this work, we introduce AQuA, which protects application accuracy against distorted frames by scoring the level of distortion in each frame. AQuA assesses the analytical quality of frames rather than their visual quality: it learns a novel metric, the classifier opinion score, using a lightweight, CNN-based, object-independent feature extractor. AQuA accurately scores the distortion level of frames and generalizes across multiple deep learning applications. When used to filter poor-quality frames at the edge, it reduces high-confidence errors in analytics applications by 17%. Through this filtering, and thanks to its low overhead (14 ms), AQuA can also reduce computation time and average bandwidth usage by 25%.
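The filtering step described above can be sketched as follows. This is a minimal illustration, not AQuA's implementation: the `quality_score` callable stands in for AQuA's learned classifier-opinion scorer, and the frame representation and threshold are hypothetical.

```python
# Sketch of score-based frame filtering at the edge.
# `quality_score` is a stand-in for AQuA's CNN-based distortion scorer
# (hypothetical here); frames scoring at or above the threshold are
# dropped before they reach the analytics application, avoiding
# high-confidence errors and saving downstream compute and bandwidth.

def filter_frames(frames, quality_score, threshold):
    """Keep only frames whose distortion score is below `threshold`."""
    return [f for f in frames if quality_score(f) < threshold]

# Toy example: frames are (id, distortion_level) pairs and the "score"
# simply reads back the recorded distortion level.
frames = [("f1", 0.1), ("f2", 0.9), ("f3", 0.3)]
kept = filter_frames(frames, quality_score=lambda f: f[1], threshold=0.5)
print([f[0] for f in kept])  # the heavily distorted frame f2 is dropped
```

In a deployment, only the kept frames would be forwarded for inference or transmitted upstream, which is how filtering translates into the compute and bandwidth savings reported above.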