Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications, including human-robot interaction and automated driving. Real-world implementations will need to guarantee their robustness against hardware soft errors that corrupt the underlying platform memory. Building on the previously observed efficacy of activation clipping techniques, we construct a prototypical safety case for classifier CNNs by demonstrating that range supervision represents a highly reliable fault detector and mitigator with respect to relevant bit flips, adopting a floating point data representation with an eight-bit exponent. We further explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors. As a safety-relevant end-to-end use case, we showcase the benefit of our approach in a vehicle classification scenario, using ResNet-50 and the traffic camera data set MIOVision. The quantitative evidence provided in this work can be leveraged to inspire further and possibly more complex CNN safety arguments.
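To make the core mechanism concrete, the sketch below shows one plausible form of activation range supervision in PyTorch: each monitored layer's output is clamped to bounds recorded during a fault-free profiling pass. The layer names, the `PROFILED_BOUNDS` table, and the hook-based wiring are illustrative assumptions, not the implementation evaluated in this work (which additionally studies non-uniform restriction variants beyond plain clamping).

```python
import torch
import torch.nn as nn
from collections import OrderedDict

# Hypothetical per-layer activation bounds, e.g. recorded during a
# fault-free profiling pass over the training data (values assumed).
PROFILED_BOUNDS = {"relu1": (0.0, 6.0)}

def make_range_hook(lo, hi):
    """Forward hook that truncates activations to [lo, hi].

    A bit flip in an exponent bit typically produces a value far outside
    the profiled range; clamping it both signals the fault (detection)
    and suppresses its propagation (mitigation). A non-uniform variant
    could, for instance, zero out-of-range values instead.
    """
    def hook(module, inputs, output):
        return torch.clamp(output, min=lo, max=hi)
    return hook

def add_range_supervision(model: nn.Module) -> nn.Module:
    """Attach clamping hooks to all layers listed in PROFILED_BOUNDS."""
    for name, module in model.named_modules():
        if name in PROFILED_BOUNDS:
            lo, hi = PROFILED_BOUNDS[name]
            module.register_forward_hook(make_range_hook(lo, hi))
    return model

# Usage sketch: a toy model with one supervised activation layer.
model = add_range_supervision(nn.Sequential(OrderedDict([
    ("conv1", nn.Conv2d(3, 8, kernel_size=3)),
    ("relu1", nn.ReLU()),
])))
out = model(torch.randn(1, 3, 32, 32))  # relu1 outputs are now bounded
```

Clamping is effective in this setting because a flip in one of the eight exponent bits of a 32-bit floating point value can inflate its magnitude by many orders of magnitude, pushing it far outside any range observed during fault-free operation.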