Deep neural networks (DNNs) have become a crucial instrument in the software development toolkit, due to their ability to efficiently solve complex problems. Nevertheless, DNNs are highly opaque and can behave unexpectedly when they encounter unfamiliar inputs. One promising approach to addressing this challenge is to extend DNN-based systems with hand-crafted override rules, which override the DNN's output when certain conditions are met. Here, we advocate crafting such override rules using the well-studied scenario-based modeling paradigm, which produces rules that are simple, extensible, and powerful enough to ensure the safety of the DNN, while also rendering the system more transparent. We report on two extensive case studies that demonstrate the feasibility of the approach; through them, we propose an extension to scenario-based modeling that facilitates its integration with DNN components. We regard this work as a step towards creating safer and more reliable DNN-based systems and models.
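The override-rule idea described above can be illustrated with a minimal sketch. All names here are hypothetical and not taken from the paper: a wrapper consults a list of hand-crafted (condition, action) rules after the DNN produces its output, and a rule that fires replaces the network's decision.

```python
# Minimal sketch of the override-rule mechanism (hypothetical names):
# hand-crafted rules inspect the DNN's input and output, and may
# replace the network's decision when their trigger condition is met.

from typing import Callable, List, Tuple

# An override rule pairs a trigger predicate with a replacement action.
Rule = Tuple[Callable[[list, int], bool], Callable[[list, int], int]]

def guarded_decision(dnn: Callable[[list], int],
                     rules: List[Rule],
                     x: list) -> int:
    """Return the DNN's output unless some override rule fires."""
    y = dnn(x)
    for condition, action in rules:
        if condition(x, y):
            return action(x, y)  # rule overrides the DNN's output
    return y

# Toy example: a stand-in "DNN" that always answers 1, overridden
# when the input's first feature is negative (a made-up safety rule).
dnn = lambda x: 1
rules = [(lambda x, y: x[0] < 0, lambda x, y: 0)]
print(guarded_decision(dnn, rules, [0.5]))   # DNN output kept: 1
print(guarded_decision(dnn, rules, [-0.5]))  # rule fires, returns 0
```

The wrapper keeps the rules separate from the network, which is what makes them easy to inspect and extend independently of the DNN itself.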