With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural Networks notoriously exhibit low robustness and can be highly sensitive to small input perturbations. Recently, many methods for verifying networks' general robustness properties have been proposed, but they are mostly applied in Computer Vision. In this paper we propose a verification specification for Natural Language Understanding classification based on larger regions of interest, and we discuss the challenges of such a task. We observe that, although the data is almost linearly separable, the verifier struggles to output positive results, and we explain the problems and implications.