Many adversarial attacks in NLP perturb inputs to produce visually similar strings ('ergo' $\rightarrow$ '$\epsilon$rgo') which remain legible to humans but degrade model performance. Although preserving legibility is a necessary condition for text perturbation, little work has been done to systematically characterize it; instead, legibility is typically loosely enforced via intuitions about the nature and extent of perturbations. In particular, it is unclear to what extent inputs can be perturbed while preserving legibility, or how to quantify the legibility of a perturbed string. In this work, we address this gap by learning models that predict the legibility of a perturbed string and rank candidate perturbations based on their legibility. To do so, we collect and release LEGIT, a human-annotated dataset comprising the legibility of visually perturbed text. Using this dataset, we build both text- and vision-based models which achieve up to $0.91$ F1 score in predicting whether an input is legible, and an accuracy of $0.86$ in predicting which of two given perturbations is more legible. Additionally, we discover that legible perturbations from the LEGIT dataset are more effective at lowering the performance of NLP models than best-known attack strategies, suggesting that current models may be vulnerable to a broad range of perturbations beyond what is captured by existing visual attacks. Data, code, and models are available at https://github.com/dvsth/learning-legibility-2023.
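The abstract describes two evaluation tasks: binary legibility prediction (scored with F1) and pairwise legibility ranking (scored with accuracy). The following is a minimal illustrative sketch, not the authors' released code, of how both tasks reduce to thresholding and comparing per-string legibility scores; `legibility_model` and the toy scorer are hypothetical stand-ins for a trained text- or vision-based model.

```python
# Illustrative sketch (assumptions: `legibility_model` is any callable that
# maps a perturbed string to a legibility score in [0, 1]).
from sklearn.metrics import f1_score

def predict_legible(legibility_model, strings, threshold=0.5):
    """Binary task: is each perturbed string legible?"""
    return [legibility_model(s) >= threshold for s in strings]

def rank_pairs(legibility_model, pairs):
    """Pairwise task: which of two perturbations is more legible? Returns 0 or 1 per pair."""
    return [0 if legibility_model(a) >= legibility_model(b) else 1 for a, b in pairs]

# Toy scorer standing in for a trained model: fraction of ASCII characters.
toy_scorer = lambda s: sum(ch.isascii() for ch in s) / max(len(s), 1)

gold = [True, False]
preds = predict_legible(toy_scorer, ["εrgo", "∑₣ǥθ"])
print(f1_score(gold, preds))                        # binary legibility F1
print(rank_pairs(toy_scorer, [("εrgo", "∑₣ǥθ")]))   # 0 = first string judged more legible
```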