Many adversarial attacks in NLP perturb inputs to produce visually similar strings ('ergo' $\rightarrow$ '$\epsilon$rgo') which remain legible to humans but degrade model performance. Although preserving legibility is a necessary condition for text perturbation, little work has been done to systematically characterize it; instead, legibility is typically loosely enforced via intuitions around the nature and extent of perturbations. In particular, it is unclear to what extent inputs can be perturbed while preserving legibility, or how to quantify the legibility of a perturbed string. In this work, we address this gap by learning models that predict the legibility of a perturbed string and rank candidate perturbations by their legibility. To do so, we collect and release \dataset, a human-annotated dataset comprising legibility judgments for visually perturbed text. Using this dataset, we build both text- and vision-based models that achieve an F1 score of up to $0.91$ in predicting whether an input is legible, and an accuracy of $0.86$ in predicting which of two given perturbations is more legible. Additionally, we find that legible perturbations from the \dataset dataset are more effective at lowering the performance of NLP models than the best-known attack strategies, suggesting that current models may be vulnerable to a broad range of perturbations beyond those captured by existing visual attacks. Data, code, and models are available at https://github.com/dvsth/learning-legibility-2023.