Machine learning security has recently become a prominent topic in natural language processing (NLP). Existing black-box adversarial attacks suffer from prohibitively high model-query complexity, which makes them easy for anti-attack monitors to detect, yet the question of how to eliminate redundant model queries remains largely unexplored. In this paper, we propose BufferSearch, a query-efficient approach that effectively attacks general intelligent NLP systems with a minimal number of query requests. In essence, BufferSearch exploits historical query information and conducts statistical tests to avoid issuing model queries frequently. Empirically, we demonstrate the effectiveness of BufferSearch on various benchmark text-classification tasks, where it achieves competitive attack performance with a significant reduction in query quantity. Furthermore, under a restricted query budget, BufferSearch outperforms its competitors by multiple times. Our work establishes a strong benchmark for future study of query efficiency in NLP adversarial attacks.
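To make the query-skipping idea concrete, below is a minimal sketch of how historical information and a statistical test might be combined to decide whether a fresh model query is needed. This is an illustration under assumptions, not the paper's exact algorithm: the class name `ScoreBuffer`, its methods, the choice of a one-sample t-test, and all parameters are hypothetical.

```python
import numpy as np
from scipy import stats


class ScoreBuffer:
    """Hypothetical sketch of the query-skipping idea from the abstract:
    buffer historical word-level attack scores and apply a statistical
    test to decide when a new model query would be redundant."""

    def __init__(self, alpha=0.05, min_samples=5):
        self.alpha = alpha              # significance level for the test
        self.min_samples = min_samples  # minimum history before skipping
        self.history = {}               # token -> list of observed scores

    def record(self, token, score):
        """Store the score observed from an actual model query."""
        self.history.setdefault(token, []).append(score)

    def can_skip_query(self, token, reference_mean=0.0):
        """Return True if the buffered scores for this token are
        statistically stable around the reference, so the historical
        estimate can stand in for a fresh model query."""
        scores = self.history.get(token, [])
        if len(scores) < self.min_samples:
            return False                # too little history; must query
        # One-sample t-test: is the buffered mean near the reference?
        _, p_value = stats.ttest_1samp(scores, reference_mean)
        return p_value > self.alpha     # fail to reject -> reuse history

    def estimate(self, token):
        """Historical estimate used in place of a model query."""
        return float(np.mean(self.history[token]))


# Usage: after a few recorded queries, the buffer answers locally.
buf = ScoreBuffer()
for s in [0.01, -0.02, 0.00, 0.01, -0.01]:
    buf.record("movie", s)
if buf.can_skip_query("movie"):
    score = buf.estimate("movie")       # no model query issued
```

The design intuition is that each avoided query both lowers attack cost and reduces the traffic footprint visible to anti-attack monitors, which is the trade-off the abstract emphasizes.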