Identifying the words that impact a task's performance more than others is a challenge in natural language processing. Transformer models have recently addressed this issue by incorporating an attention mechanism that assigns greater attention (i.e., relevance) scores to some words than to others. Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation imposed by hardware constraints. This limitation applies to many transformers, including the well-known Bidirectional Encoder Representations from Transformers (BERT) model. In this paper, we examined BERT's attention-assignment mechanism, focusing on two questions: (1) How can attention be employed to reduce input length? (2) How can attention be used as a control mechanism for conditional text generation? We investigated these questions in the context of a text classification task. We discovered that BERT's early layers assign more critical attention scores for text classification tasks than its later layers. We demonstrated that the first layer's attention sums can be used to filter tokens in a given sequence, considerably decreasing the input length while maintaining good test accuracy. We also applied filtering with a compute-efficient semantic-similarity algorithm and discovered that retaining approximately 6\% of the original sequence is sufficient to obtain 86.5\% accuracy. Finally, we showed that we could stably generate data indistinguishable from the original by using only a small percentage (10\%) of the tokens with the highest attention scores according to BERT's first layer.
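The token-filtering idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an attention matrix already extracted from a model's first layer (averaged over heads), sums the attention each token receives, and keeps only the top-scoring fraction of tokens in their original order. The function name and the toy matrix are illustrative.

```python
import numpy as np

def filter_by_attention(tokens, attn, keep_ratio=0.10):
    """Keep the fraction of tokens receiving the highest total attention,
    preserving original order.

    tokens : list of token strings (length seq_len)
    attn   : (seq_len, seq_len) attention matrix from one layer, averaged
             over heads; attn[i, j] is the attention query i pays to key j.
    """
    scores = attn.sum(axis=0)                      # total attention each token receives
    k = max(1, int(round(len(tokens) * keep_ratio)))
    keep = np.sort(np.argsort(scores)[::-1][:k])   # top-k token indices, restored to order
    return [tokens[i] for i in keep]

# Toy example with a hand-made, row-normalized attention matrix.
tokens = ["the", "movie", "was", "absolutely", "wonderful", "tonight"]
rng = np.random.default_rng(0)
attn = rng.random((6, 6))
attn /= attn.sum(axis=1, keepdims=True)            # rows sum to 1, like a softmax
print(filter_by_attention(tokens, attn, keep_ratio=0.34))
```

In practice the attention matrix would come from the model itself (e.g., the first layer's attention weights), and the surviving tokens form the shortened input sequence.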