Despite the recent success of large pretrained language models (LMs) on a variety of prompting tasks, these models can be alarmingly brittle to small changes in inputs or application contexts. To better understand such behavior and motivate the design of more robust LMs, we propose a general experimental framework, CALM (Competence-based Analysis of Language Models), in which targeted causal interventions are used to damage an LM's internal representation of various linguistic properties in order to evaluate its use of each representation in performing a given task. We implement these interventions as gradient-based adversarial attacks, which (in contrast to prior causal probing methodologies) are able to target arbitrarily encoded representations of relational properties, and carry out a case study of this approach to analyze how BERT-like LMs use representations of several relational properties in performing associated relation prompting tasks. We find that, while the representations LMs leverage in performing each task are highly entangled, they may be meaningfully interpreted in terms of the tasks where they are most utilized; and more broadly, that CALM enables an expanded scope of inquiry in LM analysis that may be useful in predicting and explaining weaknesses of existing LMs.
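To make the methodology concrete, the following is a minimal sketch (not the authors' released code) of a gradient-based adversarial intervention on a BERT hidden representation: a learnable perturbation is added to one encoder layer's output and optimized to maximize the loss of a property probe, after which the effect on the masked-token prediction can be compared. The layer index, the untrained stand-in linear probe, the property label, and the example prompt are all illustrative assumptions.

```python
# Sketch of a gradient-based causal intervention on a BERT representation.
# Assumptions (not from the paper): LAYER=8, a random linear probe standing
# in for a trained property probe, and the binary property label below.
import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

LAYER = 8  # encoder layer whose representation we perturb (assumption)
probe = torch.nn.Linear(model.config.hidden_size, 2)  # stand-in property probe

inputs = tokenizer("Paris is the capital of [MASK].", return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

# Learnable perturbation added to the chosen layer's hidden states.
delta = torch.zeros(
    1, inputs["input_ids"].shape[1], model.config.hidden_size, requires_grad=True
)

def add_delta(module, args, output):
    # Forward hook: replace the layer's output with a perturbed copy.
    return (output[0] + delta,) + output[1:]

handle = model.bert.encoder.layer[LAYER].register_forward_hook(add_delta)

label = torch.tensor([1])  # property label the attack tries to erase (assumption)
opt = torch.optim.Adam([delta], lr=1e-2)
for _ in range(50):
    out = model(**inputs, output_hidden_states=True)
    rep = out.hidden_states[LAYER + 1][:, mask_pos]  # perturbed representation
    loss = -F.cross_entropy(probe(rep), label)  # maximize probe loss = damage
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare predictions with and without the intervention to estimate the
# representation's causal role in the prompting task.
with torch.no_grad():
    damaged = model(**inputs).logits[0, mask_pos].argmax()
handle.remove()
with torch.no_grad():
    clean = model(**inputs).logits[0, mask_pos].argmax()
print(tokenizer.decode([clean.item()]), "->", tokenizer.decode([damaged.item()]))
```

In this sketch the intervention is unconstrained; in practice one would bound the perturbation norm and use a probe actually trained on the targeted relational property so that the damage is selective rather than arbitrary.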