To understand how well a large language model captures certain semantic or syntactic features, researchers typically apply probing classifiers. The accuracy of these classifiers is critical for interpreting the results correctly: if a probing classifier achieves low accuracy, this may mean either that the language model does not capture the property under investigation, or that the classifier itself is unable to extract the information encoded in the model's internal representations. Consequently, for reliable diagnosis, one should use the most accurate classifier available for a given type of task. Logistic regression on the output representations of a transformer layer is the most common choice for probing the syntactic properties of a language model. We show that using gradient boosting decision trees on the Knowledge Neuron layer, i.e., the hidden layer of the transformer's feed-forward network, as a probing classifier for recognizing parts of a sentence outperforms logistic regression on the output representations of the transformer layer, as well as a number of other methods. Depending on the experimental setting, the reduction in error rate ranges from 9% to 54%.
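The contrast between the two probing setups can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the paper's actual pipeline: the activation matrices, dimensions, and part-of-sentence labels are synthetic placeholders for real per-token activations, and scikit-learn's HistGradientBoostingClassifier stands in for whichever gradient boosting implementation was actually used.

```python
# Illustrative sketch of the two probing setups. All data below is
# synthetic; in the real setting X_layer and X_kn would hold per-token
# activations extracted from a transformer, and y the linguistic labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tokens, d_model, d_ffn, n_classes = 2000, 768, 3072, 5

# Stand-ins for per-token activations:
#   X_layer -- output representations of a transformer layer (d_model)
#   X_kn    -- hidden "Knowledge Neuron" activations of the FFN (d_ffn)
X_layer = rng.normal(size=(n_tokens, d_model))
X_kn = rng.normal(size=(n_tokens, d_ffn))
y = rng.integers(0, n_classes, size=n_tokens)  # part-of-sentence labels

for name, X, clf in [
    ("logreg / layer output", X_layer, LogisticRegression(max_iter=1000)),
    ("GBDT / FFN hidden layer", X_kn, HistGradientBoostingClassifier()),
]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"{name}: probe accuracy = {clf.score(X_te, y_te):.3f}")
```

With real activations in place of the random matrices, the reported gap between the two probes would appear in the accuracy printout; on this synthetic data both probes are at chance level by construction.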