Models that accurately detect depression from text are important tools for addressing the post-pandemic mental health crisis. The promising performance and off-the-shelf availability of BERT-based classifiers make them strong candidates for this task. However, these models are known to suffer from performance inconsistencies and poor generalization. In this paper, we introduce DECK (DEpression ChecKlist), a suite of depression-specific behavioural tests that enable better interpretability and improve the generalizability of BERT classifiers in the depression domain. We create 23 tests to evaluate BERT, RoBERTa and ALBERT depression classifiers on three datasets: two Twitter-based and one based on clinical interviews. Our evaluation shows that these models: 1) are robust to certain gender-sensitive variations in text; 2) rely on the important depressive-language marker of increased first-person pronoun use; 3) fail to detect some other depression symptoms, such as suicidal ideation. We also demonstrate that DECK tests can be used to incorporate symptom-specific information into the training data, consistently improving the generalizability of all three BERT models, with an out-of-distribution F1-score increase of up to 53.93%.