With increased awareness of mental health crises and their societal impact, online services providing emergency support are becoming commonplace in many countries. Computational models, trained on discussions between help-seekers and providers, can support suicide prevention by identifying at-risk individuals. However, the lack of domain-specific models, especially in low-resource languages, poses a significant challenge for the automatic detection of suicide risk. We propose a model that combines pre-trained language models (PLMs) with a fixed, manually crafted (and clinically approved) set of suicidal cues, followed by a two-stage fine-tuning process. Our model achieves a ROC-AUC of 0.91 and an F2-score of 0.55, significantly outperforming an array of strong baselines even early in the conversation, which is critical for real-time detection in the field. Moreover, the model performs well across genders and age groups.
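For reference, the F2-score reported above is the standard F-beta measure with beta = 2, which weights recall twice as heavily as precision; this is the natural choice here, where missing an at-risk individual (a false negative) is costlier than raising a false alarm:

\[
F_\beta = (1+\beta^2)\cdot\frac{\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\cdot\mathrm{precision}+\mathrm{recall}},
\qquad
F_2 = \frac{5\cdot\mathrm{precision}\cdot\mathrm{recall}}{4\cdot\mathrm{precision}+\mathrm{recall}}.
\]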