Recent studies report near-perfect language-based predictions of depression scores (R² = .70), but these "Mirror" models rely on language drawn directly from depression assessments to predict depression assessment scores. Such methods suffer from criterion contamination, which inflates prediction estimates. We compare "Mirror" models with "Non-Mirror" models, which use language external to the assessment to predict depression scores. 110 participants completed both a structured diagnostic interview (Mirror condition) and a life history interview (Non-Mirror condition). LLMs were prompted to predict diagnostic depression scores from each transcript. As expected, Mirror models achieved near-perfect prediction. However, Non-Mirror models also yielded effect sizes considered large in psychology. Further, both Mirror and Non-Mirror predictions correlated with questionnaire-based depression symptoms at similar magnitudes, suggesting bias in Mirror models. Topic modeling revealed different thematic structures across model types. As language models for depression continue to evolve, incorporating Non-Mirror approaches may support more valid and clinically useful language-based AI applications in psychological assessment.