Mental health assessment plays a central role in safeguarding individuals' well-being. Conventional approaches rely predominantly on clinical interviews and standardised self-report questionnaires, yet their effectiveness is often limited by subjectivity, recall bias, and accessibility barriers. In addition, concerns about bias and privacy can lead to misreporting in self-reported mental health data. The present study examined the design opportunities and challenges of developing a mental health assessment tool based on natural language interaction with large language models (LLMs). An interactive prototype for non-invasive, conversational AI-based assessment was developed and evaluated through semi-structured interviews with 11 mental health professionals (six counsellors and five psychiatrists). The analysis surfaced key design considerations for future development, showing how AI-driven adaptive questioning could enhance the reliability of self-reported data while raising critical challenges, including privacy protection, algorithmic bias, and cross-cultural applicability. The study provides an empirical foundation for mental health technology innovation by demonstrating both the potential and the limitations of natural language interaction in mental health assessment.
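The abstract mentions AI-driven adaptive questioning without describing the prototype's implementation. The sketch below is purely illustrative of what such an adaptive questioning loop might look like: `query_llm` is a hypothetical placeholder for any chat-completion backend, and the prompt wording, turn limit, and canned fallback questions are assumptions, not details from the paper.

```python
# Illustrative sketch of an LLM-driven adaptive questioning loop.
# Assumption: any chat-style LLM backend can be wired into query_llm().

from typing import Dict, List

SYSTEM_PROMPT = (
    "You are a supportive assistant conducting a non-diagnostic mental "
    "health check-in. Ask one open-ended follow-up question at a time, "
    "adapting to what the person has already shared."
)

# Fallback questions so the sketch runs without a model attached.
_FALLBACK_QUESTIONS = [
    "How have you been feeling over the past week?",
    "What has been on your mind the most lately?",
    "How has your sleep and energy been recently?",
]


def query_llm(messages: List[Dict[str, str]]) -> str:
    """Placeholder: replace with a call to an actual chat-completion API.

    Here it simply cycles through canned questions so the loop is runnable.
    """
    asked = sum(1 for m in messages if m["role"] == "assistant")
    return _FALLBACK_QUESTIONS[asked % len(_FALLBACK_QUESTIONS)]


def run_assessment(max_turns: int = 3) -> List[Dict[str, str]]:
    """Run a short adaptive interview and return the full transcript."""
    transcript: List[Dict[str, str]] = [
        {"role": "system", "content": SYSTEM_PROMPT}
    ]
    for _ in range(max_turns):
        question = query_llm(transcript)  # model chooses the next question
        print(f"Assistant: {question}")
        answer = input("You: ")
        transcript += [
            {"role": "assistant", "content": question},
            {"role": "user", "content": answer},
        ]
    return transcript  # transcript can be summarised or reviewed downstream


if __name__ == "__main__":
    run_assessment()
```

In a real deployment the transcript would be handled under strict privacy safeguards, which the paper flags as a central design challenge.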