Fairness in AI-driven stress detection is critical for equitable mental healthcare, yet existing models frequently exhibit gender bias, particularly in data-scarce scenarios. To address this, we propose FairM2S, a fairness-aware meta-learning framework for stress detection that leverages audio-visual data. FairM2S integrates Equalized Odds constraints during both the meta-training and adaptation phases, employing adversarial gradient masking and fairness-constrained meta-updates to mitigate bias effectively. Evaluated against five state-of-the-art baselines, FairM2S achieves 78.1% accuracy while reducing the Equal Opportunity disparity to 0.06, a substantial fairness gain. We also release SAVSD, a smartphone-captured stress dataset with gender annotations, designed to support fairness research in low-resource, real-world settings. Together, these contributions position FairM2S as a strong approach to equitable and scalable few-shot stress detection in mental health AI. We release both the dataset and FairM2S publicly with this paper.
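To make the method description concrete, the sketch below shows one fairness-constrained meta-step in the style the abstract describes: a MAML-like inner/outer loop with a differentiable Equalized Odds penalty and a simplified form of adversarial gradient masking. It is a minimal illustration under stated assumptions, not the released FairM2S implementation; the tiny encoder/head, the `eo_penalty` and `fair_loss` helpers, the threshold `tau`, and the elementwise masking rule (zeroing encoder-gradient coordinates to which a gender adversary's loss is most sensitive) are all hypothetical stand-ins.

```python
# Illustrative sketch only: a first-order, MAML-style meta-step with an
# Equalized Odds penalty and a simplified adversarial gradient mask.
# Not the paper's code; all module shapes and hyperparameters are toy values.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def eo_penalty(probs, y, g):
    """Soft Equalized Odds gap: |TPR_0 - TPR_1| + |FPR_0 - FPR_1|,
    computed from predicted probabilities so it stays differentiable."""
    gap = probs.new_zeros(())
    for label in (0, 1):  # label=1 -> TPR term, label=0 -> FPR term
        rates = [probs[(y == label) & (g == grp)].mean()
                 for grp in (0, 1) if ((y == label) & (g == grp)).any()]
        if len(rates) == 2:
            gap = gap + (rates[0] - rates[1]).abs()
    return gap

def fair_loss(enc, head, x, y, g, lam=1.0):
    """Task loss with the fairness constraint folded in as a penalty."""
    logits = head(enc(x)).squeeze(-1)
    bce = F.binary_cross_entropy_with_logits(logits, y.float())
    return bce + lam * eo_penalty(torch.sigmoid(logits), y, g)

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = nn.Linear(32, 1)       # stress classifier
adversary = nn.Linear(32, 1)  # gender probe on the shared features
inner_lr, outer_lr, tau = 0.01, 1e-3, 1e-3

# One task: synthetic support/query splits (features, stress label, gender).
xs, ys, gs = torch.randn(8, 16), torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
xq, yq, gq = torch.randn(8, 16), torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))

# Inner (adaptation) step: update a task-specific copy with the
# fairness-constrained loss, so the constraint holds during adaptation too.
fast_enc, fast_head = copy.deepcopy(encoder), copy.deepcopy(head)
fast_params = [*fast_enc.parameters(), *fast_head.parameters()]
grads = torch.autograd.grad(fair_loss(fast_enc, fast_head, xs, ys, gs), fast_params)
with torch.no_grad():
    for p, gr in zip(fast_params, grads):
        p -= inner_lr * gr

# Outer (meta) gradients from the fairness-constrained query loss.
task_grads = torch.autograd.grad(
    fair_loss(fast_enc, fast_head, xq, yq, gq), fast_enc.parameters())

# Adversarial gradient masking (simplified): drop the encoder-gradient
# coordinates where a gender adversary's loss gradient is large.
adv_loss = F.binary_cross_entropy_with_logits(
    adversary(fast_enc(xq)).squeeze(-1), gq.float())
adv_grads = torch.autograd.grad(adv_loss, fast_enc.parameters())
with torch.no_grad():
    for p, tg, ag in zip(encoder.parameters(), task_grads, adv_grads):
        p -= outer_lr * tg * (ag.abs() < tau).float()  # masked meta-update
```

The first-order update (applying query-set gradients from the adapted copy directly to the meta-parameters) is chosen here for brevity; a second-order variant would backpropagate through the inner step instead.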