The confluence of Artificial Intelligence and Computational Psychology creates an opportunity to model, understand, and interact with complex human psychological states through computational means. This paper presents a comprehensive, multi-faceted framework designed to bridge the gap between isolated predictive modeling and an interactive system for psychological analysis. The methodology spans a rigorous, end-to-end development lifecycle. First, foundational performance benchmarks were established on four diverse psychological datasets using classical machine learning techniques. Second, state-of-the-art transformer models were fine-tuned, a process that required solving critical engineering challenges: resolving numerical instability in regression tasks and creating a systematic workflow for large-scale training under severe resource constraints. Third, a generative large language model (LLM) was fine-tuned with parameter-efficient techniques to serve as an interactive "Personality Brain." Finally, the full suite of predictive and generative models was architected and deployed as a robust, scalable microservices ecosystem. Key findings include the successful stabilization of transformer-based regression models for affective computing, which achieved meaningful predictive performance where standard approaches failed, and a replicable methodology for democratizing large-scale AI research. The significance of this work lies in its holistic approach: it demonstrates a complete research-to-deployment pipeline that integrates predictive analysis with generative dialogue, providing a practical model for future research in computational psychology and human-AI interaction.
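The abstract names target stabilization of transformer regression heads as a key finding but does not specify the remedy. As a minimal illustrative sketch only (not the paper's exact method), two standard tactics for this class of instability are shown below on a toy problem: z-scoring the regression targets and substituting a Huber (robust) loss for plain MSE. All names and values here are hypothetical.

```python
import numpy as np

# Illustrative only: two common stabilizers for regression heads --
# (1) z-scoring the targets and (2) a Huber loss instead of plain MSE,
# whose linear tail caps the gradient contribution of outlier residuals.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                       # stand-in for pooled embeddings
y_raw = 5.0 + 2.0 * X[:, 0] + 0.1 * rng.normal(size=64)

# (1) Normalize targets to zero mean / unit variance before training.
mu, sigma = y_raw.mean(), y_raw.std()
y = (y_raw - mu) / sigma

w, b = np.zeros(X.shape[1]), 0.0
delta, lr = 1.0, 0.1
for _ in range(300):
    r = X @ w + b - y
    # (2) Huber gradient: identity inside |r| <= delta, clipped outside.
    g = np.where(np.abs(r) <= delta, r, delta * np.sign(r))
    w -= lr * (X.T @ g) / len(y)
    b -= lr * g.mean()

final_mse = float(np.mean((X @ w + b - y) ** 2))   # small and finite => stable fit
```

The same two adjustments transfer directly to a transformer setting: normalize labels before computing the loss on the pooled-encoder regression head, and prefer a robust loss when target distributions are heavy-tailed.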