Despite strong English performance, open-weights large language models remain difficult to deploy for Thai because generation becomes unstable under complex instructions. To mitigate these limitations, we present SiamGPT-32B, an open-weights model based on Qwen3-32B and fine-tuned with a Quality-First strategy that emphasizes curated supervision over data scale. The fine-tuning pipeline combines translated high-complexity English instruction data with a Thai-adapted AutoIF framework for instruction-following and linguistic constraints. Using supervised fine-tuning only, without continual pretraining or corpus expansion, SiamGPT-32B improves instruction adherence, multi-turn robustness, and linguistic stability. Evaluations on the SEA-HELM benchmark show that SiamGPT-32B achieves the strongest overall performance among similar-scale open-weights Thai models, with consistent gains in instruction following, multi-turn dialogue, and natural language understanding.