Instruction tuning has become an essential process for optimizing the performance of large language models (LLMs). However, current text-to-text instruction tuning methods, referred to as TextTuning, exhibit significant limitations in generalization, robustness, and controllability, primarily due to the absence of explicit task structures. In this paper, we introduce JsonTuning, a novel structure-to-structure approach for instruction tuning. By utilizing the versatile and structured format of JSON to represent tasks, JsonTuning enhances generalization by enabling the model to comprehend essential task elements and their interrelations, improves robustness by reducing ambiguity, and increases controllability by providing explicit control over the output. We conduct a comprehensive comparative analysis between JsonTuning and TextTuning using various language models and evaluation benchmarks. Our experimental results demonstrate that JsonTuning consistently outperforms TextTuning across a range of applications, showing marked improvements in performance, robustness, and controllability. By addressing the inherent limitations of TextTuning, JsonTuning reveals significant potential for developing more effective and reliable LLMs capable of handling diverse scenarios.
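To make the structure-to-structure idea concrete, the sketch below shows how a single classification instance might be serialized when both the task input and the expected output are expressed as JSON objects. The field names used here (instruction, input, output_schema, label) are illustrative assumptions, not the exact schema defined in the paper.

```python
import json

# Minimal illustrative sketch of a structure-to-structure training instance
# in the spirit of JsonTuning: the task input and the expected output are
# both JSON objects rather than free-form text. Field names below are
# hypothetical placeholders, not the paper's actual schema.

task_input = {
    "instruction": "Classify the sentiment of the given sentence.",
    "input": {"sentence": "The movie was a delightful surprise."},
    # An explicit output schema spells out the expected structure of the
    # answer, which is where the controllability claim comes from.
    "output_schema": {"label": ["positive", "negative", "neutral"]},
}

task_output = {"label": "positive"}

# Serialized prompt/target pair as it might be fed to a text-to-text LLM.
prompt = json.dumps(task_input, ensure_ascii=False)
target = json.dumps(task_output, ensure_ascii=False)

print(prompt)
print(target)
```

Because every task is rendered through the same JSON template, the model sees task elements (instruction, input fields, output constraints) in consistent positions, which is the intuition behind the generalization and robustness gains claimed above.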