Self-hosting large language models (LLMs) is increasingly appealing for organizations seeking privacy, cost control, and customization. Yet deploying and maintaining in-house models poses challenges in GPU utilization, workload routing, and reliability. We introduce Pick and Spin, a practical framework that makes self-hosted LLM orchestration scalable and economical. Built on Kubernetes, it integrates a unified Helm-based deployment system, adaptive scale-to-zero automation, and a hybrid routing module that balances cost, latency, and accuracy using both keyword heuristics and a lightweight DistilBERT classifier. We evaluate four models (Llama-3 90B, Gemma-3 27B, Qwen-3 235B, and DeepSeek-R1 685B) on eight public benchmark datasets, under five inference strategies and two routing variants, covering 31,019 prompts and 163,720 inference runs. Pick and Spin achieves up to 21.6% higher success rates, 30% lower latency, and 33% lower GPU cost per query than static deployments of the same models.
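To make the hybrid routing idea concrete, here is a minimal Python sketch, not the paper's implementation: a keyword heuristic resolves clear-cut prompts cheaply, and a lightweight DistilBERT classifier decides the rest. The keyword table, model names, checkpoint, and confidence threshold are all illustrative assumptions.

```python
# Illustrative hybrid router in the spirit of the abstract: keyword
# heuristics first, then a lightweight DistilBERT classifier as fallback.
# Keyword table, model names, checkpoint, and threshold are assumptions.
from transformers import pipeline

# Hypothetical keyword-to-model heuristics for clear-cut prompts.
KEYWORD_ROUTES = {
    "prove": "deepseek-r1-685b",   # assumed: heavy-reasoning keywords
    "derivative": "qwen-3-235b",
    "translate": "gemma-3-27b",    # assumed: lightweight tasks
}

# Placeholder checkpoint: a real router would use a DistilBERT model
# fine-tuned on routing labels; this public model only stands in for it.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def route(prompt: str) -> str:
    """Pick a target model: keyword heuristics first, classifier fallback."""
    lowered = prompt.lower()
    for keyword, model_name in KEYWORD_ROUTES.items():
        if keyword in lowered:
            return model_name
    # Map the classifier's confidence to a model tier (threshold assumed).
    score = classifier(lowered, truncation=True)[0]["score"]
    return "qwen-3-235b" if score > 0.8 else "llama-3-90b"

print(route("Prove that the sum of two even numbers is even."))
```

The scale-to-zero mechanism can likewise be approximated against the Kubernetes API: patching a model Deployment's replica count to zero releases its GPUs until the model is next requested. The deployment and namespace names below are hypothetical.

```python
# Illustrative scale-to-zero sketch using the official Kubernetes client:
# setting a model Deployment's replicas to zero frees its GPUs when idle.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() in-cluster
apps = client.AppsV1Api()

def set_replicas(deployment: str, replicas: int, namespace: str = "llm") -> None:
    """Patch the Deployment's scale subresource; replicas=0 parks the model."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

set_replicas("llama-3-90b", 0)  # assumed deployment name; spin down when idle
```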