Modern recommender systems face significant computational challenges as model complexity and traffic scale continue to grow, making efficient computation resource allocation critical for maximizing business revenue. Existing approaches typically simplify the multi-stage computation resource allocation problem, neglecting inter-stage dependencies and thereby limiting global optimality. In this paper, we propose MaRCA, a multi-agent reinforcement learning framework for end-to-end computation resource allocation in large-scale recommender systems. MaRCA models the stages of a recommender system as cooperative agents and uses Centralized Training with Decentralized Execution (CTDE) to optimize revenue under computation resource constraints. We further introduce an AutoBucket TestBench for accurate computation cost estimation and a Model Predictive Control (MPC)-based Revenue-Cost Balancer that proactively forecasts traffic loads and adjusts the revenue-cost trade-off accordingly. Since its end-to-end deployment in the advertising pipeline of a leading global e-commerce platform in November 2024, MaRCA has consistently handled hundreds of billions of ad requests per day and has delivered a 16.67% revenue uplift using existing computation resources.
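To make the Revenue-Cost Balancer idea concrete, the following is a minimal sketch of a receding-horizon update that forecasts near-term traffic and nudges a scalar revenue-cost trade-off weight so that projected compute cost stays within budget. This is an illustrative simplification, not the paper's implementation: the function names, the persistence-based forecast, and the single multiplicative multiplier `lam` are all hypothetical assumptions.

```python
import numpy as np

def forecast_traffic(qps_history, horizon=6):
    """Hypothetical traffic forecaster: persistence of the recent average QPS.

    Stands in for a proper load-forecasting model; returns one value per
    upcoming control interval in the horizon.
    """
    recent = np.asarray(qps_history[-12:], dtype=float)
    return np.full(horizon, recent.mean())

def balance_revenue_cost(qps_history, lam, cost_per_request, cost_budget, step=0.05):
    """One receding-horizon adjustment of the revenue-cost trade-off weight.

    lam scales how strongly compute cost is penalized when stages choose
    their actions: if the forecast cost exceeds the budget, lam is raised so
    cheaper configurations are preferred; otherwise it is relaxed toward zero.
    """
    qps_forecast = forecast_traffic(qps_history)
    projected_cost = float(qps_forecast.mean()) * cost_per_request
    if projected_cost > cost_budget:
        lam *= (1.0 + step)          # tighten: penalize compute more
    else:
        lam = max(0.0, lam * (1.0 - step))  # relax: allow more compute
    return lam

# Usage: update lam once per control interval from observed QPS history.
qps_history = [90_000, 95_000, 110_000, 120_000] * 3
lam = balance_revenue_cost(qps_history, lam=1.0,
                           cost_per_request=0.8, cost_budget=80_000)
```

A full MPC formulation would optimize the trade-off over the whole horizon rather than apply a proportional step per interval; the sketch only conveys the forecast-then-adjust control loop described in the abstract.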