The rapid development and adoption of large-scale AI models by mobile users will dominate the traffic load in future communication networks. The advent of AI technology also enables a decentralized AI ecosystem in which small organizations, or even individuals, can host AI services. In such scenarios, AI service (model) placement, selection, and request-routing decisions are tightly coupled, posing a challenging yet fundamental trade-off between service quality and service latency, especially when user mobility is considered. Existing solutions to related problems in mobile edge computing (MEC) and data-intensive networks fall short due to restrictive assumptions about network structure or user mobility. To bridge this gap, we propose a decentralized framework that jointly optimizes AI service placement, selection, and request routing. Within this framework, we use traffic tunneling to support user mobility without costly AI service migrations. To account for nonlinear queuing delays, we formulate a nonconvex problem that optimizes the trade-off between service quality and end-to-end latency. We derive node-level KKT conditions and develop a decentralized Frank--Wolfe algorithm with a novel messaging protocol. Numerical evaluations validate the proposed approach and show substantial performance improvements over existing methods.
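The decentralized algorithm referenced above builds on the classical Frank--Wolfe (conditional gradient) method. As background, the following is a minimal centralized sketch of a Frank--Wolfe iteration over the probability simplex; the toy objective, function names, and step-size rule here are illustrative assumptions, not the paper's actual formulation or implementation.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Classical Frank-Wolfe over the probability simplex (illustrative sketch).

    Each iteration calls a linear minimization oracle (LMO): over the
    simplex, the LMO simply returns the vertex e_i whose gradient
    coordinate is smallest. The iterate then moves toward that vertex
    with the standard diminishing step size gamma_k = 2 / (k + 2),
    which keeps every iterate feasible by convex combination.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        i = int(np.argmin(g))            # LMO: best simplex vertex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (k + 2.0)          # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy objective (not from the paper): f(x) = ||x - c||^2 with c inside
# the simplex, so the minimizer is x* = c.
c = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - c)
x0 = np.ones(3) / 3.0
x_star = frank_wolfe_simplex(grad, x0)
```

The paper's contribution lies in decentralizing such iterations via node-level KKT conditions and a messaging protocol, so that each network node can run its update using only locally available information; the sketch above only illustrates the underlying centralized method.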