This paper introduces AutoBS, a reinforcement learning (RL)-based framework for optimal base station (BS) deployment in 6G networks. AutoBS leverages the Proximal Policy Optimization (PPO) algorithm together with fast, site-specific pathloss predictions from PMNet to efficiently learn deployment strategies that balance coverage and capacity. Numerical results demonstrate that AutoBS achieves 95% (single BS) and 90% (multiple BSs) of the capacity obtained by exhaustive search, while reducing inference time from hours to milliseconds, making it highly suitable for real-time applications. AutoBS offers a scalable and automated solution for large-scale 6G networks, addressing the challenges of dynamic environments with minimal computational overhead.
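As a rough illustration of the kind of RL formulation the abstract describes, the sketch below sets up a single-BS placement task as a one-step Gymnasium environment trained with PPO (via stable-baselines3). This is a minimal sketch, not the paper's implementation: the grid size, link-budget constants, and the log-distance pathloss stand-in (which would be replaced by PMNet's site-specific predictions) are all assumptions made here for illustration.

```python
# Minimal sketch (not the authors' code): single-BS placement as a one-step RL task.
# The pathloss map is a synthetic distance-based stand-in for the site-specific
# predictions that PMNet would supply in the AutoBS framework.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

GRID = 16                          # candidate BS locations on a GRID x GRID lattice (assumed)
TX_DBM, NOISE_DBM = 30.0, -94.0    # illustrative link-budget constants (assumed)


def pathloss_map(bs_xy):
    """Stand-in for a PMNet prediction: log-distance pathloss (dB) from one BS."""
    xs, ys = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    d = np.hypot(xs - bs_xy[0], ys - bs_xy[1]) + 1.0
    return 40.0 + 35.0 * np.log10(d)


class SingleBSEnv(gym.Env):
    """One-step episode: pick a BS cell, receive mean spectral efficiency as reward."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(GRID * GRID)  # flattened BS grid index
        self.observation_space = spaces.Box(0.0, 1.0, (GRID * GRID,), np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(GRID * GRID, dtype=np.float32), {}

    def step(self, action):
        bs_xy = divmod(int(action), GRID)                 # grid index -> (row, col)
        snr_db = TX_DBM - pathloss_map(bs_xy) - NOISE_DBM
        # Average Shannon spectral efficiency over the map (bits/s/Hz) as the reward.
        capacity = np.log2(1.0 + 10.0 ** (snr_db / 10.0)).mean()
        obs = np.zeros(GRID * GRID, dtype=np.float32)
        return obs, float(capacity), True, False, {}


if __name__ == "__main__":
    model = PPO("MlpPolicy", SingleBSEnv(), verbose=0)
    model.learn(total_timesteps=5_000)
```

Extending this to multiple BSs would typically mean multi-step episodes (one placement per step) and a reward that aggregates coverage and capacity over all deployed BSs, which is the regime where the abstract reports roughly 90% of exhaustive-search capacity.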