Recently, we have struck a balance between the information freshness, measured in terms of the age of information (AoI), experienced by users and the energy consumed by sensors, by appropriately activating sensors to update their current status in caching-enabled Internet of Things (IoT) networks [1]. To solve this problem, we cast the corresponding status update procedure as a continuing Markov decision process (MDP) (i.e., one without terminal states), in which the number of state-action pairs grows exponentially with the number of sensors and users. To circumvent this curse of dimensionality, we established a methodology for designing deep reinforcement learning (DRL) algorithms that maximize (resp. minimize) the long-term average reward (resp. cost), by integrating R-learning, a tabular reinforcement learning (RL) algorithm tailored for maximizing the long-term average reward, with traditional DRL algorithms, which were originally developed to optimize the discounted long-term cumulative reward rather than the average one. In this technical report, we present a detailed discussion of the technical contributions of this methodology.
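To make the starting point concrete, the following is a minimal sketch of the tabular R-learning update (Schwartz, 1993) referred to above, which maintains a running estimate of the long-term average reward and uses a differential TD target in place of the discounted one. The toy environment, hyper-parameter names, and step sizes below are illustrative assumptions, not taken from the report; the report's methodology further replaces the Q-table with a deep network.

```python
import numpy as np

# Sketch of tabular R-learning for a continuing (non-terminating) MDP.
# All names here (num_states, num_actions, env_step, alpha, beta, epsilon)
# are hypothetical placeholders used only for illustration.

num_states, num_actions = 4, 2
alpha, beta = 0.1, 0.01       # step sizes for the Q-table and the average-reward estimate
epsilon = 0.1                 # exploration rate
Q = np.zeros((num_states, num_actions))
rho = 0.0                     # running estimate of the long-term average reward

rng = np.random.default_rng(0)

def env_step(state, action):
    """Toy continuing MDP (no terminal states), standing in for the
    caching-enabled IoT status-update environment considered in the report."""
    next_state = rng.integers(num_states)
    reward = float(action == state % num_actions)   # arbitrary reward structure
    return next_state, reward

state = 0
for _ in range(20_000):
    greedy_action = int(Q[state].argmax())
    action = int(rng.integers(num_actions)) if rng.random() < epsilon else greedy_action
    next_state, reward = env_step(state, action)

    # Differential TD error: reward - rho + max_a' Q(s', a') - Q(s, a).
    # A discounted DRL method would instead use
    # reward + gamma * max_a' Q(s', a') - Q(s, a); swapping in this
    # average-reward target (and a neural network in place of the table)
    # is the gist of combining R-learning with conventional DRL.
    td_error = reward - rho + Q[next_state].max() - Q[state, action]
    Q[state, action] += alpha * td_error

    # The average-reward estimate is refined only on greedy steps,
    # as in standard R-learning.
    if action == greedy_action:
        rho += beta * td_error

    state = next_state

print("estimated long-term average reward:", rho)
```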