We study the real-time remote tracking of a two-state Markov process by an energy-harvesting source. The source decides, based on its current state, whether to transmit over an unreliable channel. We formulate this scenario as a Markov decision process (MDP) to determine the optimal transmission policy that minimizes the average Version Innovation Age (VIA), our performance metric. We demonstrate that the optimal transmission policy is threshold-based, determined by the battery level, the source state, and the VIA value. We numerically verify this analytical structure and compare the proposed policy against two baseline policies across a range of system parameters, establishing the superior performance of our approach.
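As a rough illustration of the kind of computation such a formulation involves, the sketch below runs relative value iteration on a small average-cost MDP whose state is (battery level, source state, VIA) and then reads off, per battery level, the smallest VIA at which the computed policy transmits. All parameters (B_MAX, V_MAX, P_ENERGY, P_CHANGE, P_SUCCESS) and the simplified VIA update rule are illustrative assumptions and are not taken from the paper.

```python
import itertools
import numpy as np

# --- Hypothetical parameters, not values from the paper ---
B_MAX = 4        # battery capacity (energy units)
V_MAX = 10       # truncation level for the VIA value
P_ENERGY = 0.4   # prob. of harvesting one energy unit per slot
P_CHANGE = 0.3   # prob. the two-state source flips (new version)
P_SUCCESS = 0.8  # prob. a transmission is delivered

states = list(itertools.product(range(B_MAX + 1), range(2), range(V_MAX + 1)))
idx = {x: i for i, x in enumerate(states)}

def transitions(b, s, v, a):
    """Yield (prob, next_state) pairs under a simplified VIA model:
    a successful delivery resets the VIA to 0, otherwise each source
    change adds one undelivered version (capped at V_MAX)."""
    deliver = P_SUCCESS if (a == 1 and b >= 1) else 0.0
    b_after = b - 1 if (a == 1 and b >= 1) else b
    for harvest, p_h in ((1, P_ENERGY), (0, 1 - P_ENERGY)):
        nb = min(b_after + harvest, B_MAX)
        for change, p_c in ((1, P_CHANGE), (0, 1 - P_CHANGE)):
            ns = 1 - s if change else s
            for ok, p_d in ((1, deliver), (0, 1 - deliver)):
                if p_d == 0.0:
                    continue
                base = 0 if ok else v
                nv = min(base + change, V_MAX)
                yield p_h * p_c * p_d, (nb, ns, nv)

def relative_value_iteration(n_iter=2000):
    """Average-cost relative value iteration; per-slot cost is the VIA."""
    h = np.zeros(len(states))
    policy = np.zeros(len(states), dtype=int)
    for _ in range(n_iter):
        new_h = np.empty_like(h)
        for i, (b, s, v) in enumerate(states):
            q = []
            for a in (0, 1):
                if a == 1 and b == 0:
                    continue  # cannot transmit with an empty battery
                q.append((v + sum(p * h[idx[x]]
                                  for p, x in transitions(b, s, v, a)), a))
            new_h[i], policy[i] = min(q)
        new_h -= new_h[idx[(0, 0, 0)]]  # subtract reference state value
        h = new_h
    return h, policy

if __name__ == "__main__":
    _, policy = relative_value_iteration()
    # For each battery level, report the smallest VIA at which the
    # computed policy transmits -- a threshold in the VIA dimension.
    for b in range(1, B_MAX + 1):
        thr = next((v for v in range(V_MAX + 1)
                    if policy[idx[(b, 0, v)]] == 1), None)
        print(f"battery={b}: transmit once VIA >= {thr}")
```

Under these assumed dynamics, the printed thresholds typically decrease as the battery level grows, which is the kind of threshold structure the abstract refers to; the paper's actual model and proof are of course more detailed.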