Core Concepts
The paper proposes an optimal sampling policy and a low-complexity sub-optimal index-based policy that minimize the time-average expected uncertainty of information (UoI) in a remote monitoring system with random transmission delay.
Abstract
The paper studies a remote monitoring system in which a receiver observes a remote binary Markov source and decides whether to sample and fetch the source's state over a randomly delayed channel. Because of the transmission delay, the receiver's observation of the source is imperfect, leaving it uncertain about the source's current state. The authors use UoI, measured by the Shannon entropy of the receiver's belief about the source state, as the performance metric for the system.
The authors formulate the UoI-minimization problem under random delay as a partially observable Markov decision process (POMDP). By introducing a belief state, they transform this POMDP into a semi-Markov decision process (SMDP).
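The belief-state idea can be made concrete for a binary Markov source: between received samples, the receiver propagates its belief through the source's transition probabilities, and the UoI at any slot is the Shannon entropy of that belief. A minimal sketch (the transition probabilities `p01`, `p10` and function names are illustrative, not from the paper):

```python
import math

def belief_update(pi1, p01, p10):
    """One-step belief propagation for a binary Markov source.
    pi1: current belief that the source is in state 1.
    p01 = P(next=1 | current=0), p10 = P(next=0 | current=1)."""
    return (1.0 - pi1) * p01 + pi1 * (1.0 - p10)

def uoi(pi1):
    """Uncertainty of information: Shannon entropy (bits) of the belief."""
    if pi1 <= 0.0 or pi1 >= 1.0:
        return 0.0
    return -pi1 * math.log2(pi1) - (1.0 - pi1) * math.log2(1.0 - pi1)

# Without a fresh sample, the belief drifts toward the source's
# stationary distribution and the UoI grows accordingly.
pi = 1.0  # last delivered state was 1, so the belief starts certain
for _ in range(3):
    pi = belief_update(pi, p01=0.2, p10=0.3)
```

This drift is what makes the sampling decision nontrivial: each fetch resets the belief (after a random delay), so the policy must weigh the entropy accumulated while waiting against the cost of sampling.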
The authors first derive an optimal sampling policy via a two-layered bisection relative value iteration (bisec-RVI) algorithm. They then propose a low-complexity sub-optimal index policy that exploits special properties of the belief state. Numerical simulations show that both proposed sampling policies outperform two benchmarks, and that the sub-optimal policy's performance approaches the optimal policy's, particularly under large delay.
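The summary does not spell out the index policy's form; a common shape for such rules is a threshold on the current UoI, sampling whenever the belief entropy exceeds an index value. The following is a hedged toy simulation of that idea under a geometric random delay; all parameters, the delay model, and the belief reset at delivery are simplifying assumptions for illustration, not the paper's construction:

```python
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) belief."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def step_belief(p1, a, b):
    """Propagate belief one slot: a = P(0->1), b = P(1->0)."""
    return (1.0 - p1) * a + p1 * (1.0 - b)

def simulate(threshold, a=0.2, b=0.3, delay_p=0.5, horizon=10000, seed=0):
    """Average UoI under a toy threshold rule: sample whenever the
    belief entropy exceeds `threshold`. Each fetch takes a geometric
    number of slots with success probability delay_p (illustrative)."""
    rng = random.Random(seed)
    state, belief = 0, 0.0       # receiver starts synchronized
    total_uoi, pending = 0.0, 0  # pending = remaining delay slots of a fetch
    for _ in range(horizon):
        # The source evolves every slot regardless of sampling.
        if state == 0:
            state = 1 if rng.random() < a else 0
        else:
            state = 0 if rng.random() < b else 1
        belief = step_belief(belief, a, b)
        if pending > 0:
            pending -= 1
            if pending == 0:
                # Simplification: delivery resets the belief to the
                # current state (ignores staleness accrued in transit).
                belief = float(state)
        elif entropy(belief) > threshold:
            pending = 1                      # draw a geometric delay
            while rng.random() > delay_p:
                pending += 1
        total_uoi += entropy(belief)
    return total_uoi / horizon
```

Sweeping `threshold` in such a sketch shows the basic trade-off the paper's policies navigate: a low threshold samples aggressively and keeps UoI small, while a high threshold tolerates more uncertainty between fetches.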
Stats
Beyond the simulation comparisons against the two benchmarks, the paper reports no explicit numerical data or statistics supporting the key arguments. The analysis is primarily theoretical, focusing on the formulation and solution of the optimization problem.
Quotes
The paper does not contain any striking quotes that support the key arguments.