The paper addresses the problem of optimizing task updating and offloading policies in mobile edge computing (MEC) systems to minimize the age of information (AoI). The key contributions are:
Formulation of the joint task updating and offloading problem as a semi-Markov game, capturing the asynchronous decision-making and variable transition times in real-time MEC systems.
Design of a fractional reinforcement learning (RL) framework for the single-agent case, which integrates RL with Dinkelbach's method to handle the fractional AoI objective. This framework is proven to have a linear convergence rate.
Extension of the fractional RL framework to the multi-agent setting, proposing a fractional multi-agent RL algorithm that guarantees convergence to the Nash equilibrium.
Development of an asynchronous fractional multi-agent deep RL algorithm that addresses the challenges of asynchronous decision-making and hybrid action spaces in semi-Markov games.
Experimental evaluation demonstrating that the proposed asynchronous fractional multi-agent DRL algorithm outperforms established benchmarks, reducing the average AoI by up to 52.6%.
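To make the fractional-programming idea behind the second contribution concrete, here is a minimal sketch of Dinkelbach's method, which the paper integrates with RL to handle the fractional AoI objective. This is an illustration only: the inner minimization, which the paper solves via reinforcement learning, is replaced here by exhaustive search over a small candidate set, and the toy objective is an assumption, not taken from the paper.

```python
# Dinkelbach's method for a fractional objective min_x N(x)/D(x), D(x) > 0.
# Each iteration solves a parameterized subproblem min_x N(x) - lam * D(x),
# then updates the ratio estimate lam; the scheme converges linearly.
def dinkelbach(candidates, N, D, tol=1e-9, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        # Inner problem (the paper uses RL here; we use exhaustive search).
        x = min(candidates, key=lambda c: N(c) - lam * D(c))
        F = N(x) - lam * D(x)
        if abs(F) < tol:          # F = 0 certifies optimality of the ratio
            return x, lam
        lam = N(x) / D(x)         # Dinkelbach update of the ratio estimate
    return x, lam

# Toy fractional problem (hypothetical): minimize (x^2 + 1) / x on a grid.
cands = [0.5 + 0.1 * i for i in range(26)]
x_star, ratio = dinkelbach(cands, lambda x: x * x + 1.0, lambda x: x)
```

For this toy problem the minimizer is x = 1 with ratio 2, reached in a handful of iterations; the point of the sketch is only the outer loop structure, which carries over when the inner step is an RL policy update.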
The paper provides a comprehensive solution to the age-minimal task scheduling problem in MEC, tackling the key challenges of fractional objectives, multi-agent interactions, and asynchronous decision-making.
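As background on why the objective is fractional: the long-run average AoI is a ratio of accumulated age to elapsed time. A hedged sketch of that computation for a sawtooth age process (assuming age starts at zero and resets to the update's delivery delay; the function name and example data are illustrative, not from the paper):

```python
# Time-average AoI over a horizon ending at the last delivery.
# Age grows linearly between deliveries and resets to (delivery - generation).
def average_aoi(deliveries):
    """deliveries: list of (generation_time, delivery_time), sorted by delivery."""
    area, t_prev, age_prev = 0.0, 0.0, 0.0   # assume age 0 at t = 0
    for gen, dly in deliveries:
        dt = dly - t_prev
        # trapezoid under the linearly growing age curve on [t_prev, dly]
        area += dt * (age_prev + dt / 2.0)
        age_prev = dly - gen                  # age resets to the delivery delay
        t_prev = dly
    return area / t_prev                      # accumulated age / elapsed time

# Two updates: generated at t = 0 and t = 4, delivered at t = 1 and t = 5.
avg = average_aoi([(0.0, 1.0), (4.0, 5.0)])
```

The returned value here is 2.5. Because the numerator (accumulated age) and denominator (elapsed time) both depend on the scheduling policy, the objective is a ratio of two policy-dependent quantities, which is exactly the structure Dinkelbach-style methods are designed for.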