Core Concepts
The paper proposes a Multi-Agent Loss-Sharing (MALS) reinforcement learning model that optimizes downlink and uplink transmissions for play-to-earn games, and demonstrates improvements over baseline models.
Abstract
This work addresses the optimization of augmented-reality play-to-earn games using mobile edge computing. It introduces MALS, a multi-agent loss-sharing approach that improves resource allocation and transmission efficiency to maximize player earnings.
Play-to-earn games let players earn real-world profits through in-game tokens. Augmented reality (AR) play-to-earn games are compute-intensive, so graphics rendering must be offloaded to edge servers. The proposed optimization problem aims to reduce latency and maximize earning potential while minimizing battery consumption.
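The trade-off described above can be sketched as a scalarized reward that values earnings while penalizing latency and battery drain. The linear form, the weight values, and the function name below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the joint objective: reward earnings, penalize
# latency and battery consumption. Weights are assumed for illustration.
def joint_reward(earnings, latency_ms, battery_mw,
                 w_earn=1.0, w_lat=0.01, w_batt=0.005):
    """Scalarized utility: higher earnings good, latency and battery bad."""
    return w_earn * earnings - w_lat * latency_ms - w_batt * battery_mw

# Offloading rendering to an edge server may cut latency at the cost of
# extra uplink transmission energy; the agent must weigh both effects.
local = joint_reward(earnings=5.0, latency_ms=120.0, battery_mw=200.0)
edge = joint_reward(earnings=5.0, latency_ms=40.0, battery_mw=260.0)
```

Under these assumed weights, offloading wins because the latency savings outweigh the added transmission energy; a learned policy would make this choice state by state rather than from fixed weights.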
The study compares the MALS model against baseline models such as IDA and CTDE, and shows that the MALS algorithm handles asymmetric and asynchronous agent decisions effectively. By building on Proximal Policy Optimization (PPO), it improves sample efficiency and policy stability in multi-agent reinforcement learning scenarios.
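The PPO foundation mentioned above uses the standard clipped surrogate objective, and a loss-sharing scheme can be sketched as averaging per-agent policy losses into one joint objective before the gradient step. The pure-Python implementation below is a minimal sketch; the sharing rule (a plain mean) is an assumption, not the paper's exact MALS formulation.

```python
def ppo_clip_loss(ratios, advantages, eps=0.2):
    """Standard PPO clipped surrogate loss over a batch of samples.

    ratios: new_policy_prob / old_policy_prob per sample
    advantages: estimated advantage per sample
    """
    losses = []
    for r, a in zip(ratios, advantages):
        clipped_r = min(max(r, 1.0 - eps), 1.0 + eps)  # clip ratio to [1-eps, 1+eps]
        # Take the pessimistic (minimum) surrogate, then negate for minimization.
        losses.append(-min(r * a, clipped_r * a))
    return sum(losses) / len(losses)

def shared_loss(per_agent_losses):
    """Assumed loss-sharing rule: average the agents' losses so one
    gradient step reflects all agents' experience."""
    return sum(per_agent_losses) / len(per_agent_losses)

# Example: an uplink agent and a downlink agent each contribute a loss.
uplink_loss = ppo_clip_loss(ratios=[1.5, 0.9], advantages=[1.0, -0.5])
downlink_loss = ppo_clip_loss(ratios=[0.5], advantages=[-1.0])
total = shared_loss([uplink_loss, downlink_loss])
```

Clipping keeps each agent's policy update close to its old policy, while the shared loss couples the agents' updates, which is one way to cope with the asymmetric uplink/downlink roles.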
Through detailed experiments, the paper demonstrates the effectiveness of the MALS model in handling complex resource management tasks. It provides insights into optimizing joint objectives for improved gameplay experience and profitability.
Stats
"A 7-page short version containing partial results is accepted for the 2023 EAI GameNets"
"28 Feb 2024"