Core Concepts
Developing a novel algorithm, Multi-player Receding-horizon Natural Policy Gradient (MRPG), to achieve a Nash equilibrium in cooperative-competitive multi-agent settings.
Summary
1. Abstract:
Addressing reinforcement learning among agents grouped into teams, with cooperation within teams and competition across teams.
Developing an RL method that attains a Nash equilibrium by exploiting a linear-quadratic (LQ) structure.
Introducing the mean-field setting to handle the non-stationarity induced by multi-agent interactions.
2. Introduction:
Noting the growing popularity of multi-agent RL (MARL) for sequential decision-making.
Studying mixed cooperative-competitive team settings.
Imposing structural assumptions of linear dynamics and quadratic costs.
3. Setup & Equilibrium Characterization:
Analyzing a general-sum game among multiple teams.
Applying a mean-field approximation within each team.
Formulating the problem as an LQ mean-field-type game (MFTG).
4. Multi-player Receding-horizon NPG (MRPG):
Discussing the challenges of computing the NE with a data-driven approach.
Establishing linear convergence of the MRPG algorithm to the NE.
5. Numerical Analysis:
Simulation results for horizon T=2, N=2 teams, and M=1000 agents per team.
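The receding-horizon idea in item 4 can be illustrated on a toy single-agent scalar LQ problem: the feedback gain at each time step is found by iterative gradient steps, moving backwards in time, and recovers the classical Riccati backward recursion. All constants (A, B, Q, R, T) here are hypothetical, and plain gradient descent stands in for the paper's natural policy gradient; this is a sketch of the receding-horizon principle, not the MRPG algorithm itself.

```python
# Toy scalar LQ problem; all constants below are illustrative assumptions.
A, B, Q, R, T = 0.9, 0.5, 1.0, 0.1, 5

def receding_horizon_pg(lr=0.05, iters=2000):
    """Optimize the gain at each step by gradient descent, backwards in time."""
    P = Q                      # terminal cost-to-go coefficient
    gains = []
    for _ in range(T):         # walk backwards from t = T-1 down to 0
        K = 0.0
        for _ in range(iters):
            # per-step objective R*K^2 + P*(A - B*K)^2 is quadratic in K
            grad = 2 * R * K - 2 * B * P * (A - B * K)
            K -= lr * grad
        gains.append(K)
        P = Q + R * K**2 + P * (A - B * K)**2   # propagate cost-to-go
    return gains[::-1]

def riccati_gains():
    """Exact backward Riccati recursion, for comparison."""
    P = Q
    gains = []
    for _ in range(T):
        K = B * P * A / (R + B**2 * P)
        gains.append(K)
        P = Q + R * K**2 + P * (A - B * K)**2
    return gains[::-1]
```

Because each per-step objective is strongly convex in the gain, the inner gradient loop converges linearly, mirroring the linear convergence claimed for MRPG in the LQ setting.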
Stats
"NE is then shown to be O(1/M)-NE for the finite population game where M is a lower bound on the number of agents in each team."
"Experiments illuminate the merits of this approach in practice."
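The O(1/M) rate quoted above can be sanity-checked with a small simulation: when agents in a team interact only through their empirical mean state, the mean-squared gap between the finite-population average and the deterministic mean-field trajectory shrinks like 1/M. The scalar dynamics below (A, Abar, T, sigma) are illustrative choices, not the paper's model.

```python
import numpy as np

# Hypothetical scalar system: each agent is coupled to the others only
# through the empirical mean state xbar.
A, Abar, T, sigma = 0.8, 0.1, 2, 1.0

def mean_sq_gap(M, trials=400, seed=0):
    """Mean-squared gap between the M-agent average state and the
    deterministic mean-field trajectory after T steps."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(trials):
        x = rng.normal(0.0, sigma, size=M)   # agent states; mean-field mean is 0
        z = 0.0                              # deterministic mean-field trajectory
        for _ in range(T):
            xbar = x.mean()
            x = A * x + Abar * xbar + rng.normal(0.0, sigma, size=M)
            z = (A + Abar) * z
        gaps.append((x.mean() - z) ** 2)
    return float(np.mean(gaps))
```

Increasing M by a factor of 100 should shrink `mean_sq_gap(M)` by roughly the same factor, consistent with the O(1/M) approximation error of the mean-field NE.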
Quotes
Is it possible to construct a data-driven method to achieve the Nash Equilibrium in CC Games?