This article proposes a novel distributed event-triggered control scheme that achieves locally stable convergence to the Nash equilibrium in duopoly noncooperative games.
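As an illustrative sketch only (not the article's scheme), the idea of event-triggered Nash seeking can be shown on a standard Cournot duopoly: each player follows a gradient ascent on its own profit but rebroadcasts its quantity to the opponent only when it has drifted beyond a threshold. All parameter values (`a`, `b`, `c`, `eps`, `dt`) are assumptions chosen for the toy example.

```python
import numpy as np

# Toy Cournot duopoly: profit_i = q_i * (a - b*(q1 + q2)) - c*q_i.
# Gradient play with event-triggered communication: a player rebroadcasts
# its quantity only when it deviates from the last broadcast by more than eps.
a, b, c, eps, dt = 10.0, 1.0, 1.0, 0.05, 0.01
q = np.array([0.5, 6.0])   # initial quantities (arbitrary)
q_hat = q.copy()           # last broadcast values seen by the opponent

for _ in range(5000):
    # each player ascends its own profit using the opponent's broadcast value
    grad = np.array([a - c - 2 * b * q[0] - b * q_hat[1],
                     a - c - 2 * b * q[1] - b * q_hat[0]])
    q = q + dt * grad
    # event trigger: communicate only on sufficient deviation
    for i in range(2):
        if abs(q[i] - q_hat[i]) > eps:
            q_hat[i] = q[i]

q_star = (a - c) / (3 * b)  # closed-form Nash equilibrium: q1 = q2 = 3
```

With these values the iterates settle within the trigger threshold of the Nash quantity, while communication events become sparse near equilibrium.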
The authors propose a variant of the Frank-Wolfe algorithm with sufficient exploration and recursive gradient estimation, which provably converges to a Nash equilibrium while attaining sublinear regret for each individual player in potential games and Markov potential games.
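For context, the classical Frank-Wolfe building block (not the authors' variant, which adds exploration and recursive gradient estimation) replaces a projection with a linear minimization over the feasible set. A minimal sketch for a single player's mixed strategy on the probability simplex, with an assumed toy objective:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Classical Frank-Wolfe on the probability simplex: each iteration
    minimizes the linearized objective over the simplex (a vertex) and
    moves toward that vertex with a diminishing step size."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # vertex minimizing the linearization
        gamma = 2.0 / (t + 2.0)      # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# toy example: minimize ||x - p||^2 over the simplex; since p lies in the
# simplex, the minimizer is p itself
p = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2 * (x - p), np.array([1.0, 0.0, 0.0]))
```

Because each iterate is a convex combination of simplex points, feasibility is maintained without any projection step, which is the method's appeal in game settings.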
This paper introduces a convexification technique that enables the characterization of equilibria in generalized Nash equilibrium problems (GNEPs) with non-convex strategy spaces and non-convex cost functions, including the important case of games with mixed-integer variables.
This paper proposes DG-SQP, a novel numerical method for computing local generalized Nash equilibria (GNE) of open-loop, general-sum dynamic games with nonlinear dynamics and constraints. The method is based on sequential quadratic programming (SQP) and requires only the solution of a single convex quadratic program at each iteration.
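To illustrate the generic SQP building block (not the paper's DG-SQP method), one iteration for an equality-constrained problem solves the KKT system of a local quadratic model. The toy problem and all function names below are assumptions for the sketch:

```python
import numpy as np

def sqp_step(x, grad_f, hess_f, c, jac_c):
    """One Newton-type SQP step: solve the KKT system of the local QP
        min_d  0.5 d^T H d + g^T d   s.t.   A d + c(x) = 0,
    then update the primal iterate."""
    H, g = hess_f(x), grad_f(x)
    A, cv = jac_c(x), c(x)
    n, m = len(g), len(cv)
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -cv])
    sol = np.linalg.solve(K, rhs)  # primal step d and multipliers
    return x + sol[:n]

# toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  (solution: (0.5, 0.5))
f_grad = lambda x: 2 * x
f_hess = lambda x: 2 * np.eye(2)
c = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac = lambda x: np.array([[1.0, 1.0]])

x = np.array([2.0, -1.0])
for _ in range(3):
    x = sqp_step(x, f_grad, f_hess, c, c_jac)
```

Since the toy problem is itself a QP, the first step already lands on the exact solution; for nonlinear games the analogous subproblem is re-solved at each iterate.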
The introduction of relative entropy regularization in general-sum N-agent linear-quadratic games confines the Nash equilibrium policies to the class of linear Gaussian policies. Furthermore, the uniqueness of the Nash equilibrium is guaranteed if the entropy regularization parameter is sufficiently large.
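As a generic sketch of the mechanism (symbols $Q_i$, $R_i$, $\tau$, and $\bar\pi_i$ are illustrative, not the paper's exact formulation), relative entropy regularization augments each agent's quadratic cost with a KL penalty toward a reference policy:

```latex
J_i(\pi) \;=\; \mathbb{E}\Big[\sum_{t} x_t^\top Q_i x_t + u_{i,t}^\top R_i u_{i,t}\Big]
\;+\; \tau \, \mathrm{KL}\big(\pi_i \,\|\, \bar\pi_i\big)
```

When the dynamics are linear and the reference policy $\bar\pi_i$ is Gaussian, each agent's best response to linear Gaussian opponents is again linear Gaussian, which is why the equilibrium search can be confined to that class; a sufficiently large temperature $\tau$ strengthens the regularization enough to make the best-response map contractive, giving the uniqueness guarantee.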