Center for Algorithms and Theory of Computation

CS 269S, Fall 2021: Theory Seminar


October 22, 2021, 1:00 – 1:50pm: DBH 1427

Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games

Will Overman

Abstract:

Potential games are arguably one of the most important and widely studied classes of normal-form games. They define the archetypal setting of multi-agent coordination, since all agents' utilities are perfectly aligned with one another via a common potential function. Can this intuitive framework be transplanted to the setting of Markov Games? In this talk we will present a novel definition of Markov Potential Games (MPG) that generalizes prior attempts at capturing complex, stateful multi-agent coordination. In our main technical result, we prove fast convergence of independent policy gradient to Nash policies by adapting gradient-dominance arguments recently developed for single-agent MDPs to the multi-agent learning setting.
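
To illustrate the kind of dynamics the talk studies, here is a minimal sketch (not the speaker's algorithm or the paper's MPG setting) of independent softmax policy gradient in the simplest potential game: a stateless, two-player identical-interest matrix game, where the shared payoff matrix itself serves as the potential. The payoff matrix Phi and step size eta below are hypothetical choices made purely for illustration.

    import numpy as np

    # Common payoff: both players receive Phi[a1, a2]. This coordination
    # game's potential is maximized at the action pair (1, 1).
    Phi = np.array([[1.0, 0.0],
                    [0.0, 2.0]])

    def softmax(theta):
        z = np.exp(theta - theta.max())
        return z / z.sum()

    # Independent learning: each agent holds its own logits and ascends the
    # gradient of its OWN expected payoff, treating the other agent's
    # current policy as fixed (no communication, no joint optimization).
    theta1 = np.zeros(2)
    theta2 = np.zeros(2)
    eta = 0.5  # step size (hypothetical choice)
    for t in range(500):
        pi1, pi2 = softmax(theta1), softmax(theta2)
        q1 = Phi @ pi2    # agent 1's expected payoff for each of its actions
        q2 = Phi.T @ pi1  # agent 2's expected payoff for each of its actions
        # Exact policy gradient through the softmax parameterization:
        # dE[Phi]/dtheta_i = pi_i * (q_i - <pi_i, q_i>)
        theta1 += eta * pi1 * (q1 - pi1 @ q1)
        theta2 += eta * pi2 * (q2 - pi2 @ q2)

    print("agent 1 policy:", softmax(theta1).round(3))
    print("agent 2 policy:", softmax(theta2).round(3))
    # Both policies concentrate on action 1, a Nash equilibrium that also
    # maximizes the common potential.

In this toy example the gradient-dominance intuition is visible directly: because both agents climb the same potential, their uncoordinated gradient steps cannot work against each other, and the independent updates settle at a Nash policy profile. The talk's result extends this phenomenon, with quantitative convergence rates, to the stateful Markov setting.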