Motivated by applications in optimization, game theory, and the training of Generative Adversarial Networks (GANs), the convergence properties of first-order methods for both unconstrained and constrained min-max problems have received extensive study. It is well recognized that these methods may cycle, but their limit points when they do not cycle are poorly understood. When they converge, do they converge to local min-max solutions? In this talk we will characterize the limit points of two basic first-order methods, Gradient Descent/Ascent (GDA) and Optimistic Mirror Descent (OMD), for unconstrained min-max optimization. Finally, we will discuss recent progress on the convergence properties of OMD for two-player zero-sum games (an instance of constrained min-max optimization).
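As a minimal illustration of the phenomena the talk is about (not part of the talk itself; the step size and iteration counts are arbitrary choices), consider the bilinear problem min_x max_y f(x, y) = x*y, whose unique min-max equilibrium is (0, 0). Simultaneous GDA spirals away from the equilibrium, while the optimistic variant (OMD with the Euclidean regularizer, i.e. optimistic GDA) converges to it:

```python
eta = 0.1                      # step size (illustrative choice)
r0 = 2.0 ** 0.5                # distance of the start (1, 1) from (0, 0)

# Simultaneous GDA on f(x, y) = x*y: descend in x, ascend in y.
x, y = 1.0, 1.0
for _ in range(100):
    x, y = x - eta * y, y + eta * x   # grad_x f = y, grad_y f = x
r_gda = (x * x + y * y) ** 0.5        # grows: each step scales the norm
                                      # by sqrt(1 + eta**2)

# Optimistic GDA: replace the gradient by the extrapolation 2*g_t - g_{t-1}.
x, y = 1.0, 1.0
px, py = x, y                         # previous iterate
for _ in range(200):
    nx = x - eta * (2 * y - py)
    ny = y + eta * (2 * x - px)
    px, py, x, y = x, y, nx, ny
r_ogda = (x * x + y * y) ** 0.5       # shrinks toward the equilibrium

print(r_gda > r0, r_ogda < r0)        # → True True
```

The only difference between the two loops is the extrapolated gradient term, which is what damps the rotation around the equilibrium in the bilinear case.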
Joint work with Costis Daskalakis.