We will discuss the convergence of viscosity solutions of Hamilton-Jacobi-Bellman (HJB) equations, corresponding either to deterministic optimal control problems for systems of $n$ particles or to stochastic optimal control problems for systems of $n$ particles with a common noise, to the viscosity solution of a limiting HJB equation in the space of probability measures. The limiting HJB equation is interpreted in its ``lifted'' form in a Hilbert space, in which it has a unique viscosity solution. The main difficulty lies in proving uniform continuity estimates for the viscosity solutions of the approximating problems, which may be of independent interest. When the Hamiltonian is convex in the gradient variable and the equations are of first order, it can be shown that the viscosity solutions of the finite-dimensional problems converge to the value function of a variational problem in $\mathcal{P}_2(\mathbb{R}^d)$, thus providing a representation formula for the solution of the limiting first-order HJB equation. This is joint work with W. Gangbo and S. Mayorga.
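As a rough illustration (the notation below is schematic and not taken from the abstract), the lifting procedure identifies a function $U$ on $\mathcal{P}_2(\mathbb{R}^d)$ with a function on a Hilbert space of square-integrable random variables,
\[
\tilde U(X) := U(\mathcal{L}(X)), \qquad X \in L^2(\Omega;\mathbb{R}^d),
\]
so that a limiting HJB equation of the schematic form
\[
\partial_t U + \mathcal{H}\big(\mu, D_\mu U\big) = 0 \quad \text{in } (0,T)\times\mathcal{P}_2(\mathbb{R}^d)
\]
is interpreted through its lifted version
\[
\partial_t \tilde U + \mathcal{H}\big(\mathcal{L}(X), D\tilde U(X)\big) = 0 \quad \text{in } (0,T)\times L^2(\Omega;\mathbb{R}^d),
\]
where $D\tilde U$ denotes the Fréchet derivative in $L^2(\Omega;\mathbb{R}^d)$, and viscosity solutions are defined in this Hilbert space setting.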