We discuss an algorithm for solving state-dependent Hamilton-Jacobi partial differential equations arising from optimal control and differential game problems, in both the traditional and mean-field settings.
We move between indirect methods (Pontryagin's maximum principle) and direct methods (optimization over the space of curves),
and formulate a Hopf-type maximization principle for solving the HJ PDE. We show the validity of the formula under restrictive assumptions and conjecture its validity in more general settings, e.g. with non-convex Hamiltonians.
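For context, the classical Hopf formula, which the maximization principle above generalizes, applies to the state-independent case; a standard statement (with $g$ the convex initial datum and $g^*$ its Legendre-Fenchel transform) reads:

```latex
% Classical Hopf formula for the HJ PDE
%   \partial_t \varphi(x,t) + H(\nabla_x \varphi(x,t)) = 0,
%   \varphi(x,0) = g(x),
% valid when the Hamiltonian H is state-independent and g is convex:
\varphi(x,t) \;=\; \sup_{p \in \mathbb{R}^n}
  \Bigl\{ \langle x, p \rangle \;-\; g^{*}(p) \;-\; t\,H(p) \Bigr\},
\qquad
g^{*}(p) \;=\; \sup_{x \in \mathbb{R}^n} \bigl\{ \langle x, p \rangle - g(x) \bigr\}.
```

The state-dependent case treated here requires a more involved variational formula; the display above is only the classical baseline.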
Our method preserves an optimization structure, which provides convergence guarantees, and it minimizes the number of variables in the optimization by exploiting partial knowledge of the optimality conditions. We propose a convergence certificate for checking the local optimality of our solutions. From a PDE point of view, the optimization problems are independent of one another and can be solved in parallel with good scaling.
Numerical illustrations are shown for various non-convex, state-dependent Hamilton-Jacobi partial differential equations in both the traditional and mean-field settings.