Efficiently solving real-world optimization problems remains a highly nontrivial challenge. In particular, how do we deal with objectives that are highly nonconvex and slow or expensive to evaluate? How do we make use of previously solved instances to accelerate the optimization of unseen ones? How do we leverage the power of artificial intelligence and existing solvers? In this talk, we introduce two recent works. First, for nonconvex objectives with combinatorial constraints, we propose SurCo, which learns a surrogate linear cost with a neural network so that optimizing the learned cost leads to a solution that is good for the original objective. SurCo leverages previously solved instances and existing SoTA solvers, and shows strong performance in real-world applications such as embedding table sharding and inverse photonic design. Second, for specific objectives such as the frequency response of a linear PDE, which is often slow to evaluate, we propose CZP, which leverages an analytical parametric form of the response that can be learned with Transformers. CZP is fairly accurate in the application of antenna design, whose objective is a nonlinear function of the frequency response. Combined with reinforcement learning, CZP can be used to find valid designs verifiable by commercial software.
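To make the SurCo idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation): a neural network predicts a linear cost per item, a toy top-k "solver" optimizes that surrogate cost, and gradients reach the network through a straight-through trick so the solver's output improves under an assumed nonconvex objective. All names (true_objective, topk_solver, the straight-through estimator) are illustrative assumptions.

```python
# Conceptual sketch of learning a surrogate linear cost whose solver output
# performs well under a nonconvex objective (illustrative toy, not SurCo itself).
import torch

def true_objective(selection, weights):
    # Hypothetical nonconvex objective over a binary selection vector.
    load = selection @ weights
    return (load - 1.0) ** 2 + 0.1 * torch.sin(5.0 * load)

def topk_solver(cost, k):
    # Toy combinatorial "solver": pick the k items with the lowest surrogate cost.
    idx = torch.topk(-cost, k).indices
    sel = torch.zeros_like(cost)
    sel[idx] = 1.0
    return sel

torch.manual_seed(0)
n, k = 20, 5
weights = torch.rand(n)
features = torch.rand(n, 3)            # per-item features describing the instance
surrogate = torch.nn.Linear(3, 1)      # predicts one surrogate cost per item
opt = torch.optim.Adam(surrogate.parameters(), lr=0.05)

for step in range(200):
    cost = surrogate(features).squeeze(-1)
    sel_hard = topk_solver(cost.detach(), k)
    # Straight-through estimator: forward pass uses the hard solver output,
    # backward pass uses a soft relaxation of the surrogate cost.
    soft = torch.softmax(-cost, dim=0) * k
    sel = sel_hard + soft - soft.detach()
    loss = true_objective(sel, weights)
    opt.zero_grad()
    loss.backward()
    opt.step()

final_sel = topk_solver(surrogate(features).squeeze(-1), k)
print("final objective:", true_objective(final_sel, weights).item())
```

In this sketch, the straight-through relaxation stands in for whichever differentiable-solver technique is actually used; the point is only that the surrogate linear cost, not the nonconvex objective, is what the existing solver sees.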