Neural network verification as piecewise linear optimization

Joseph Huchette
Rice University

Neural networks are powerful tools for prediction in important domains such as image classification and machine translation. However, neural networks are often susceptible to imperceptible "attacks": small input perturbations that can lead to unexpected behavior, posing a substantial risk when such models are deployed in a mission-critical capacity. Neural network verification aims to prove (or disprove) that a trained model is robust against such attacks. This talk focuses on framing neural network verification as piecewise linear optimization problems that can be modeled and solved using linear programming (LP) and mixed-integer programming (MIP). First, we present a framework for producing the strongest possible LP and MIP formulations for individual neurons with convex piecewise linear activations. We apply this framework to trained ReLU networks and show that our methods can solve verification tasks more quickly, and with a lower false negative rate, than comparable approaches. Second, we study verification of binarized neural networks, where the discontinuous activations are far less hospitable to LP-based methods. Nonetheless, we derive cutting planes that can reduce MIP solve time and search tree size by enforcing consistency within the branch-and-bound tree.
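For context, the following is a minimal sketch (not drawn from the talk itself) of the textbook big-M MIP formulation for a single ReLU neuron $y = \max\{0, w^\top x + b\}$, assuming known pre-activation bounds $L \le w^\top x + b \le U$ with $L < 0 < U$; the framework in the first part of the talk produces formulations stronger than this baseline:

\[
\begin{aligned}
& y \ge w^\top x + b, \qquad y \ge 0, \\
& y \le w^\top x + b - L(1 - z), \qquad y \le U z, \\
& z \in \{0, 1\}.
\end{aligned}
\]

Here $z$ indicates whether the neuron is active: $z = 0$ forces $y = 0$, while $z = 1$ forces $y = w^\top x + b$. Relaxing $z \in \{0,1\}$ to $z \in [0,1]$ yields the corresponding LP relaxation, and the gap between this relaxation and the convex hull of the neuron's input-output graph is what stronger formulations aim to close.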

The work presented in this talk is joint with Ross Anderson, Bochuan Lyu, Will Ma, Krunal Patel, Christian Tjandraatmadja, and Juan Pablo Vielma.

Presentation (PDF File)
View on YouTube
