Deep learning systems have achieved remarkable performance on many tasks, but it is notoriously hard to ensure that the resulting models obey hard constraints, as is often required in control applications. In this talk, I will present several recent works on enforcing different types of constraints within deep learning systems. In particular, I will highlight recent work on integrating general convex optimization problems as layers within deep networks, on learning networks guaranteed to represent convex functions, and on learning deep dynamical systems that enforce global stability of the nonlinear dynamics. In all cases, I will highlight ways in which we can design the network structure to encode these implicit biases, in a manner that lets us easily enforce these hard constraints.
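As a concrete illustration of the first idea, the sketch below embeds a small non-negative least-squares problem as a differentiable layer using the open-source cvxpylayers library (one implementation of convex optimization layers; the specific problem is a hypothetical example chosen for brevity, not one drawn from the talk). Gradients of the solver's output flow back to the problem parameters A and b, so the layer can sit anywhere inside a network.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# A tiny convex problem: non-negative least squares in x,
# parameterized by A and b (quantities an upstream network could produce).
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(
    cp.Minimize(0.5 * cp.pnorm(A @ x - b, p=2)),
    [x >= 0],
)
assert problem.is_dpp()  # required for differentiable parameterization

# Wrap the problem as a PyTorch layer mapping (A, b) -> argmin_x.
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_t = torch.randn(m, n, requires_grad=True)
b_t = torch.randn(m, requires_grad=True)
solution, = layer(A_t, b_t)  # forward pass: solve the convex problem
solution.sum().backward()    # backward pass: gradients w.r.t. A and b
```

For the second idea, networks guaranteed to represent convex functions, here is a minimal sketch in the spirit of input convex neural networks: non-negative weights on the hidden-to-hidden path, combined with convex non-decreasing activations, make the output convex in the input. The layer sizes and the weight-clamping scheme are illustrative choices on my part, not the exact architecture from the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputConvexNet(nn.Module):
    """f(x) convex in x by construction: each hidden layer applies a convex,
    non-decreasing activation (ReLU) to a non-negatively weighted combination
    of the previous convex layer, plus an unconstrained affine term in x."""

    def __init__(self, dim_in: int, dim_hidden: int):
        super().__init__()
        self.x0 = nn.Linear(dim_in, dim_hidden)
        self.z1 = nn.Linear(dim_hidden, dim_hidden, bias=False)  # kept >= 0
        self.x1 = nn.Linear(dim_in, dim_hidden)
        self.z2 = nn.Linear(dim_hidden, 1, bias=False)           # kept >= 0
        self.x2 = nn.Linear(dim_in, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.relu(self.x0(x))
        # clamping enforces the non-negativity that preserves convexity
        z = F.relu(F.linear(z, self.z1.weight.clamp(min=0)) + self.x1(x))
        return F.linear(z, self.z2.weight.clamp(min=0)) + self.x2(x)

f = InputConvexNet(dim_in=4, dim_hidden=16)
y = f(torch.randn(8, 4))  # scalar output, convex in each input
```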