Traditional inverse problem solvers in imaging minimize a
cost function consisting of a data-fit term, which measures how well
an image matches the observations, and a regularizer, which reflects
prior knowledge and promotes images with desirable properties like
smoothness. Recent advances in machine learning and image processing
have shown that a regularizer learned from training data can often
outperform these more traditional regularizers. In
this talk, I will describe various classes of approaches to learned
regularization, ranging from generative models to unrolled
optimization perspectives, and explore their relative merits and tradeoffs.
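
For reference (the notation below is mine, not part of the abstract), the classical variational formulation described above takes the form

    \hat{x} \in \arg\min_x \; \tfrac{1}{2} \| A x - y \|_2^2 + \lambda R(x),

where y is the observed data, A models the acquisition process, R is the regularizer (for example, a smoothness-promoting penalty), and \lambda > 0 balances data fit against the prior.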
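
As a very rough sketch of the unrolled-optimization flavor of learned regularization (my own illustration, assuming PyTorch; the names LearnedProxStep and UnrolledSolver and all hyperparameters are hypothetical, not the speaker's method): a fixed number of gradient steps on the data-fit term is interleaved with a small learned network that plays the role of the regularizer's proximal step, and the whole pipeline can be trained end to end on paired examples.

# Minimal, hypothetical sketch of an unrolled solver with a learned
# regularization step; not the speaker's implementation.
import torch
import torch.nn as nn

class LearnedProxStep(nn.Module):
    """Small CNN standing in for the regularizer's proximal operator."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual form: the network predicts a correction to x.
        return x + self.net(x)

class UnrolledSolver(nn.Module):
    """K iterations of: gradient step on the data-fit term, then a learned step."""
    def __init__(self, forward_op, adjoint_op, num_iters=5, step_size=0.1):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.step_size = nn.Parameter(torch.tensor(step_size))
        self.prox_steps = nn.ModuleList(LearnedProxStep() for _ in range(num_iters))

    def forward(self, y):
        x = self.At(y)  # simple initialization: adjoint applied to the data
        for prox in self.prox_steps:
            grad = self.At(self.A(x) - y)        # gradient of 0.5 * ||Ax - y||^2
            x = prox(x - self.step_size * grad)  # learned regularization step
        return x

if __name__ == "__main__":
    # Toy usage: denoising (A = identity) on a random 32x32 image.
    A = At = lambda z: z
    model = UnrolledSolver(A, At, num_iters=3)
    y = torch.randn(1, 1, 32, 32)
    x_hat = model(y)
    print(x_hat.shape)  # torch.Size([1, 1, 32, 32])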