Derivative-Free Optimization (DFO), also known as black-box optimization, is the area of optimization that deals with functions whose explicit form is unknown. It is assumed that function values can be computed (approximately, and usually at high cost) but that derivative information is not available. We will give an overview of some of the recent advances in this area and show how classical gradient-type and second-order optimization methods are adapted to this setting. We will also highlight some applications of DFO methods in reinforcement learning.
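As a concrete illustration of how a gradient-type method can be adapted when only function values are available, here is a minimal sketch of gradient descent driven by a forward finite-difference gradient estimate. This is not the specific scheme presented in the talk; the function names `fd_gradient` and `dfo_gradient_descent`, the test function, and all parameter values are illustrative assumptions. (In reinforcement-learning applications, randomized estimators such as Gaussian smoothing are often used instead, but the finite-difference version shows the basic idea.)

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Estimate the gradient of f at x by forward finite differences.

    Each coordinate needs one extra function evaluation, so a single
    gradient estimate costs n + 1 evaluations of f -- expensive when f
    itself is costly, which is the typical DFO regime.
    """
    fx = f(x)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - fx) / h
    return grad

def dfo_gradient_descent(f, x0, step=0.1, iters=100, h=1e-6):
    """Plain gradient descent that uses only function evaluations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * fd_gradient(f, x, h)
    return x

if __name__ == "__main__":
    # Illustrative test problem: minimize a simple quadratic, whose
    # minimizer is x = (3, ..., 3), using only function values.
    f = lambda x: np.sum((x - 3.0) ** 2)
    print(dfo_gradient_descent(f, x0=np.zeros(5)))
```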