Approximately convex optimization is a fundamental problem in robust optimization, in which the objective function is only guaranteed to be pointwise close to an unknown convex function. It provides a foundation for understanding non-convex optimization more generally. In this talk, I will first give a brief introduction to quantum walks. Then, I will discuss how an efficient quantum sampling procedure yields a polynomial quantum speedup for optimizing approximately convex functions. As an application, I will present a quantum algorithm for zeroth-order stochastic convex bandits that exponentially reduces the dependence of the regret on the number of rounds compared to the classical lower bound.