Much recent work in theoretical computer science, numerical linear algebra, machine learning, and statistical data analysis has exploited randomness as a computational resource to develop algorithms that provably approximate the solutions to common matrix problems. Applications of such algorithms include: qualitatively faster algorithms for ℓ2 and ℓp regression; the computation of coresets for common regression and optimization problems; improved feature and/or variable selection; faster and/or sparser computation of low-rank matrix approximations; speedups for kernel-based statistical learning; and improved scalability and interpretability of data analysis methods. We will describe recent algorithmic developments, with an emphasis on algorithms that come with provable quality-of-approximation bounds. We will also discuss recent applications of these ideas to the analysis of very large scientific and Internet data sets.
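To make the first of these applications concrete: many of the randomized regression speedups follow a sketch-and-solve pattern, in which a tall least-squares problem is randomly projected down to a much smaller one that is then solved exactly. The following is a minimal NumPy sketch of that pattern, using a dense Gaussian sketching matrix; the function name, dimensions, and sketch size are illustrative assumptions, not details drawn from the talk (in practice a fast transform or sparse sketch replaces the dense Gaussian so that the projection itself is cheap).

```python
import numpy as np

def sketched_least_squares(A, b, sketch_size, seed=None):
    """Approximate argmin_x ||Ax - b||_2 by solving a randomly sketched problem."""
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    # Dense Gaussian sketch; the 1/sqrt(m) scaling preserves norms in expectation.
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    # Solve the small (sketch_size x d) least-squares problem exactly.
    x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_hat

# Tall, skinny problem: far more observations than features.
rng = np.random.default_rng(0)
A = rng.standard_normal((20_000, 50))
b = A @ rng.standard_normal(50) + 0.01 * rng.standard_normal(20_000)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch = sketched_least_squares(A, b, sketch_size=1_000, seed=1)

# Residual ratio is close to 1: the sketched solution is near-optimal.
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```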
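The low-rank approximation results rest on a related idea, randomized range finding: multiply the matrix by a random test matrix to capture its dominant column space, then compute an exact factorization within that small subspace. Below is a minimal sketch of this approach; the oversampling parameter and test dimensions are again illustrative assumptions rather than the specific algorithms of the talk.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, seed=None):
    """Rank-`rank` approximation of A via randomized range finding."""
    rng = np.random.default_rng(seed)
    # Random test matrix samples the dominant column space of A.
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    B = Q.T @ A                      # small projected matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :rank], s[:rank], Vt[:rank, :]

# Test matrix with rapidly decaying singular values.
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((2_000, 100)))
V0, _ = np.linalg.qr(rng.standard_normal((500, 100)))
A = (U0 * 0.9 ** np.arange(100)) @ V0.T

Uk, sk, Vtk = randomized_low_rank(A, rank=10, seed=1)
# Frobenius error of the rank-10 approximation, near the optimal truncated-SVD error.
print(np.linalg.norm(A - (Uk * sk) @ Vtk))
```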