The task of computing a low-rank approximation to a given matrix is ubiquitous in both data analysis and scientific computing. In particular, it is a core component of many algorithms for data compression. This task has long been challenging to implement at scale, since traditional algorithms such as Krylov methods or column-pivoted QR build the factorization one vector at a time, accessing the whole matrix at each step. The talk will survey recent work that uses randomized projections to dramatically reduce the number of times the matrix needs to be accessed. We will also describe recent work on how randomization can be used to accelerate the computation of full (as opposed to partial) matrix factorizations, and the compression of rank-structured matrices.
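To make the idea concrete, here is a minimal sketch of the randomized-projection approach (in the style of the well-known randomized range finder); the function name, the oversampling parameter `p`, and the test sizes are illustrative choices, not details from the talk. The key point is that the matrix is touched only twice, rather than once per computed vector:

```python
import numpy as np

def randomized_low_rank(A, k, p=10, rng=None):
    """Illustrative sketch: compute Q, B with A ~ Q @ B, where Q has
    k + p orthonormal columns (k = target rank, p = oversampling)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Draw a Gaussian test matrix and form the sketch Y = A @ Omega.
    # This is the first of only two passes over A.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    # Orthonormalize the sketch to get a basis for the approximate range of A.
    Q, _ = np.linalg.qr(Y)
    # Project A onto that basis: the second and final pass over A.
    B = Q.T @ A
    return Q, B

# Usage: a matrix of exact rank 5 is recovered to near machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
Q, B = randomized_low_rank(A, k=5)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

Because the two passes over `A` are plain matrix-matrix products, they parallelize and stream far better than the one-vector-at-a-time accesses of Krylov or pivoted-QR methods.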
Workshop I: Big Data Meets Large-Scale Computing