The present era of science can be said to have begun with the advent of computers and large data sets. For more than half a century, the tools of numerical analysis have required repeated honing to keep up with the explosion of data and information generated in our technological world. The next step in this evolution has been the rise of massive networks (e.g. the internet and the world wide web), which has created a concomitant demand for new, fast algorithms for problems such as PageRank computation and web crawling. Increasingly, large data sets are no longer restricted to classical scientific domains but arise in virtually all fields, including finance, economics, social networks, law, and the humanities. What is now beginning to emerge is the next generation of numerical algorithms for sorting, ordering, and otherwise extracting knowledge in a wide variety of settings. As the information sciences expand and integrate with other disciplines, the need for these tools has become especially acute. These modern numerical problems share a common requirement: scalable algorithms that are also robust, i.e. that come with good error estimates. The development of fast algorithms in the period 1980–2000 laid the groundwork for today's challenges in numerical linear algebra, but new methods are now needed. This workshop will bring together researchers from various disciplines to discuss advances in the following topics:
Ming Gu
(University of California, Berkeley)
Piotr Indyk
(Massachusetts Institute of Technology)
Yann LeCun, Chair
(New York University)
Vladimir Rokhlin
(Yale University)
Sam Roweis
(University of Toronto)
Andrew Zisserman
(University of Oxford)