Increasingly large data sets are being ingested and produced by simulations. What experience from large-scale simulation is transferable to big data applications? Conversely, what new algorithms, motivated by data-intensive applications being pushed to large scales, will emerge? How will they enrich traditional simulation? As long as the software stacks, production facilities, and even developer and user communities remain separate, many opportunities for mutual enhancement will go unrealized. This workshop will discuss:
- steering in high-dimensional parameter space
- smart data compression
- data-driven modeling (e.g., refinement of empirical functions through learning)
- physics-based “regularization” of analytics
- simulation as a source of training data
- learning to impute missing data
The workshop will bring together analysts and developers of computationally and data-intensive applications who are interested in early exploitation of extreme-scale computing platforms, with the aim of defining common ground and identifying new opportunities.
The workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.
Hans-Joachim Bungartz, Chair
(Technical University of Munich (TUM))
Emmanuel Candès
(Stanford University)
Chris Johnson
(University of Utah)
David Keyes
(King Abdullah University of Science and Technology (KAUST))
Marina Meila
(University of Washington)