We aim to introduce a deep learning framework for scientific data analysis that can seamlessly synthesize structured prior knowledge (e.g., differential equations, conservation laws, etc.) with data of variable fidelity and multiple modalities (e.g., model predictions at multiple scales/resolutions, images, time series, scattered measurements, etc.). The setting we are interested in involves complex systems that are partially observed and whose dynamical behavior may be hard to model or entirely unknown. The inherent uncertainty associated with this setting necessitates a departure from the classical deterministic realm of modeling and scientific computation; consequently, our main building blocks can no longer be crisp deterministic numbers and governing laws, and we must instead operate with probabilistic models. In this talk we will see how knowledge of the underlying physics (e.g., conservation of mass, momentum, etc.) can enhance the efficiency and robustness of deep learning algorithms in small-data regimes, and allow us to effectively synthesize numerical simulations, physical experiments, and online databases of variable fidelity toward predicting complex dynamics from incomplete models and incomplete data. Leveraging recent advances in variational inference and adversarial learning, we will also demonstrate how these predictive tools can be endowed with robust uncertainty estimates that quantify model inadequacy and guide the judicious acquisition of new data.
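To make the idea of blending structured prior knowledge with scattered data concrete, the following is a minimal sketch of a physics-informed loss written in JAX. It is not the framework presented in the talk: the small network architecture, the choice of a linear advection law u_t + c u_x = 0 as the conservation constraint, the wave speed c, and all function names are illustrative assumptions. The loss combines a misfit on sparse measurements with a penalty on the residual of the governing equation at collocation points, which is one way physics can regularize learning when data are scarce.

```python
# Minimal sketch (not the speaker's code) of a physics-informed loss in JAX.
# A small network u(t, x) is fit to scattered observations while also being
# penalized for violating an assumed conservation law, u_t + c * u_x = 0.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 32, 32, 1)):
    """Random weights and biases for a small fully connected network."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def u(params, t, x):
    """Network surrogate u(t, x) -> scalar prediction."""
    h = jnp.array([t, x])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def pde_residual(params, t, x, c=1.0):
    """Residual of the assumed conservation law u_t + c * u_x = 0."""
    u_t = jax.grad(u, argnums=1)(params, t, x)
    u_x = jax.grad(u, argnums=2)(params, t, x)
    return u_t + c * u_x

def loss(params, data, collocation):
    """Data misfit on scattered measurements plus physics penalty."""
    t_d, x_d, y_d = data
    pred = jax.vmap(lambda t, x: u(params, t, x))(t_d, x_d)
    data_loss = jnp.mean((pred - y_d) ** 2)
    t_c, x_c = collocation
    res = jax.vmap(lambda t, x: pde_residual(params, t, x))(t_c, x_c)
    phys_loss = jnp.mean(res ** 2)
    return data_loss + phys_loss

# Example usage with synthetic scattered data (hypothetical values):
key = jax.random.PRNGKey(0)
params = init_params(key)
data = (jnp.linspace(0.0, 1.0, 20), jnp.linspace(0.0, 1.0, 20),
        jnp.sin(jnp.linspace(0.0, 1.0, 20)))
colloc = (jax.random.uniform(key, (100,)), jax.random.uniform(key, (100,)))
grads = jax.grad(loss)(params, data, colloc)  # plug into any optimizer
```

The physics term acts as a soft constraint: collocation points need no measurements, so the governing law supplies information exactly where data are missing. Endowing such a model with the uncertainty estimates mentioned above would additionally require a probabilistic treatment of the network weights or outputs, for example via variational inference, which this deterministic sketch omits.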