Outer-loop applications, such as optimization, control, uncertainty quantification, and inference, form a loop around a computational model and evaluate the model in each iteration of the loop at different inputs, parameter configurations, and coefficients. Using a high-fidelity model in each iteration of the loop guarantees high accuracy but often quickly exceeds available computational resources because evaluations of high-fidelity models are typically computationally expensive. Replacing the high-fidelity model with a low-cost, low-fidelity model can lead to significant speedups but introduces an approximation error that is often hard to quantify and control. We introduce multifidelity methods that combine the high-fidelity model with low-fidelity models instead of replacing it. The overall premise of our multifidelity methods is that low-fidelity models are leveraged for speedup while occasional recourse is made to the high-fidelity model to establish accuracy guarantees.

The focus of this talk is the multifidelity Monte Carlo method, which samples low- and high-fidelity models to accelerate the Monte Carlo estimation of statistics of the high-fidelity model outputs. Our analysis shows that the multifidelity Monte Carlo method is optimal in the sense that the mean-squared error of the multifidelity estimator is minimized for the available computational resources. We provide a convergence analysis, prove that adapting the low-fidelity models to the Monte Carlo sampling reduces the mean-squared error, and give an outlook on multifidelity rare event simulation. Our numerical examples demonstrate that multifidelity Monte Carlo estimation provides unbiased estimators ("accuracy guarantees") and achieves speedups of orders of magnitude compared to crude Monte Carlo estimation that uses a single model alone.
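To make the sampling scheme concrete, below is a minimal two-model sketch in Python/NumPy (not code from the talk): the toy models f_hi and f_lo, the sample sizes m_hi and m_lo, and the pilot estimate of the control-variate coefficient alpha are all illustrative assumptions.

```python
# Minimal sketch of a two-model multifidelity Monte Carlo estimator.
# All concrete choices here (models, sample sizes, pilot estimate of
# alpha) are illustrative assumptions, not part of the original talk.
import numpy as np

rng = np.random.default_rng(0)

def f_hi(z):
    # hypothetical expensive high-fidelity model
    return np.exp(-z**2) * np.sin(3.0 * z)

def f_lo(z):
    # hypothetical cheap, correlated low-fidelity surrogate
    return (1.0 - z**2) * np.sin(3.0 * z)

m_hi, m_lo = 100, 10_000  # m_hi <= m_lo; low-fidelity evaluations are cheap

# Nested sampling: the first m_hi inputs are shared by both models.
z = rng.standard_normal(m_lo)
y_hi = f_hi(z[:m_hi])
y_lo = f_lo(z)

# Control-variate coefficient alpha = rho * sigma_hi / sigma_lo,
# estimated here from the shared samples (a common practical shortcut;
# for a fixed alpha the correction term below has exactly zero mean,
# so the estimator is unbiased).
cov = np.cov(y_hi, y_lo[:m_hi])
alpha = cov[0, 1] / cov[1, 1]

# Multifidelity estimate: high-fidelity sample mean plus a correction
# from the low-fidelity model evaluated on many more samples.
s_mf = y_hi.mean() + alpha * (y_lo.mean() - y_lo[:m_hi].mean())

print(f"Multifidelity MC estimate: {s_mf:.5f}")
print(f"Crude MC with the same {m_hi} high-fidelity samples: {y_hi.mean():.5f}")
```

When the two models are strongly correlated, the correction term removes most of the variance of the crude estimate, which is the source of the speedups discussed above.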
This is joint work with Karen Willcox and Max Gunzburger.