Numerical tensor techniques have become a well-established tool for solving computational challenges in several areas of science and engineering. In parametric or stochastic settings, the solutions of PDE simulation and optimization problems can be represented as multi-dimensional arrays, whose coordinate directions correspond to the spatial dimensions, time, and the directions in parameter space. Due to the curse of dimensionality, such a solution representation quickly becomes prohibitively memory-intensive. Low-rank tensor techniques have therefore recently been established that overcome this difficulty in all-at-once schemes, which solve parametric or stochastic PDE problems in a single pass. These methods exploit the frequently observed smoothness of the parameter-to-solution map. For them to be effective, the entire solver must be implemented in a low-rank tensor format, so that the full solution tensor never needs to be stored.
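As an illustrative sketch (not part of the abstract itself), the following toy example shows the core idea: a parametric "solution" sampled on a grid in space, time, and one parameter has low-rank structure when the parameter-to-solution map is smooth, and a tensor-train (TT) decomposition, computed here by sequential truncated SVDs, stores it with far fewer entries than the full array. All function names and grid sizes below are assumptions chosen for the demonstration.

```python
import numpy as np

# Hypothetical parametric "solution" u(x, t, p) on a 64^3 grid, built as a
# short sum of separable terms -- a stand-in for the smooth
# parameter-to-solution maps that make low-rank compression work.
n = 64
x = np.linspace(0, 1, n)
t = np.linspace(0, 1, n)
p = np.linspace(0, 1, n)
U = (np.sin(np.pi * x)[:, None, None] * np.exp(-t)[None, :, None] * (1 + p)[None, None, :]
     + np.cos(np.pi * x)[:, None, None] * t[None, :, None] * p[None, None, :] ** 2)

def tt_svd(T, tol=1e-10):
    """TT-SVD: sweep of truncated SVDs turning a full tensor into TT cores."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        Uk, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))   # truncate small singular values
        cores.append(Uk[:, :rk].reshape(r, dims[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

cores = tt_svd(U)
full_storage = U.size
tt_storage = sum(c.size for c in cores)
print(full_storage, tt_storage)  # the TT cores need far fewer entries
```

In an actual low-rank solver, of course, the full tensor `U` is never formed; all operations act directly on the cores, which is precisely the point made in the abstract.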
We will show how this general concept can be applied to optimal control and optimization problems for parametric and random PDEs, and, if time permits, we will also present preliminary ideas on applying similar techniques to stochastic PDE eigenvalue problems and to statistical inverse problems for PDE models. Numerical examples illustrate the effectiveness of low-rank tensor techniques for these problem classes. We show that the resulting tensor equations with $10^8$ to $10^{15}$ unknowns can be solved without HPC technology when low-rank representations are employed, and we will discuss what might become possible once efficient HPC implementations of the suggested concepts are available.
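A back-of-the-envelope calculation (with assumed figures, not taken from the abstract) shows why problem sizes around $10^{15}$ can fit on a workstation: a tensor with $d$ modes of size $n$ has $n^d$ entries, while a tensor-train representation with all ranks bounded by $r$ stores only about $d\,n\,r^2$ numbers.

```python
# Assumed illustrative figures: 10 modes of size 32 give 32**10 ~ 1.1e15
# entries in the full tensor, while a TT representation with uniform rank
# r = 50 stores roughly d * n * r**2 numbers (the boundary cores are smaller).
d, n, r = 10, 32, 50
full_entries = n ** d
tt_entries = d * n * r ** 2
print(f"{full_entries:.2e} full entries  vs  {tt_entries:.2e} in TT format")
```

With these numbers the TT storage is on the order of $10^6$ entries, roughly nine orders of magnitude below the full tensor, which is consistent with the abstract's claim that such systems are tractable without HPC resources.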