A lot of theoretical legwork in quantum machine learning is spent on analysing parametrised quantum circuits trained by gradient descent methods. But are these really the fundamental models that we will run on future quantum computers, or are we firing our mathematical arsenal at the most obvious -- but ultimately useless -- design idea? In this talk I want to share preliminary results from two ongoing studies. The first casts doubt on the much-celebrated performance of "quantum over classical" by conducting a systematic comparison of popular quantum models that has, so far, produced astonishing outcomes. The second asks how core routines of quantum computing, such as Shor's algorithm, can be investigated from the perspective of generalisation, in the hope of carving out a very different way of thinking about the intersection of quantum computing and machine learning.