First, the talk discusses a quantum algorithm in the context of computational learning theory. We introduce the Quantum Alphatron, an adaptation of the classical Alphatron algorithm that provides a polynomial speedup for kernelized regression and for learning two-layer neural networks, while preserving the classical theoretical guarantees. Second, the talk discusses the learning of quantum states with graphical models. We consider learning states that are close to neural-network quantum states, which are efficiently represented by restricted Boltzmann machines (RBMs). To this end, we exhibit robustness results for efficient, provable two-hop neighborhood learning algorithms for ferromagnetic and locally consistent RBMs. These results allow certain quantum states to be learned with a sample complexity exponentially better than that of naive tomography.
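For readers unfamiliar with the classical algorithm that the Quantum Alphatron adapts, here is a minimal sketch of an Alphatron-style kernelized update in the spirit of Goel and Klivans: the hypothesis has the form f(x) = u(Σ_j α_j K(x_j, x)) for a monotone transfer function u, and the coefficients α are updated additively from the residuals. The RBF kernel, learning rate, and toy data below are illustrative assumptions, not the talk's exact setup.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two point sets (an illustrative choice)."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-gamma * d2)

def alphatron(X, y, u=lambda z: 1 / (1 + np.exp(-z)), lr=0.5, steps=200):
    """Fit f(x) = u(sum_j alpha_j K(x_j, x)) via additive Alphatron-style updates."""
    m = len(y)
    K = rbf_kernel(X, X)
    alpha = np.zeros(m)
    for _ in range(steps):
        pred = u(K @ alpha)            # current hypothesis on the training sample
        alpha += (lr / m) * (y - pred) # push each alpha_i toward its residual
    return alpha

# Toy usage: learn a sigmoid of a linear function, the single-index model
# setting in which the Alphatron has provable guarantees.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = 1 / (1 + np.exp(-(X @ np.array([1.5, -2.0]))))
alpha = alphatron(X, y)
pred = 1 / (1 + np.exp(-(rbf_kernel(X, X) @ alpha)))
mse = np.mean((pred - y) ** 2)
```

The quantum version discussed in the talk speeds up this kind of kernelized iteration polynomially; the sketch above only conveys the classical update it starts from.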