The ultimate promise of robotics is to design devices that can physically interact with the world. To date, robots have been deployed primarily in highly structured and predictable environments. However, we envision the next generation of robots (ranging from self-driving and self-flying vehicles to robot assistants) operating in unpredictable and generally unknown environments alongside humans. This challenges current robot algorithms, which have been largely based on a priori knowledge about the system and its environment. While research has shown that robots can learn new skills from experience and adapt to unknown situations, these results have been mostly limited to learning single tasks and demonstrated in simulation or lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require versatile, data-efficient, online learning algorithms that guarantee safety when placed in a closed-loop system architecture. It will also require answering the fundamental question of how to design learning architectures for dynamic and interactive agents. This talk will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.