Cyber-Physical Systems (CPS) increasingly incorporate what one can call Learning-Enabled Components (LEC) to implement complex functions. By LEC we mean a component (typically, but not exclusively, implemented in software) that is realized with the help of data-driven techniques such as machine learning. For example, an LEC in an autonomous car can implement a lane-follower function: one trains an appropriate convolutional neural network on a stream of road images as input, with the observed steering actions of a human driver as the target output. The claim is that such an LEC, built via supervised learning, is easier to implement than a very complex, image-processing-driven control system that steers the car so that it stays on the road. In other words, when straightforward design and engineering is too difficult, a neural network can do the job, given a sufficient amount of training.
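To make the lane-follower example concrete, the sketch below outlines the supervised-learning setup, assuming a PyTorch-style pipeline; the network architecture, image size, and hyperparameters are illustrative placeholders, not the design used in any particular project.

```python
# Minimal behavioral-cloning sketch (illustrative only): a small CNN maps a
# road image to a steering angle, trained against steering values recorded
# from a human driver. Model size, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class LaneFollower(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single output: steering angle

    def forward(self, image):
        return self.head(self.features(image).flatten(1))

# Placeholder data standing in for (road image, human steering angle) pairs.
images = torch.randn(256, 3, 66, 200)
steering = torch.randn(256, 1)
loader = DataLoader(TensorDataset(images, steering), batch_size=32, shuffle=True)

model = LaneFollower()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Standard supervised-learning loop: minimize the error between the network's
# predicted steering and the steering demonstrated by the human driver.
for epoch in range(10):
    for batch_images, batch_steering in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_steering)
        loss.backward()
        optimizer.step()
```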
For high-consequence systems the challenge is to prove that the resulting system is safe (it does no harm) and live (it accomplishes its goals). Safety is perhaps the foremost problem for autonomous vehicles, especially those that operate in a less regulated environment, such as the public road network. The traditional technology for arguing the safety of systems is based on extensively documented but often informal arguments, which are very hard to apply to CPS with LECs.
The talk will focus on the results of our recently started project that aims at changing this paradigm by introducing (1) formal verification techniques wherever possible (including proving properties of the 'learned' component), (2) monitoring technology for assurance that indicates when the LEC is not performing well, and (3) a formalized safety-case argumentation process that can be evaluated dynamically. The application target is autonomous vehicles that make significant, though not exclusive, use of LECs. The goal is to construct an engineering process and a supporting toolchain that can be used for the systematic assurance of CPS with LECs.
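As an illustration of the monitoring thrust, runtime assurance is often organized in a Simplex-style pattern: a monitor watches the LEC at run time and hands control to a simpler, verified fallback controller when the LEC appears to be operating outside its competence. The sketch below is a minimal, hypothetical rendering of that pattern; the monitor condition, thresholds, and controller interfaces are assumptions, not the project's actual toolchain.

```python
# Illustrative Simplex-style runtime assurance loop (hypothetical interfaces):
# a monitor gates the learning-enabled controller and falls back to a simple
# verified controller when the LEC's output looks untrustworthy.
from dataclasses import dataclass

@dataclass
class Decision:
    steering: float
    source: str  # "LEC" or "fallback"

def lec_controller(image) -> tuple[float, float]:
    """Placeholder for the learned lane follower.

    Returns (steering_angle, confidence); in practice the confidence could
    come from an ensemble, a softmax margin, or an out-of-distribution score.
    """
    return 0.05, 0.92

def fallback_controller(lane_offset: float) -> float:
    """Simple verified controller: proportional correction toward lane center."""
    return -0.5 * lane_offset

def assurance_monitor(confidence: float, steering: float,
                      max_steering: float = 0.4,
                      min_confidence: float = 0.8) -> bool:
    """Return True if the LEC output is acceptable, False to trigger fallback."""
    return confidence >= min_confidence and abs(steering) <= max_steering

def control_step(image, lane_offset: float) -> Decision:
    steering, confidence = lec_controller(image)
    if assurance_monitor(confidence, steering):
        return Decision(steering, "LEC")
    return Decision(fallback_controller(lane_offset), "fallback")

if __name__ == "__main__":
    print(control_step(image=None, lane_offset=0.1))
```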