Computational and simulation-based science and engineering are undergoing a revolution driven by the advent of artificial intelligence and machine learning. The combination of simulation-based approaches with machine learning has given rise to the new field of simulation intelligence. Reinforcement learning, in turn, is transforming traditional approaches to control and automation, as well as game and decision theory.
In this rapidly evolving area, we work at the intersection of formal methods, control theory, cyber-physical systems, and artificial intelligence. We focus on autonomous cyber-physical systems and leverage formal specification languages (Signal Temporal Logic) either to design them to be safe (Safe Reinforcement Learning) or to preemptively monitor their safety (Predictive Monitoring). We further explore surrogate models for discrete stochastic systems based on generative deep learning. As complexity increases, we move towards probabilistic reasoning to handle uncertainty and intrinsic stochasticity. We also explore the application of reinforcement learning to combinatorial optimisation problems.
Keywords: parametric verification, predictive monitoring, learning model abstractions, multi-agent Reinforcement Learning, Reinforcement Learning with memory, analysis of multimodal stochastic systems, networks, mean-field approximation of stochastic models.
What follows is a reasoned bibliography, in which each class of contributions is briefly described and the relevant references are cited.
Smoothed Model Checking
Smoothed Model Checking is a parametric verification engine based on simulation and machine learning, for stochastic processes such as Continuous Time Markov Chains. It leverages the power of Bayesian machine learning to infer the satisfaction probability as a function of the model parameters and to estimate the uncertainty of the predictions, which can further be used in an active learning refinement cycle. It has been used for parameter synthesis. Originally implemented using Gaussian Processes, it has recently been scaled up using stochastic variational inference.
- Bortolussi, Luca; Milios, Dimitrios; Sanguinetti, Guido. Smoothed model checking for uncertain continuous-time Markov chains. In: Information and Computation, vol. 247, pp. 235–253, 2016. Link
- Bortolussi, Luca; Silvetti, Simone. Bayesian statistical parameter synthesis for linear temporal properties of stochastic models. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 396–413, Springer, Cham 2018. Link
- Bortolussi, Luca; Cairoli, Francesca; Carbone, Ginevra; Pulcini, Paolo. Scalable Stochastic Parametric Verification with Stochastic Variational Smoothed Model Checking. In: arXiv preprint arXiv:2205.05398, 2022. Link
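The core idea can be sketched in a few lines of numpy: fit a Gaussian-process regressor to empirical satisfaction frequencies returned by a statistical model checker, and use the predictive uncertainty to select the next parameter to query, as in an active-learning refinement cycle. Everything here is a simplifying assumption for illustration: the synthetic ground-truth function stands in for a real model checker, the kernel hyperparameters are arbitrary, and plain GP regression replaces the probit-link GP classification used in the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the satisfaction probability of a property for a CTMC
# is an unknown smooth function of a rate parameter theta. A synthetic
# sigmoid stands in for the true model checker here.
def true_sat_prob(theta):
    return 1.0 / (1.0 + np.exp(-(theta - 2.0) * 3.0))

# Simulate a statistical model checker: m Bernoulli samples per parameter.
thetas_train = np.linspace(0.5, 3.5, 8)
m = 50
freqs = np.array([rng.binomial(m, true_sat_prob(t)) / m for t in thetas_train])

# GP regression with an RBF kernel over the empirical satisfaction
# frequencies (the original method uses GP classification with a probit
# link; plain regression keeps the sketch short).
def rbf(a, b, ell=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

noise = freqs * (1 - freqs) / m + 1e-4          # binomial sampling variance
K = rbf(thetas_train, thetas_train) + np.diag(noise)
thetas_test = np.linspace(0.5, 3.5, 100)
Ks = rbf(thetas_test, thetas_train)
alpha = np.linalg.solve(K, freqs - 0.5)
mean = 0.5 + Ks @ alpha                          # posterior mean
cov = rbf(thetas_test, thetas_test) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))    # predictive uncertainty

# The uncertainty drives active learning: query where std is largest.
next_theta = thetas_test[np.argmax(std)]
```

In the scaled-up variant, stochastic variational inference replaces the exact GP posterior above, trading a small approximation error for scalability in the number of parameter points.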
Neural Predictive Monitoring
Neural Predictive Monitoring (NPM) focuses on efficient methods for predictive monitoring (PM), the problem of detecting at runtime future violations of a given requirement from the current state of a system. NPM tackles the scalability issue of performing model checking at runtime, which would offer a precise solution but is generally computationally expensive: it learns an approximate yet efficient (deep learning) surrogate of the expensive model checker. A key challenge is to ensure reliable predictions, especially in safety-critical applications. We apply conformal prediction (CP) to obtain rigorous uncertainty quantification with statistical guarantees on the generalization error of the learned model.
- Bortolussi, L., Cairoli, F., Paoletti, N., Smolka, S. A., & Stoller, S. D. (2021). Neural predictive monitoring and a comparison of frequentist and Bayesian approaches. International Journal on Software Tools for Technology Transfer, 23(4), 615-640. Link
- Cairoli, F., Paoletti, N., Bortolussi, L. (2023, May). Conformal quantitative predictive monitoring of STL requirements for stochastic processes. In Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control (pp. 1-11). Link
- Cairoli, F., Bortolussi, L., & Paoletti, N. (2023, October). Learning-based approaches to predictive monitoring with conformal statistical guarantees. In International Conference on Runtime Verification (pp. 461-487). Cham: Springer Nature Switzerland. Link
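A minimal sketch of the split conformal prediction step used to make such predictors reliable, under the hypothetical assumption that the learned reachability predictor outputs class probabilities for safe/unsafe: calibration computes a nonconformity threshold, and the resulting prediction sets enjoy marginal coverage of at least 1 − ε. The logistic stand-in for the trained network and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a learned reachability predictor: probability that the
# requirement will be violated from state x (class 0 = safe, 1 = unsafe).
def predictor(x):
    p = 1 / (1 + np.exp(-4 * (x - 0.5)))
    return np.stack([1 - p, p], axis=-1)

def sample_labels(x):
    return (rng.random(x.shape) < predictor(x)[:, 1]).astype(int)

# Split conformal prediction: nonconformity score = 1 - predicted
# probability of the true class, computed on a held-out calibration set.
n_cal = 500
x_cal = rng.random(n_cal)
y_cal = sample_labels(x_cal)
scores = 1 - predictor(x_cal)[np.arange(n_cal), y_cal]

eps = 0.1                                         # target error rate
qidx = int(np.ceil((n_cal + 1) * (1 - eps))) - 1  # order-statistic index
q = np.sort(scores)[min(qidx, n_cal - 1)]

def prediction_set(x):
    # Include every class whose nonconformity does not exceed the threshold.
    probs = predictor(np.atleast_1d(x))
    return [np.where(1 - probs[i] <= q)[0].tolist() for i in range(len(probs))]

# Marginal guarantee: the true label lies in the set with prob. >= 1 - eps.
x_test = rng.random(2000)
y_test = sample_labels(x_test)
sets = prediction_set(x_test)
coverage = np.mean([y in s for y, s in zip(y_test, sets)])
```

Ambiguous states near the decision boundary naturally receive larger prediction sets, which is exactly the signal used to flag unreliable monitor outputs at runtime.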
Safe Reinforcement Learning
We are currently developing learning-based approaches to synthesize safe and robust controllers for autonomous cyber-physical systems and, at the same time, to generate challenging tests or scenarios. This procedure combines formal methods for model verification with generative models.
- Bortolussi, L., Cairoli, F., Carbone, G., Franchina, F., & Regolin, E. (2021). Adversarial learning of robust and safe controllers for cyber-physical systems. IFAC-PapersOnLine, 54(5), 223-228. Link
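The adversarial scheme can be illustrated on a toy system, which is entirely hypothetical and not the setup of the paper: a proportional controller plays against a disturbance-generating falsifier, each best-responding in turn, with the requirement "always |x| ≤ 1" scored by a simple quantitative robustness value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discrete dynamics x' = x - k*x + d with proportional gain k and a
# bounded adversarial disturbance d. The robustness of "always |x| <= 1"
# is 1 - max_t |x_t|: positive iff the requirement is satisfied.
def robustness(k, d, x0=1.0):
    x, worst = x0, 0.0
    for dt in d:
        x = x - k * x + dt
        worst = max(worst, abs(x))
    return 1.0 - worst

def adversary(k, trials=200, bound=0.2, horizon=20):
    # Falsifier: random search for the disturbance minimizing robustness.
    ds = rng.uniform(-bound, bound, size=(trials, horizon))
    return min(ds, key=lambda d: robustness(k, d))

# Alternating game: the controller best-responds (grid search) to the
# current worst-case scenario, which is then refreshed by the falsifier.
gains = np.linspace(0.0, 2.0, 41)
k = 0.0
for _ in range(5):
    d_adv = adversary(k)
    k = max(gains, key=lambda g: robustness(g, d_adv))

# Worst-case robustness of the final controller against a fresh adversary.
robust_value = robustness(k, adversary(k))
```

In the actual line of work, the grid search over a scalar gain is replaced by gradient-based training of a neural controller, and the random-search falsifier by a generative model producing challenging scenarios.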
Surrogate Stochastic Models
Stochastic processes are often analysed by simulation, which can be costly for large or stiff systems, particularly when a massive number of simulations must be performed, e.g. in a multi-scale model. A strategy to reduce the computational load is to abstract the population model, replacing it with a simpler stochastic model that is faster to simulate. We pursue this idea, exploring and comparing state-of-the-art generative models, which are flexible enough to automatically learn distributions over entire trajectories, rather than single simulation steps, from observed realizations of the system. We also propose a method for efficient conditional sampling from such surrogate models, enforcing logical and consistency constraints on the generated samples in a soft fashion.
- Cairoli, F., Anselmi, F., d’Onofrio, A., & Bortolussi, L. (2023). Generative abstraction of Markov population processes. Theoretical Computer Science, 977, 114169. Link
- Bortolussi, L., Cairoli, F., Giacomarra, F., Scassola, D. (2023, September). Model Abstraction and Conditional Sampling with Score-Based Diffusion Models. In International Conference on Quantitative Evaluation of Systems (pp. 307-310). Cham: Springer Nature Switzerland. Link
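The abstraction pipeline can be sketched on a birth-death population model: simulate the ground truth with the Gillespie algorithm, fit a surrogate distribution over whole discretized trajectories, then sample cheaply, including softly conditioned samples. The multivariate Gaussian surrogate and the sigmoid-weighted constraint below are crude stand-ins, chosen only to keep the sketch self-contained; the cited work uses deep generative models (GANs, score-based diffusion) instead.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth population model: a birth-death process simulated with the
# Gillespie algorithm (constant birth rate b, per-capita death rate d).
def gillespie(x0=20, b=10.0, d=0.5, grid=np.linspace(0, 5, 26)):
    t, x, out, i = 0.0, x0, np.empty(len(grid)), 0
    while i < len(grid):
        rate = b + d * x
        t_next = t + rng.exponential(1 / rate)
        while i < len(grid) and grid[i] <= t_next:
            out[i] = x
            i += 1
        t = t_next
        x = x + 1 if rng.random() < b / rate else x - 1
    return out

# Crude trajectory surrogate: a multivariate Gaussian over the whole
# discretized trajectory, fitted from 300 expensive SSA realizations.
train = np.stack([gillespie() for _ in range(300)])
mu, cov = train.mean(axis=0), np.cov(train.T) + 1e-6 * np.eye(26)

# Sampling from the surrogate is now a cheap linear-algebra operation.
surrogate = rng.multivariate_normal(mu, cov, size=1000)

# Soft conditional sampling: reweight samples by how well they satisfy a
# constraint (here "end below 22"), then resample proportionally.
w = 1 / (1 + np.exp((surrogate[:, -1] - 22.0) / 0.5))
idx = rng.choice(len(surrogate), size=200, p=w / w.sum())
conditioned = surrogate[idx]
```

The soft weighting mirrors the idea of enforcing logical constraints on generated samples without hard rejection; in the diffusion-based approach the constraint instead guides the sampling process itself.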
Reinforcement Learning for Combinatorial Optimization
Combinatorial Optimization (CO) problems arise in many real-world applications. While general-purpose optimizers exist, solving relevant CO problems often requires a significant research effort in designing performant solvers. In recent years, the flexibility of Reinforcement Learning (RL) has been leveraged to learn effective heuristics for CO problems and even surpass established methods. Our research focuses on applying RL to different CO problems and on improving the search capabilities of learned solvers, so that they generalize beyond the problem instances observed at training time. A further topic of interest is the integration of data-driven RL techniques with exact solvers to boost their performance.
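To make the RL-for-CO idea concrete, here is a minimal sketch on a hypothetical toy instance: a 0/1 knapsack solved with tabular Q-learning, where the state is (next item index, remaining capacity) and the actions are skip/take. The instance, hyperparameters, and tabular representation are all illustrative; learned CO solvers replace the table with a neural policy over instance features, but the learning principle is the same.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 0/1 knapsack instance (illustrative only).
weights = [3, 4, 2, 5, 3]
values = [4, 5, 3, 8, 4]
capacity = 9
n = len(weights)

# Q-table over (item index, remaining capacity, action); action 0 = skip,
# action 1 = take.
Q = np.zeros((n + 1, capacity + 1, 2))

def step(i, cap, action):
    # Environment: taking an item that fits yields its value as reward.
    if action == 1 and weights[i] <= cap:
        return i + 1, cap - weights[i], values[i]
    return i + 1, cap, 0

alpha, eps = 0.5, 0.3
for _ in range(8000):
    i, cap = 0, capacity
    while i < n:
        # Epsilon-greedy exploration over the two actions.
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[i, cap]))
        j, new_cap, r = step(i, cap, a)
        target = r + (Q[j, new_cap].max() if j < n else 0.0)
        Q[i, cap, a] += alpha * (target - Q[i, cap, a])
        i, cap = j, new_cap

# Greedy rollout of the learned policy on the same instance.
i, cap, total = 0, capacity, 0
while i < n:
    a = int(np.argmax(Q[i, cap]))
    i, cap, r = step(i, cap, a)
    total += r
```

On an instance this small the learned policy typically recovers a high-quality packing; the research questions start where the table no longer fits, i.e. generalizing a learned policy across unseen instances and combining it with exact solvers.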