PrimaVera Project
No more train delays, power outages, or failures of production machines? The PrimaVera project, funded by the Dutch National Research Agenda (NWA), represents a major step towards this goal. With predictive, or just-in-time, maintenance (maintenance performed just before a system breaks down), the reliability of infrastructure and production resources can be increased and maintenance costs can be reduced.
Existing predictive maintenance techniques only work for small-scale systems and are difficult to scale up. Moreover, choices made at one point in the chain strongly influence other processes in the chain: the choice of a particular type of sensor and measurement determines which predictions can be made, and therefore also the quality of those predictions. That is why cross-level optimization methods are being developed within PrimaVera.
2025
Zhao Kang: Robust Spare Parts Inventory Management. PhD Thesis, 2025, ISBN: 978-94-6510-653-3.
Luc Stefan Keizers: Hybrid Prognostics for Predictive Maintenance: Combining Physics-Based and Data-Driven Methods to Overcome Prognostic Challenges. PhD Thesis, 2025, ISBN: 978-90-365-6574-5.

Abstract: Predictive maintenance is a growing research field that aims to perform maintenance only when it is required. Prognostic algorithms are essential to achieving this, as predictions of upcoming failures help to increase system availability, utilize the full lifetime of equipment, and improve maintenance logistics. Since sensors have become cheaper and data storage and processing more efficient during the fourth industrial revolution, there is great interest in data-driven prognostic algorithms. However, high data requirements limit their applicability in many practical settings. Physics-based prognostic models yield quantitative relations between system usage and degradation independent of historical data, but the development of such physics-of-failure models is complex and expensive. Because data-driven and physics-based models each have their own advantages and limitations, combinations of the two have the potential to overcome those limitations while retaining the benefits. When both the loads and the condition of a component can be monitored, physics-of-failure models can be updated in real time using Bayesian filtering algorithms. This results in updated quantitative relations between loads and degradation, calibrated for a specific component. The first part of this thesis describes how such methods can be applied to components in variable operating conditions and implements them in a generic prognostic framework. These Bayesian filters preferably receive a direct measure of degradation, such as crack length or the amount of removed material. In many practical applications, however, it is only possible to measure indirect consequences of degradation, such as increased vibration levels, elevated temperatures, or acoustic emissions. Therefore, the second part of this thesis focuses on improving quantitative diagnostics to act as input for prognostic algorithms. Because of their modularity and applicability across multiple physical domains, bond graphs are proposed to simulate faults and thereby enhance quantitative diagnostics. A combined diagnostic and prognostic framework is developed that is suitable for prognostics under varying operating conditions when only limited historical run-to-failure data are available. One of the biggest remaining challenges is validation of the methods on a real-world case study, as the lack of real-world data is one of the challenges from which this research partly originated.
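The core idea of the first part of the thesis, updating a physics-of-failure model in real time with a Bayesian filter, can be sketched as a minimal particle filter. The simplified exponential crack-growth law, noise levels, and parameter values below are hypothetical illustrations chosen for the sketch, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crack-growth law: a_{n+1} = a_n + C * a_n
# (a stand-in for a physics-of-failure model; C is the uncertain parameter)
def step(a, C):
    return a + C * a

# Simulate a "true" component and noisy crack-length measurements
true_C, sigma = 0.02, 0.05
a_true, measurements = 1.0, []
for _ in range(50):
    a_true = step(a_true, true_C)
    measurements.append(a_true + rng.normal(0, sigma))

# Particle filter jointly tracking the state a and the parameter C
n = 1000
C_p = rng.uniform(0.0, 0.1, n)  # prior over the growth-rate parameter
a_p = np.full(n, 1.0)

for z in measurements:
    a_p = step(a_p, C_p)                         # predict
    w = np.exp(-0.5 * ((z - a_p) / sigma) ** 2)  # Gaussian likelihood
    w += 1e-300                                  # guard against all-zero weights
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)             # resample
    a_p, C_p = a_p[idx], C_p[idx]
    C_p += rng.normal(0, 1e-4, n)                # jitter keeps parameter diversity

print(f"estimated C = {C_p.mean():.4f} (true C = {true_C})")
```

After 50 measurements the posterior over C concentrates near the true value, giving the kind of component-specific, calibrated load-degradation relation the abstract describes.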
Bas van Oudenhoven: Human-Centric Predictive Maintenance. PhD Thesis, 2025, ISBN: 978-90-386-6315-9.
Roel Bouman: Rethinking Anomaly Detection: From Theory to Practice. PhD Thesis, 2025, ISBN: 9789465150765.

Abstract: The field of anomaly detection is rapidly evolving, with numerous algorithms and applications. However, the fundamental principles behind anomaly detection are less frequently studied. This thesis aims to evaluate algorithm performance, assess the reliability of autoencoders, and introduce a novel application in power grid load estimation. A comprehensive comparative study on real-world data shows that only a few algorithms are necessary for effective anomaly detection: in particular, k-nearest neighbors and the extended isolation forest perform well in detecting common anomalies. Autoencoders are increasingly used for anomaly detection, especially in computer vision, due to their feature extraction capabilities. However, we analyze the underlying assumptions of autoencoder-based anomaly detection and conclude that these assumptions do not hold in practice, making autoencoders unreliable for this purpose. Applying anomaly detection to the energy sector, we use statistical process control and binary segmentation to identify measurement errors and power rerouting in load estimation. By filtering these anomalies, we improve the reliability of power grid load estimates in the Netherlands. Enhancing load estimation through interpretable methods helps reduce unused grid capacity, providing necessary flexibility for the ongoing energy transition.
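The k-nearest-neighbors detector that the abstract above highlights as a strong baseline can be sketched in a few lines: score each point by its distance to its k-th nearest neighbor and flag the highest scores. The 2-D data below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensor" data: a normal cluster plus five injected anomalies
normal = rng.normal(0.0, 1.0, size=(200, 2))
anomalies = rng.normal(6.0, 0.5, size=(5, 2))
X = np.vstack([normal, anomalies])

def knn_scores(X, k=5):
    """Anomaly score = distance to the k-th nearest neighbor."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)
    return d_sorted[:, k]  # column 0 is the zero distance to the point itself

scores = knn_scores(X)
flagged = np.argsort(scores)[-5:]  # five highest-scoring points
print(sorted(flagged.tolist()))    # the injected anomalies: indices 200..204
```

The pairwise-distance matrix makes this O(n²) in memory; for larger datasets a tree- or index-based neighbor search would be used instead, but the scoring rule is the same.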
Thom Badings: Robust Verification of Stochastic Systems. PhD Thesis, 2025, ISSN: 2950-2772.

Abstract: Verifying that systems are safe and reliable is crucial in today’s world. For example, we want to prove that an autonomous drone will safely reach its target, or that a manufacturing system will not break down. Classical algorithms for verifying such properties often rely on a precise mathematical model of the system, for example, in the form of a Markov chain or a Markov decision process (MDP). Such Markov models are probabilistic transition systems and are ubiquitous in many areas, including control theory, artificial intelligence (AI), formal methods, and operations research. However, as systems become increasingly complex with more cyber-physical and AI components, uncertainty about the system’s behavior is inevitable. As a result, transition probabilities in Markov models are subject to uncertainty, rendering many existing analysis algorithms inapplicable. In this thesis, we fill this gap by developing novel verification methods for Markov models in the presence of uncertainty. To capture this uncertainty, we use parametric Markov models, where probabilities are described as functions over parameters. We study two perspectives on rendering verification methods robust against uncertainty: (1) set-bounded uncertainty, where only the set of possible parameter values is known and one can be robust in a worst-case sense, and (2) stochastic uncertainty, where the parameter values are described by a probability distribution and one can be robust in a probabilistic sense. By combining techniques from formal methods, AI, and control theory, our contributions span the following general problem settings:

1. We develop robust abstraction techniques for solving control problems for Markov models with continuous state and action spaces and with set-bounded uncertain parameters. Based on the notion of probabilistic simulation relations, we show that such continuous control problems can be solved by analyzing only a finite-state abstraction, formalized as an MDP with sets of transition probabilities.
2. We present novel and scalable verification techniques for parametric Markov models with a prior distribution over the parameters. Our approaches are sampling-based, require no assumptions on the parameter distribution, and provide probably approximately correct (PAC) guarantees on the verification results.
3. We show that parametric models can be used to improve the sample efficiency of data-driven learning methods. We leverage tools from convex optimization to perform a sensitivity analysis, measuring sensitivity in terms of partial derivatives of the polynomial function that describes a measure of interest.
4. We study continuous-time Markov chains where the initial state must be inferred from a set of (possibly uncertain) state observations, a setting particularly relevant in runtime monitoring. We compute upper and lower bounds on reachability probabilities conditioned on these observations. Our approach is based on a robust abstraction into an MDP with intervals of transition probabilities.

In conclusion, we develop verification methods that reason over uncertainty without sacrificing guarantees. While dealing with uncertainty can be computationally expensive, providing such guarantees is crucial for designing safe and reliable systems with cyber-physical and AI components. The aim of this thesis is to contribute to a better understanding of dealing with uncertainty in stochastic verification problems.
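The recurring idea of robust abstraction, analyzing a Markov model whose transition probabilities are only known up to intervals, can be illustrated with a minimal worst-case value iteration on a three-state example. The model and numbers below are hypothetical; the inner minimization uses the standard approach of ordering successor states by value and assigning the free probability mass to the lowest-valued ones first:

```python
# Minimal interval value iteration on a three-state example:
# state 0 is the start, state 1 the goal, state 2 a failure state.
# Transition probabilities out of state 0 are only known up to intervals.
lo = {1: 0.6, 2: 0.2, 0: 0.0}   # lower bounds on P(0 -> s)
hi = {1: 0.8, 2: 0.4, 0: 0.2}   # upper bounds on P(0 -> s)

def worst_case_reach(lo, hi, iters=100):
    v = {0: 0.0, 1: 1.0, 2: 0.0}  # value = prob. of eventually reaching the goal
    for _ in range(iters):
        # Adversary minimizes: order successors by value (ascending) and pour
        # the remaining probability mass into the lowest-valued ones first.
        succ = sorted(lo, key=lambda s: v[s])
        p = dict(lo)
        budget = 1.0 - sum(p.values())
        for s in succ:
            extra = min(hi[s] - lo[s], budget)
            p[s] += extra
            budget -= extra
        v[0] = sum(p[s] * v[s] for s in lo)
    return v[0]

print(worst_case_reach(lo, hi))  # 0.6: the adversary pushes mass toward failure
```

Here the worst case within the intervals assigns the failure state its maximum probability, so the guaranteed (lower-bound) reachability of the goal is 0.6 rather than the nominal value; the same ordering trick scales to large interval MDPs because each inner minimization is only a sort.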
