We organize a regular (virtual) colloquium with talks on predictive maintenance. You can join the colloquium via the Microsoft Teams links below. If you have any questions, please contact Nils Jansen or Anna Hermelink.
Title: The effect of multi-sensor data on condition-based maintenance policies
Date: 3rd November 2021
Time: 16:00 CET
Industry 4.0 promises reductions in maintenance costs through access to digital technologies such as the Internet of Things, cloud computing, and data analytics. Many of the promised benefits to maintenance, however, depend on the quality of the data obtained through sensors and related technologies. In this work, we consider the effect on condition-based maintenance policies of access to different levels of deterioration data quality, obtained by means of sensors and resulting in partial information about the underlying state of the monitored system. The sensors may be either internal company sensors, or more informative external sensors to which access is obtained at a cost. We analyze the structure of the optimal policy, where the actions are to perform maintenance, to pay for external sensor information, or to continue system operation with internal sensor information only. We show that the optimal policy consists of at most four regions based on the believed deterioration state of the system. The analysis allows us to numerically investigate the decision maker’s willingness to pay for more informative external sensor information with respect to the level of external sensor informativeness, compared to that of the internal sensor, and the cost thereof.
Title: An introduction to behavioral operations management
Time: 16:00 CET
Human beings are critical to the functioning and performance of the majority of operating systems. However, human behavior has traditionally been ignored in the field of operations management (OM). More specifically, most models in OM assume that agents who participate in operating processes are either fully rational or can be induced to behave rationally. That is, these models assume that people have stable preferences, are not affected by cognitive biases or emotions, and have the ability to disregard irrelevant information by responding only to relevant information when making decisions. The field of Behavioral Operations Management (BOM) departs from these idealized assumptions by acknowledging that human decision-makers are guided by emotions, cognitive biases, or irrelevant situational cues when making decisions. The goal of this talk is to introduce this field by discussing the results of a research project that we recently initiated in the field of sales forecasting.
Applying data-driven techniques to analyse industrial use cases — in theory it works in practice
Speaker: dr. Jeroen Linssen
At the research group Ambient Intelligence, we perform applied research: based on real-life use cases, we collaborate with external partners to work towards a solution and gain insights from both a technical and a domain perspective. Many people would agree that such projects rarely follow a paved road. In this talk, I give a few examples of projects we carried out to analyse data with various partners in my research line ‘applied data science’. Besides highlighting some of the techniques we used and adopted, I also emphasize the methodological lessons we have learned thus far, as well as the insights we hope to gain in our current and future projects.
Applying Causal Analytics for ASML Diagnostics: Results and Challenges
Speaker: Errol Zalmijn
Semiconductor lithography system issues are challenging to diagnose from predefined models and historic data alone, because such systems are characterized by high dimensionality, non-stationarity, and nonlinear behavior across multiple temporal and spatial scales. One line of research at ASML is to investigate model-independent approaches, which can help to find previously unknown causal relations in system diagnostic data. To this end, we combine information-theoretic transfer entropy with eigencentrality in time series analysis. However, this approach must be computationally efficient for ASML system diagnostics (and prognostics) in real time. Moreover, it must be able to locate unobserved variables and capture unique as well as joint causal influences.
This presentation will discuss the merits and challenges of causal inference using information-theoretical concepts within ASML context and demonstrate a number of cases.
Smart use of sensor data leads to modern maintenance support in future ships
Speaker: Bart Pollmann & Dr. Wieger Tiddens
The Royal Netherlands Navy is planning to introduce several new ship classes within the next decade. The new ships will be technologically far more advanced than the ships currently in service. At the same time, operational availability needs to increase and crew sizes need to decrease. This combination makes it necessary not only to automate many functions on board, but also to increase the level of support to the maintenance organization. Sensor data can be used to better predict failures of machinery, so that the maintenance organization can perform timely repair actions and prevent catastrophic failures. The developments include the reuse of data already collected for Monitoring & Control purposes, the introduction of extra sensor technology to detect failure modes that cannot be detected with current systems, and the introduction of AI and machine learning. The Royal Netherlands Navy is cooperating with industry to ensure the timely availability of these new methods and technologies.
Probabilistic analyses and its application to life predictions of aircraft structures
Speaker: PDEng Frank Grooteman (12.05.2021)
Many aircraft are designed according to the deterministic damage tolerance philosophy to predict the crack growth life of the structure. Alternatively, a probabilistic damage tolerance analysis, called a structural risk analysis (SRA), can be performed. Instead of using scatter and safety factors, an SRA takes into account all important scatter sources, such as the initial flaw size, the inspection quality, the inspection scheme, and the variability in loads and crack growth material properties. For new military aircraft, SRA is mandatory. For current military aircraft, it has already become a valuable tool for fleet management, since it provides the development of risk (probability of failure) over time, which cannot be obtained from the traditional deterministic damage tolerance analysis. It thus gives fleet management a better signal of when to take corrective (maintenance) actions to prevent (critical) failures of the aircraft.
This presentation will address the general concept of probabilistic analyses and its more specific implementation in the field of probabilistic fracture mechanics (SRA) using available fleet data. The approach will be supported by a number of examples.
Next Generation Prediction Methodologies and Tools for Engineering Risk Assessment
Speaker: Prof. John Andrews (07.04.2021)
Risk assessments performed on systems across many industrial sectors employ techniques such as fault tree analysis and event tree analysis, whose foundations date back to the 1960s and 1970s. Since that time, technology has advanced, and system designs, operating practices, and maintenance strategies are now significantly different from those of the 1970s. Restrictive assumptions, such as constant failure and repair rates for components, independence of component failures, and the limited account of maintenance and renewal options in the component failure models employed, reduce the effectiveness of these methodologies in representing modern-day systems.
In addition, research into risk prediction techniques has made considerable advances since the 1970s, but these advances have tended to address each deficiency in isolation. Examples of significant advances are the Binary Decision Diagram (BDD) method of solving fault tree structures, improving both accuracy and efficiency, and the Petri net method, which has proven to be an effective means of predicting system performance when complex maintenance and renewal strategies are employed.
This presentation will describe the motivation, progress, and methodologies of a project, funded by Lloyd’s Register Foundation, to develop the next generation of risk assessment methods. The project aims to update current risk assessment capabilities using a hybrid approach that includes BDDs and Petri nets. This research addresses the deficiencies in current approaches and extends their capabilities to better represent the systems employed across industries. The approaches developed will use the familiar causality structures of fault tree and event tree analysis, removing their traditional assumptions by changing the analysis methodologies employed.
Industrial partners from the nuclear, aerospace and railway industries are collaborating on the project to ensure it meets their requirements.
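To make the classical starting point concrete, a small fault tree can be evaluated under the traditional assumption that basic events are independent with fixed failure probabilities, exactly the kind of assumption the project aims to relax. The gate structure, event names, and probabilities below are hypothetical, purely for illustration:

```python
# Evaluate a tiny fault tree under the classical independence assumption
# (hypothetical basic-event probabilities, for illustration only).

def p_and(*ps):
    """AND gate: the output fails only if all inputs fail."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """OR gate: the output fails unless all inputs survive."""
    surviving = 1.0
    for p in ps:
        surviving *= (1.0 - p)
    return 1.0 - surviving

# Top event = pump fails OR (main valve AND backup valve both fail).
p_top = p_or(0.01, p_and(0.05, 0.05))  # = 0.012475
```

BDDs compute the same top-event probability exactly but scale to far larger trees; Petri nets go further by modelling the maintenance and renewal dynamics that such a static calculation cannot capture.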
The “RUL loss rate”, or time derivative of the RUL (remaining useful life), measures the speed at which an asset’s condition degrades and the asset therefore gets closer to failure in the absence of any preventive maintenance action.
The average RUL loss rate is (minus) the derivative of the “mean residual life” (MRL); understanding the latter’s properties is therefore of potential interest for maintenance policy optimization.
First we study a special class of time-to-failure distributions: those for which the MRL is a linear function of time, i.e., for which the average RUL loss rate is constant. This class contains well-known special cases: the exponential distribution, for which the MRL is constant (equal to the MTTF) and the RUL loss rate is therefore equal to 0; and, at the other extreme, what we call the Dirac distribution, for which the asset’s lifetime is deterministic and the loss rate is equal to 1: for every hour that passes, the remaining useful life decreases by one hour.
In general, for that family of distributions, which we characterize explicitly, the average RUL loss rate takes a value between 0 and 1. For instance, the uniform distribution is characterized by an average RUL loss rate of one half.
It is shown that, for this special family, the average RUL loss rate can be obtained explicitly as a (decreasing) function of the coefficient of variation of the time to failure. A closed-form expression for the confidence interval for the RUL is also obtained.
Then those results are generalised in two directions:
◼ 1) by introducing a nonlinear time transformation that allows the results obtained in the special case to be transposed to other classes of time-to-failure distributions (such as Weibull or gamma);
◼ 2) by considering concurrent degradation modes, for instance the combination of a mode with a constant failure rate (exponential distribution) and a mode with a deterministic lifetime (Dirac distribution).
Implications for maintenance policy are discussed.
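To make these statements concrete, the quantities involved can be written out for the three named distributions (standard definitions; the notation is ours, not necessarily the speaker's):

```latex
% Mean residual life of lifetime T, and the average RUL loss rate
\mathrm{MRL}(t) = \mathbb{E}\left[\, T - t \mid T > t \,\right],
\qquad
\text{average RUL loss rate} = -\frac{d}{dt}\,\mathrm{MRL}(t).
% Exponential(\lambda):  \mathrm{MRL}(t) = 1/\lambda = \mathrm{MTTF}, so the loss rate is 0
% Dirac at T_0:          \mathrm{MRL}(t) = T_0 - t,                  so the loss rate is 1
% Uniform on [0,b]:      \mathrm{MRL}(t) = (b - t)/2 for t < b,      so the loss rate is 1/2
```

Each case is a linear MRL, consistent with the constant loss rates of 0, 1, and one half quoted above.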
Domain adaptation and hybrid algorithms for intelligent maintenance systems
Speaker: Prof. dr. Olga Fink (03.02.2021)
The amount of condition monitoring data measured and collected for complex industrial assets has recently increased significantly due to falling costs, improved technology, and the increased reliability of sensors and data transmission. However, faults in safety-critical systems are rare. The diversity of fault types and operating conditions often makes it impossible to extract and learn the fault patterns of all the possible fault types affecting a system. Consequently, faulty conditions cannot be used to learn patterns from. Even collecting a representative dataset covering all possible operating conditions can be challenging, since the systems experience a high variability of operating conditions. Therefore, training samples captured over limited time periods may not be representative of the entire operating profile. The collection of a representative dataset may delay the implementation of data-driven fault detection and isolation systems. Furthermore, domain experts require interpretability of the obtained results.
The talk will give some insights into potential solutions that make it possible to 1) transfer models and operational experience between different units of a fleet and between different operating conditions, also in unsupervised setups where data on faulty conditions is not available; and 2) fuse physical performance models and deep neural networks, thereby improving not only the performance but also the interpretability of the developed models.
SPC (Statistical Process Control) is the part of industrial statistics that deals with monitoring data streams in order to detect changes in a timely manner (the word “control” in the name is a historical misnomer, because it leads to confusion with feedback control; a better name would have been Statistical Process Monitoring). The field originated in the manufacturing industry.
In this talk, we give a brief overview of the standard procedures in SPC and its application fields, and position SPC in the wider data science context. We will discuss an industrial case study with wind turbines that illustrates the methodological challenges in the field.
The wind turbine case study arose in a maintenance context. We will discuss the benefits of including SPC in predictive maintenance strategies, highlight some methodological challenges, and show how these could fit within the PrimaVera context.
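For readers new to SPC, the core monitoring idea can be sketched in a few lines. The data, the 3-sigma limits, and the function names below are illustrative only; the wind turbine case uses more advanced monitoring schemes:

```python
# Minimal sketch of a Shewhart individuals control chart.
import statistics

def control_limits(baseline, k=3.0):
    """Estimate the center line and k-sigma limits from in-control baseline data."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - k * sigma, mu, mu + k * sigma

def out_of_control(stream, lcl, ucl):
    """Return indices of observations falling outside the control limits."""
    return [i for i, x in enumerate(stream) if x < lcl or x > ucl]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, center, ucl = control_limits(baseline)
alarms = out_of_control([10.0, 10.1, 13.5, 9.9], lcl, ucl)  # flags index 2
```

Observation 13.5 falls outside the 3-sigma band estimated from the baseline, so it is flagged; practical charts typically add run rules and handle drifting baselines, which is where the methodological challenges mentioned above arise.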
Maintenance Optimization for Multi-Component Systems with a Single Sensor
Speaker: Dr. Ayse Sena Eruguz (09.12.2020)
We consider a multi-component system in which a condition parameter is monitored by a single sensor. Monitoring gives the decision maker some information about the system state, but it does not reveal the exact state of the components. The decision maker infers a belief about the exact state from the current condition signal and the past data, in order to decide when to intervene for maintenance. A maintenance intervention consists of a complete and perfect inspection followed by component replacement decisions. We model this problem as a partially observable Markov decision process. We consider a deterioration process that suitably reflects the deterioration characteristics of a multi-component system and a probabilistic relation between system states and condition signals. Under reasonable conditions, we investigate the structure of the optimal maintenance intervention policy.
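The belief inference described above can be illustrated with a generic discrete filtering step; the state space, matrices, and numbers below are hypothetical, not the model from the talk:

```python
# Generic Bayesian belief update for a hidden deterioration state observed
# through a noisy condition signal (illustrative numbers only).

def belief_update(belief, transition, emission, signal):
    """One POMDP filtering step: predict with the deterioration transition
    matrix, then correct with the observation likelihoods (Bayes' rule)."""
    n = len(belief)
    predicted = [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]
    unnorm = [predicted[j] * emission[j][signal] for j in range(n)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# States: 0 = good, 1 = degraded, 2 = failed (non-decreasing deterioration).
transition = [[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]]
# emission[state][signal]: probability of a "low" (0) or "high" (1) signal.
emission = [[0.9, 0.1],
            [0.3, 0.7],
            [0.1, 0.9]]

belief = [1.0, 0.0, 0.0]
belief = belief_update(belief, transition, emission, 1)  # observe a "high" signal
```

After one “high” signal, the belief mass shifts from the good state toward the degraded states; a maintenance policy would compare such beliefs against intervention thresholds.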
Think about all the times you have had to endure a train delay because of an unexpected switch failure, a flight cancellation due to a malfunction in air traffic control, or blocked roads or reduced speed limits due to unexpected, unannounced repairs of roads and bridges. As users, we expect the transport network to be efficient, reliable, and available. At the same time, transportation agencies face competing demands to optimally spend limited budgets and manage aging infrastructure. In this talk, I will highlight research efforts that address the challenges of infrastructure asset management by developing, implementing, and validating data-driven decision support methods. Specifically, I will present the case of automated damage detection for bridges using visual data, and of predicting the maintenance needs of railway switches using data from in-use business processes.
A fundamental problem for the further adoption of ’smart’ maintenance in service triads is that costs and benefits are dislocated in time and place. When maintenance is done with IoT-enabled condition-based maintenance, in so-called CBM-driven smart services, uptimes and revenues increase for the asset owner. However, the revenues of the other stakeholders may decrease: OEMs may provide fewer spare parts, and service providers may sell fewer direct maintenance hours. As a result, what is obviously beneficial for the service triad as a whole does not happen, because of this misalignment. Contractual arrangements are needed to overcome these misalignments: so-called performance-based contracts. This calls for an integrated perspective on the costs and benefits of smart maintenance over time. Constructing formal business models of these dynamics with all the relevant stakeholders is a proven method to develop such an integrated perspective and the associated performance-based contracts. Moreover, these dynamics change over time: the challenges and opportunities in the early stages of service growth differ from those later on. In the talk, all these business model dynamics will be discussed, based on recent work with an OEM eager to develop CBM-driven smart services.
Active automata learning is emerging as an effective technique for obtaining state machine models of software and hardware systems. In this talk, I will present an overview of work in my group in which we used automata learning to find standard violations and security vulnerabilities in implementations of network protocols such as TCP, TLS, and SSH. Also, I will discuss the application of automata learning to support refactoring of legacy embedded control software, and the theoretical challenges that we face to further scale the application of automata learning techniques.
Bram de Jonge is assistant professor within the department of Operations of the University of Groningen. He holds an MSc degree in Econometrics, Operations Research & Actuarial Studies (cum laude) and a PhD degree, both from the University of Groningen. His main research area is maintenance planning and optimization.
Industrial systems are in general subject to deterioration, ultimately leading to failure, and therefore require maintenance. Due to increasing possibilities to monitor, store, and analyze conditions, condition-based maintenance policies are gaining popularity. The most detailed approach for modeling condition parameters is by using continuous-time continuous-state stochastic processes. However, the resulting analysis can be quite difficult, and therefore simulation is often used. We describe an approach for discretizing continuous-time continuous-state non-decreasing deterioration processes, resulting in discrete-time Markov chains. Furthermore, we show how standard matrix algebra can be used to optimize condition-based maintenance policies, taking into account a required planning time for carrying out maintenance.
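A minimal sketch of this discretization idea follows, with a hypothetical per-period increment distribution (the talk derives the transition probabilities from the underlying continuous-time continuous-state process):

```python
# Sketch: a non-decreasing deterioration process discretized into a
# discrete-time Markov chain, with the mean time to failure computed by
# standard matrix algebra (illustrative increment distribution only).

def mean_time_to_failure(P):
    """Expected number of steps to reach the last (failed) state from each
    state. Because deterioration never decreases, P is upper triangular and
    the hitting-time equations solve by back-substitution."""
    n = len(P)
    h = [0.0] * n  # h[n-1] = 0: the failed state is absorbing
    for i in range(n - 2, -1, -1):
        h[i] = (1.0 + sum(P[i][j] * h[j] for j in range(i + 1, n))) / (1.0 - P[i][i])
    return h

# Per period: stay put (0.5), deteriorate one state (0.3), or two states (0.2);
# state 3 is the failure level, and overshoot is lumped into it.
P = [[0.5, 0.3, 0.2, 0.0],
     [0.0, 0.5, 0.3, 0.2],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.0, 1.0]]

h = mean_time_to_failure(P)  # h[0] is the mean lifetime from the as-new state
```

With the transition matrix in place, quantities such as the mean time to failure, and by extension the evaluation of condition-based maintenance thresholds with a planning lead time, follow from standard Markov chain algebra.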
Part I: Asset Management & PdM, part II: Vibration analysis, part III: Structure of the deliverable
Speaker: Dr. Alieh Alipour (02.09.2020)
Alieh Alipour is the lecturer of Asset Management at the Department of Mechanical Engineering and project leader of PrimaVera at the Smart Sensor Systems Lectoraat of The Hague University of Applied Sciences. Prior to that, she was an advisor in the Asset Management group at Arcadis and an innovator in the Structural Reliability department at TNO. Alieh obtained her PhD degree in Civil Engineering from TU Delft (2011-2017).
In this presentation, I will first present the important aspects of Asset Management and how predictive maintenance strategies influence these aspects. Then I will present the steps required for vibration analysis of the bearings in rotary machines (based on the Mobius Institute books). Finally, I will briefly explain our structure for the deliverable document for PrimaVera.
Stella Kapodistria is assistant professor in the Department of Mathematics and Computer Science, where she is part of the Stochastics section. She participates in the Networks Gravitation (NWO-Zwaartekracht) project, an NWO-funded initiative to build self-organizing and intelligent networks using algorithms and stochastics. She is also part of the DeSIRE research program on resilience engineering, funded by the 4TU call “High Tech for a Sustainable Future”, and she is in the think-tank of the newly established 4TU centre on Resilience Engineering. She is a co-applicant in the PrimaVera (NWA-ORC) project and the Real-time data-driven maintenance logistics (NWO “Big Data: real time ICT for logistics”) project. Stella states that the main aim of her scientific work is “revolutionizing our thinking and methods to solve today’s problems”. You can check out Stella’s scientific work here <https://www.tue.nl/en/news/features/making-complex-decisions-with-the-help-of-ai/> and here <https://research.tue.nl/en/persons/stella-kapodistria>.
With the growth of ICT and the real-time flow of data, it becomes increasingly important to be able to measure complex systems, quantify their level of health and resilience, and be able to detect changes with great speed and precision. This would inherently facilitate the transition from curative, reactive actions to preventive, prescriptive actions. In this talk, I will demonstrate through several examples how data analytic techniques can be used to transform data into knowledge and actions.
Tom Heskes, Professor of Artificial Intelligence, Institute for Computing and Information Sciences (iCIS), Radboud University Nijmegen
Discovering causal relations from data lies at the heart of most scientific research today. In apparent contradiction with the adage “correlation does not imply causation”, recent theoretical insights indicate that such causal knowledge can also be derived from purely observational data, instead of only from controlled experimentation. In the “big data” era, such observational data is abundant, and being able to actually derive causal relationships would open up a wealth of opportunities for improving science, healthcare, and possibly predictive maintenance. In this talk, I will sketch how insights from statistics and machine learning may lead to novel approaches for the robust discovery of relevant causal relationships.
Dr. Jan Braaksma (Director WCM Summer School/University of Twente)
Jan Braaksma is an associate professor in the chair of Maintenance Engineering at the University of Twente and the director of the WCM Summer School part of World Class Maintenance. He has worked for the University of Groningen (RuG) and the Dutch Defense Academy (NLDA). He holds a Master’s degree in Business and ICT and a PhD degree in Economics and Business. Jan’s research focuses on Asset Management with a special attention for Asset Life Cycle Planning, Maintenance Engineering and Design for Maintenance.
A significant part of his research is in cooperation with companies and organisations such as Liander, Strukton Rail, Netherlands Railways (NS), Prorail, Heineken, AkzoNobel, Sitech, Huntsman, Heijmans, Sabic, TataSteel and the Ministry of Defense. Jan is responsible for the Master Class on Maintenance Engineering & Management provided by the University of Twente.
Predictive maintenance can be seen as the most rewarding maintenance strategy. Consequently, much of the current focus is on the use of this strategy; however, it can be debated whether predictive maintenance is always the most feasible strategy.
FMEA/RCM is a structured method that can be used to determine a maintenance concept based on the ways in which an asset can fail and the impacts these so-called “failure modes” can have. I will discuss the development and improvement of maintenance concepts over time and the need for the right asset information, with special attention to the use of FMEA/RCM for identifying possible predictive maintenance candidates and for criticality-driven asset information management. The organization and management of (future) data collection is advocated as a crucial element for making better maintenance decisions.
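As a minimal numerical illustration of the prioritization step in an FMEA, the classical risk priority number (RPN = severity x occurrence x detection, each scored 1-10) can be used to shortlist candidates; the failure modes and scores below are purely hypothetical:

```python
# Classical FMEA risk priority number (hypothetical failure modes and scores).

def rpn(severity, occurrence, detection):
    """RPN = severity * occurrence * detection, each scored on a 1-10 scale."""
    return severity * occurrence * detection

failure_modes = {
    "bearing wear":       rpn(7, 5, 4),
    "seal leakage":       rpn(4, 6, 2),
    "controller lock-up": rpn(9, 2, 8),
}

# Rank failure modes to shortlist predictive maintenance candidates.
ranked = sorted(failure_modes, key=failure_modes.get, reverse=True)
```

High-RPN, hard-to-detect failure modes such as the hypothetical controller lock-up are natural candidates for condition monitoring, which connects the FMEA to the identification of predictive maintenance opportunities discussed above.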
During the colloquium, I will specifically discuss insights gained from a multiple case study on the use of FMEA in industry, as well as insights I gained from working together with industry.
Prof. dr. Mariëlle Stoelinga is a professor of risk management at both Radboud University Nijmegen and the University of Twente, in the Netherlands. Stoelinga is the project coordinator of PrimaVera, a large collaborative project on predictive maintenance in the Dutch National Science Agenda (NWA). She has also received a prestigious ERC Consolidator Grant. Stoelinga is the scientific program leader of the Risk Management Master, a part-time MSc program for professionals. She holds an MSc and a PhD degree from Radboud University Nijmegen, and has spent several years as a post-doc at the University of California at Santa Cruz, USA.
Predictive maintenance is a promising technique that aims at predicting failures more accurately, so that just-in-time maintenance can be performed: doing maintenance exactly when and where needed. Thus, predictive maintenance promises higher availability and fewer failures at lower costs. In this talk, I will advocate a combination of model-driven (especially fault trees) and data-analytical techniques to gain more insight into the costs versus the system performance (in terms of availability, reliability, and remaining useful lifetime) of maintenance strategies. I will show results of case studies from railway engineering, namely on rail track (with Arcadis) and on HVAC (heating, ventilation, air conditioning) systems (with NS).
I will also go into recent developments on learning fault trees and rare event simulation.
Tiedo Tinga is a professor in Life Cycle Management at the Netherlands Defence Academy (NLDA) and in Dynamics Based Maintenance at the Faculty of Engineering Technology of the University of Twente. He has a background in materials science and mechanical engineering. Before joining academia, he worked as a scientist at the National Aerospace Laboratory NLR for almost 10 years. His research focuses on the detection and prediction of failures in systems, using combinations of the physics of failure, thorough understanding of the (dynamic) system behavior, advanced monitoring techniques, and data analysis. All his research projects are in close collaboration with industrial partners.
In this webinar, a general introduction to (predictive) maintenance will be given. First, the basic motivation for maintenance will be presented and an overview of various maintenance policies will be given. Then the more advanced policies, based on condition monitoring and prognostics, will be discussed in more detail. The various options (model-based, data-driven) will be shown, and the current status and challenges will be sketched. Finally, some case studies from our research projects will be used to demonstrate the potential and limitations.
Geert-Jan van Houtum has been Professor of Maintenance, Reliability, and Quality at the Department of Industrial Engineering and Innovation Sciences (IE&IS) of Eindhoven University of Technology since 2008. Prior to that, he held positions as assistant and associate professor at the same department (1999-2007) and at the University of Twente (1994-1998), and as visiting professor at Carnegie Mellon University (2001). He obtained his M.Sc. and Ph.D. degrees in Applied Mathematics from Eindhoven University of Technology in 1990 and 1995, respectively.
His research is focused on the maintenance and reliability of capital goods, in particular on: (i) design and control of service supply chains; (ii) maintenance concepts, in particular predictive maintenance; and (iii) design for availability. He has over 80 publications in international refereed journals such as Operations Research, Manufacturing and Service Operations Management, IIE Transactions, and the European Journal of Operational Research. He is area editor at Service Science and associate editor at Manufacturing and Service Operations Management. Much of his research is in cooperation with industry: he works with companies such as ASML, Canon, Dutch Railways, Philips, Marel, the Royal Dutch Airforce, the Royal Dutch Navy, Thales, and Vanderlande. He has been vice-dean IE of the Department of IE&IS since September 2017. Furthermore, he is a board member of the Service Logistics Forum. For a list of publications, see: https://research.tue.nl/en/persons/geert-jan-jan-van-houtum/publications/
About 10 years ago, we started with predictive maintenance research in the Netherlands. In my projects, we have studied systems in the high-tech, maritime, and chemical industries. In this presentation, I present a general predictive maintenance approach that works for systems in which a limited set of components causes most of the failures. This approach builds on stochastic processes, data mining, Bayesian learning, machine learning, and Operations Research techniques. We will also discuss what we can investigate in the coming 10 years. An important lesson of the past 10 years is that data analysis methods often lead to predictions with a certain percentage of false positives. That is often not good enough for users of systems to replace a component or module preventively, but these predictions can still be useful for being better prepared when a failure occurs.