We are all used to hearing about big data and how, by bringing together every piece of information about a problem, we can apply artificial intelligence and predict future trends. I often hear that this can simply be applied to maintenance problems, and that we will suddenly achieve the nirvana of predictive maintenance, with total visibility of failures before they occur.
Whilst there are examples of this in specific use cases, typically high-value, homogeneous asset fleets where the cost of data is a fraction of the cost of the asset or the cost of failure (e.g. aero-engines), there are substantially fewer success stories across wider industry. There are a number of reasons for this:
- Data sets are expensive. Techniques such as full-waveform vibration analysis and motor current signature analysis are very mature and are widely deployed on critical assets. However, a single vibration monitoring point is likely to cost of the order of £1,000, making mass deployment to medium-criticality assets cost prohibitive.
- Merging different data sets is technically challenging. The issues around integrating operational technology (OT) and information technology (IT) are significant: different architectures, different protocols and different approaches to security. Even once connectivity has been addressed, sensor data will be accurate to the millisecond, while failure data often relies on human data capture and may only be accurate to the hour (see the sketch after this list).
- There’s a lack of failure events. Predicting failure with AI relies on large data sets of leading indicators paired with the failure events they precede. However, maintenance professionals work hard every day to prevent and mitigate failure, catching many potential failures before they occur.
- Hidden failures have no leading indicators. Many potential failures are hidden, with the failure only being apparent when you operate the system. These failures are found through regularly testing the functions of the system. In this case, not only is there a lack of failure data, but also a lack of indicators.
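To make the timestamp mismatch concrete, here is a minimal sketch of aligning millisecond-accurate sensor readings with hour-accurate, human-captured failure events using a tolerance-based merge in pandas. The data, column names and one-hour tolerance are all illustrative assumptions, not a prescription.

```python
import pandas as pd

# Hypothetical sensor readings, timestamped to the millisecond.
sensor = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-03-01 08:15:02.113",
        "2024-03-01 09:47:55.902",
        "2024-03-01 11:02:10.448",
    ]),
    "current_a": [12.4, 18.9, 12.6],
})

# Hypothetical failure log, captured by hand and accurate only to the hour.
failures = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-03-01 10:00:00"]),
    "event": ["motor trip"],
})

# Pair each failure with the nearest sensor reading within a one-hour window,
# reflecting the coarser resolution of the human-captured data.
merged = pd.merge_asof(
    failures.sort_values("timestamp"),
    sensor.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("1h"),
)
print(merged)
```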
To mitigate the impact of these issues, the concept of small data can be applied to predictive maintenance. Instead of thinking in terms of massive new data sets and complex predictive models, think in terms of extracting as much value as possible from limited data sets, and use this to identify where to deploy resources, either labour or investment, to understand the problem more fully. Instead of looking to predict failure, look to identify abnormalities. The steps to do this are:
- Identify what data you already have - which is often a lot more than you think. Modern field devices are full of data, but it’s rarely utilised.
- From this, choose the parameters that could tell you what is normal or abnormal. For example, monitoring the current, voltage and phase angle for all phases of an induction motor will tell you a lot about how things are changing over time.
- Use a suitable data tool to identify normal conditions during a training period. At its simplest you can do this in Excel, or with any of a range of statistical tools (a minimal sketch follows this list).
- Once you’ve identified normal, you are only interested in abnormal readings - use these to trigger deeper investigation through, for example, full-waveform vibration analysis, thermography or oil condition monitoring.
- Depending on the data tool you use, you can feed the outcome of the human investigation back into the model to improve the validity of the abnormality detection.
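As a minimal sketch of the last four steps, the following assumes hourly readings of current, voltage and phase angle from a single induction motor, learns a baseline over a training period, and flags anything more than three standard deviations from that baseline. The simulated data, the three-sigma threshold and the feedback rule are all illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# --- Learn "normal" from a training period -------------------------------
# Hypothetical week of hourly readings: current (A), voltage (V) and phase
# angle (degrees). Real data would come from a historian; here we simulate it.
train = pd.DataFrame({
    "current_a": rng.normal(12.5, 0.3, 168),
    "voltage_v": rng.normal(415.0, 2.0, 168),
    "phase_deg": rng.normal(30.0, 0.5, 168),
})
baseline_mean = train.mean()
baseline_std = train.std()

# --- Flag abnormal readings -----------------------------------------------
def abnormal(reading: pd.Series, k: float = 3.0) -> pd.Series:
    """Flag any parameter more than k standard deviations from its baseline."""
    z = (reading - baseline_mean) / baseline_std
    return z.abs() > k

new_reading = pd.Series({"current_a": 14.1, "voltage_v": 414.2, "phase_deg": 30.3})
flags = abnormal(new_reading)
if flags.any():
    print("Abnormal:", list(flags[flags].index), "- trigger deeper investigation")

# --- Feed back the investigation outcome ----------------------------------
# If the engineer confirms the reading was actually fine (a false alarm),
# fold it into the training set so the view of "normal" keeps improving.
investigation_found_fault = False  # outcome of the human investigation
if flags.any() and not investigation_found_fault:
    train = pd.concat([train, new_reading.to_frame().T], ignore_index=True)
    baseline_mean = train.mean()
    baseline_std = train.std()
```

The same logic can be reproduced in Excel with simple average and standard deviation columns; the point is the workflow, not the tool.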
Focusing on small, rather than big, data lowers the cost, technology and organisational barriers to entry, letting you use data in a smarter way to enhance your maintenance effectiveness.