
Process Machine Learning Takes Years (of Data) to Accurately Predict an Outcome

Published on 07/06/2017 | Technology


Peter Reynolds

Contributing Analyst, ARC Advisory Group


For process engineers or maintenance and reliability engineers who are embarking on a technology selection of machine learning tools for predictive strategies, here are some ideas to help you in your quest.

If you are looking for a solution that uses vibration measurements as a means of predicting the health of rotating equipment in the process industries, then you need to look deeper. For rotating equipment, by the time the vibration alarm occurs, the damage is done. This single-variate, vibration-centric, threshold- or rules-based monitoring should be part of the solution, but not the only solution. It is condition-based monitoring, and it has been around for decades. Predicting a future unplanned event needs to consider the upstream and downstream process variables that impact the plant asset. In other words, how is the asset operated in the process?
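To make the contrast concrete, here is a minimal Python sketch of the two approaches. All tag names, alarm limits, and "normal operation" statistics below are illustrative assumptions, not real plant data; the multivariate score uses a plain Mahalanobis distance as a simple stand-in for a learned model.

```python
import numpy as np

# Hypothetical readings for one pump: vibration plus the upstream and
# downstream process variables that influence it (names are illustrative).
readings = {"vibration_mm_s": 3.1, "suction_pressure_bar": 2.0,
            "discharge_flow_m3_h": 140.0, "motor_current_a": 52.0}

# 1) Traditional condition-based monitoring: a single-variate threshold.
#    By the time this fires, the damage may already be done.
VIBRATION_ALARM = 7.1  # an assumed alarm limit for illustration


def vibration_alarm(r):
    return r["vibration_mm_s"] > VIBRATION_ALARM


# 2) A multivariate view: score how far the *combination* of variables sits
#    from normal operation, given historical means and covariance.
def anomaly_score(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))


# Fabricated "normal operation" statistics for illustration only.
mean = np.array([3.0, 2.1, 145.0, 50.0])
cov = np.diag([0.25, 0.04, 25.0, 4.0])

x = np.array([readings[k] for k in
              ("vibration_mm_s", "suction_pressure_bar",
               "discharge_flow_m3_h", "motor_current_a")])

print(vibration_alarm(readings))    # the single-variable alarm is silent...
print(anomaly_score(x, mean, cov))  # ...while the combined deviation is visible
```

The point of the sketch: the vibration value alone clears the threshold, but the combined deviation across process variables is already measurable, which is exactly the signal a multivariate model can act on early.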

The emergence of big data and analytics techniques, coupled with the steep rise in computing capability, has led to the development of machine learning solutions. Machine learning algorithms are designed to predict process and mechanical outcomes.

Machine learning is a form of operational analytics. Three things make machine learning different from conventional predictive analytics. First, machine learning applications are self-modifying and highly automated: the algorithms are designed to adapt continuously and improve their performance with minimal human intervention. Second, machine learning algorithms are embedded in the process workflow: they become seamlessly integrated into the process to the point where they are invisible to the user or operator. Third, machine learning algorithms are truly in their element solving problems that are just too difficult or complicated for human programmers to code.
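The "self-modifying" idea can be sketched as a toy online learner whose parameters update with every new observation, with no retraining step or human in the loop. The class, learning rate, and numbers here are illustrative only, not any vendor's algorithm:

```python
class OnlineBaseline:
    """A toy adaptive baseline: each sample nudges the model's parameters."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # learning rate: how quickly the model adapts
        self.mean = None     # current estimate of "normal" operation

    def update(self, x):
        # The model modifies itself with every observation it sees.
        if self.mean is None:
            self.mean = x
        else:
            self.mean += self.alpha * (x - self.mean)
        return self.mean

    def residual(self, x):
        # Deviation from the adapted baseline, usable as an anomaly signal.
        return 0.0 if self.mean is None else abs(x - self.mean)


m = OnlineBaseline(alpha=0.2)
for value in [10.0, 10.2, 9.9, 10.1]:   # steady operation: baseline settles
    m.update(value)
print(m.residual(14.0))                  # a sudden excursion stands out
```

Real adaptive models are far richer than a moving baseline, but the operational pattern is the same: the algorithm improves its own estimate continuously as data streams in.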

In my blog post last month, I wrote about the changing information architectures for process automation in the article "Will Machine Learning Eat Historian". These can be challenging words for engineers and operators who religiously comb through trends and custom Microsoft Excel sheets to analyze data. The most interesting part of machine learning is that it takes this effort away. It provides a platform to capture knowledge of process anomalies and events, and it highly automates the laborious task of data mining, which is why machine learning will "eat" the historian. It may not entirely replace this information architecture, but it will at least consume its data. Process historians are good at storing data from many target systems and provide a good foundation for data cleansing. After all, process data is far from pristine and contains many gaps and collection errors.
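Those gaps and collection errors are worth seeing concretely. A minimal sketch of the cleansing step, assuming a historian export with one tag on an irregular timestamp grid (the timestamps, values, and column name are fabricated for illustration):

```python
import numpy as np
import pandas as pd

# A toy historian export: one tag nominally sampled every minute, with two
# missing minutes and one bad (NaN) reading, as real exports often have.
ts = pd.to_datetime(["2017-06-01 00:00", "2017-06-01 00:01",
                     "2017-06-01 00:04", "2017-06-01 00:05"])
df = pd.DataFrame({"value": [101.2, 100.8, np.nan, 101.0]}, index=ts)

# Regularize to a 1-minute grid (which exposes the missing 00:02 and 00:03
# samples), then interpolate across the gaps before any model sees the data.
clean = (df.resample("1min").mean()
           .interpolate(method="time"))

print(len(clean))                    # 6 rows: the gap minutes now exist
print(int(clean["value"].isna().sum()))  # 0: no holes left for the model
```

However the cleansing is done, the principle is the same: a model trained on gappy, error-laden time series learns the collection artifacts along with the process, so this step comes first.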

Now back to the title of this post and why machine learning takes years (of data). The Abnormal Situation Management Consortium and the Control of Major Accident Hazards (COMAH) have researched and publicized the fact that major incidents are becoming less frequent but occurring with much higher consequences. The data that could be used to predict these outcomes is likely already sitting idle in your process historian. You can certainly begin by feeding machine learning only streaming data, but the important events are buried in the time-series data held in the process historian archives. The machine learning and analytics suppliers that understand this well have solutions designed to ingest and re-index your entire process history records, accomplished using a "bolt-on" solution or an enterprise distributed computing platform such as Hadoop or Spark.
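The archive-versus-stream distinction can be illustrated in a few lines. In this sketch (all values fabricated), the statistics of "normal" are fit on the full archive, which contains one rare excursion of the kind the live stream alone would never have taught the model; new streaming values are then scored against those statistics:

```python
import statistics

# Years of archived historian data, including one rare process excursion.
archive = [50.1, 49.8, 50.3, 50.0, 49.9, 71.5, 50.2, 50.1]

mu = statistics.mean(archive)
sigma = statistics.stdev(archive)


def z_score(x):
    """How many standard deviations a live value sits from archived normal."""
    return (x - mu) / sigma


# Live streaming values: two routine readings and one excursion-like value.
stream = [50.0, 50.4, 70.9]
flags = [abs(z_score(x)) > 2.0 for x in stream]
print(flags)  # only the excursion-like reading is flagged
```

A model fed only the last few weeks of streaming data would have no statistical memory of the excursion; fitting on the full archive is what makes the rare event recognizable when it starts to recur.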

Finally, when assessing the market, ask your machine learning vendor: "Does the solution use only streaming data, or the entire set of offline and online archives?"

This article was originally posted on ARC Advisory Group.
