
Will Machine Learning Eat Historian?

Blog Post created by arcadvisory on Jun 24, 2016

By Peter Reynolds

 

For most process industry owner-operators, there remains a need to predict precisely when an asset will fail or to detect when a process will degrade – while eliminating false positive diagnoses. Single-variate approaches such as condition-based monitoring have played a role, but they are prone to false positives and have struggled to win acceptance from operations. The other standard tools for maintenance and process engineers have been plant historian desktop tools. Although useful, these make it difficult to create context or generate predictions without considerable time and effort from the end user. Machine learning technology provides an opportunity to change this game.

 

Production availability and asset uptime – while providing the lowest possible cost to customers – remain key imperatives for owner-operators. To help address this, a new class of technology applied to process data has recently emerged. Machine learning, although not new, has until recently seen little application to process data. This new breed of technology mainly leverages time-series or process historian data, but can also incorporate other, unconventional data sources. Machine learning for industrial process applications typically uses software and algorithms that look for non-obvious patterns in process historian data: patterns otherwise observable only by a trained eye, such as a process expert or an engineer familiar with how assets perform and behave in the physical process.

 

Generically speaking, machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a “field of study that gives computers the ability to learn without being explicitly programmed”. For the process industries, most of the data is already collected in process historian infrastructure. Historians, however, are not easily searchable and are not optimized for read access, so applying learning algorithms directly against a historian is impractical. Machine learning solutions essentially “re-index” historian data to provide a platform for algorithms that can detect process conditions and asset anomalies. These algorithms are designed to detect the extremely early onset of degradation through multivariate differences across all of the streaming signals, as well as temporal distinctions: small changes in process signals, offset in time. When properly conditioned, or “trained,” machine learning algorithms can recognize various operating and failure modes as very precise patterns in the signals.
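The multivariate idea above can be sketched in a few lines. The following is a minimal illustration, not any vendor's actual algorithm: it fits a mean and covariance to a window of "normal" two-sensor data, then flags new readings whose Mahalanobis distance from that baseline is large – catching a joint drift that each sensor's own 3-sigma alarm would miss. All tag values and the threshold are synthetic assumptions.

```python
import numpy as np

def fit_normal_model(X):
    """Estimate the baseline (mean and inverse covariance) from normal-operation data."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def mahalanobis_scores(X, mu, cov_inv):
    """Distance of each reading from the baseline, accounting for signal correlation."""
    d = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Synthetic two-sensor stream (e.g., temperature and vibration), normal operation
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 1.2], scale=[0.5, 0.05], size=(1000, 2))
mu, cov_inv = fit_normal_model(normal)

# Two new readings: one normal, one drifting ~2.5 sigma in BOTH signals at once
readings = np.array([[50.1, 1.21],
                     [51.25, 1.075]])
scores = mahalanobis_scores(readings, mu, cov_inv)
threshold = 3.0                      # illustrative alarm limit
flags = scores > threshold           # -> [False, True]
```

Note that the second reading stays inside a 3-sigma band on each sensor individually; only the combined, multivariate view flags it – which is the point of the paragraph above.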

 

Machine learning can identify normal conditions, abnormal conditions, and patterns that indicate degradation and impending process excursions or asset failure well before they happen. It can predict an accurate time-to-failure, indicating WHEN a known failure will occur, HOW it will occur, and WHAT to do about it – prescriptive advice such as the exact failure code linked directly from the EAM system. Knowing the lead time to a failure, often multiple days or weeks, allows the end user to determine the exact action necessary (often through discussions among the Operations, Maintenance, Technical, and Planning/Scheduling departments). Such prescriptive action enables the best remediation and timing decisions to avoid damage altogether, prevent a breakdown, and solve the problem in the most efficient manner.
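As an illustration of how a detected excursion might be linked back to a known failure mode and lead time, the sketch below correlates a recent signal window against a hypothetical library of labeled failure signatures, each carrying an assumed EAM failure code and historically observed lead time. The codes, patterns, and lead times are invented for the example; commercial solutions are of course far more sophisticated than a correlation lookup.

```python
import numpy as np

# Hypothetical library of failure signatures mined from the historian archive,
# each labeled with an (invented) EAM failure code and lead time to failure.
SIGNATURES = {
    "BRG-OVHT-012": {"pattern": np.array([0.0, 0.1, 0.3, 0.6, 1.0]), "lead_days": 14},
    "SEAL-LEAK-007": {"pattern": np.array([1.0, 0.8, 0.5, 0.3, 0.1]), "lead_days": 5},
}

def match_failure_mode(window, signatures):
    """Return the failure code whose signature best correlates with the window."""
    def ncc(a, b):
        # Normalized correlation: compare shapes, not absolute magnitudes
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))
    best = max(signatures, key=lambda code: ncc(window, signatures[code]["pattern"]))
    return best, signatures[best]["lead_days"]

recent = np.array([0.05, 0.12, 0.33, 0.58, 0.95])  # rising-trend excursion
code, lead_days = match_failure_mode(recent, SIGNATURES)
# -> matches "BRG-OVHT-012" with a 14-day lead time
```

The returned code and lead time are exactly the WHAT and WHEN the paragraph describes: enough for Operations, Maintenance, and Planning to schedule the remediation before the failure occurs.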

 

Some important points I have noted about the emerging machine learning market for the process industries:

  • Historian infrastructure has been designed for optimal write access and provides limited functionality for search, query, and predictive or prescriptive analytics.
  • Machine learning techniques, algorithms, computational learning, and artificial intelligence may drastically change owner-operators’ ability to predict and prescribe both process and maintenance events that impact the process industries.
  • A range of emerging solution suppliers now provides historian “bolt-on” process analytics without the need for a data scientist.
  • Many generic analytics solutions currently on the market require a significant investment in data science resources, while a few suppliers have built solutions that are not time-consuming to set up and maintain.

 

In summary, historian infrastructure will likely remain a significant data source for predicting outcomes and a platform for data cleansing. Engineers and operators may eventually migrate to machine learning and analytics tools that provide answers more efficiently. Many generic analytics companies claim that “prediction” can be done with only streaming data from target systems. The challenge for the process industries is that prediction using machine learning needs an extraordinary amount of time to “train” the system. Since major equipment failures occur at intervals of years and major process upsets happen infrequently, properly setting up a machine learning or analytics solution requires feeding it the entire historian archive – possibly 10 years of time-series data.
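To give a sense of the scale involved, slicing a decade-long hourly archive of a single tag into overlapping training windows looks roughly like the sketch below; the window and stride sizes are assumptions chosen for illustration.

```python
import numpy as np

def training_windows(archive, window, stride):
    """Slice a long historian archive into overlapping fixed-length training windows."""
    n = (len(archive) - window) // stride + 1
    return np.stack([archive[i * stride : i * stride + window] for i in range(n)])

# Ten years of hourly readings (~87,600 points) for one tag; a sine wave
# stands in for a real process variable here.
hours = 10 * 365 * 24
archive = np.sin(np.linspace(0.0, 500.0, hours))

# Week-long windows, advanced one day at a time -> 3,644 training examples
windows = training_windows(archive, window=24 * 7, stride=24)
```

Even this single-tag toy yields thousands of examples; multiply by thousands of tags and the value of re-indexing the historian archive for fast read access becomes clear.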

 

So will Machine Learning eat Historian? – Yes, for breakfast.

 

"Reprinted with permission, original blog was posted here"

About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a Boston based leading technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact nsingh@arcweb.com
