The chemistry of real-time analytics

Lloyd Colegrove

In advance of his keynote speech at April's Paperless Lab Academy event, Lloyd Colegrove highlights some experiences from 30 years of engagement in chemical manufacturing

I hold a doctorate in Chemical Physics from Texas A&M University, where I was fortunate to work in an R&D group that required synthetic, spectroscopic and quantum mechanics (read 'computational') skills, back in the day when this was not the norm. I spent almost 30 years at the Dow Chemical Company, starting in R&D but, suffering from boredom, I decided against my mentors’ advice and entered the maw that is chemical manufacturing to see if that would be more interesting.

Within six months I discovered an ugly truth – engineers and operators didn’t know how to handle data or data uncertainty and, because of this, they were making poor decisions in running the plants based solely on the next data point. I also found that many of the quality labs were all but screaming at operations that a problem was looming, but because of what the engineers and operators were seeing on their GPI screens, they didn’t believe the labs. There was general frustration all around. While my initial day job was supporting the global plant lab network with improvement projects and helping to solve individual plant problems around the globe, understanding data and how to use it became my primary interest and set me up for 22 years of rewarding work.

Before I got into real-time data, one of the first unusual things I did was invite a young colleague in on a project. He was a chemical engineer with a doctorate in data science, and he had special skills. We applied, for the first time, a chemometric model (today this would be referred to as artificial intelligence (AI); back then we were not aware of the term) to a batch plant, allowing multivariate discernment in the lab data. This project revealed random yet systemic problems in the batch production. While it took two years to solve the mysterious problem that had existed unnoticed for decades, we used the model to protect our customers from receiving certain batches of material that were 'within specification' from a univariate approach (the normal way of looking at the data) but far out of normal from a multivariate view (something no one had looked at in the chemical industry before). We published the first paper on this application in 2007. This greatly increased the lab’s ability to support the plant and gave it much greater credibility for its measurement process, which had lacked analytical proof before this project.
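The univariate-versus-multivariate distinction can be illustrated with a minimal sketch (this is not the actual Dow model – the data, spec limits and T²-style cut-off below are invented for illustration). A batch can sit inside every individual spec limit yet break the correlation structure of normal production, which a Mahalanobis-distance check against historical batches exposes:

```python
import numpy as np

# Synthetic historical, in-control batch data: two strongly correlated
# lab measurements per batch (illustrative values only).
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
history = rng.multivariate_normal(mean=[50.0, 50.0], cov=cov, size=500)

mu = history.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(history, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of a batch from the historical centre."""
    d = x - mu
    return float(d @ inv_cov @ d)

def within_univariate_spec(x):
    """Hypothetical univariate specs: each measure must lie in [47, 53]."""
    return bool(np.all((x >= 47.0) & (x <= 53.0)))

# A batch that passes each spec individually but breaks the correlation:
batch = np.array([52.5, 47.5])
t2_limit = 13.8  # ~chi-square (2 dof) 99.9% point, an illustrative cut-off

print(within_univariate_spec(batch))     # looks fine univariately
print(mahalanobis_sq(batch) > t2_limit)  # far outside the normal cloud
```

The second check flags the batch even though the first passes, which is exactly the 'within specification but out of normal' situation described above.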

As the teams I developed gained speed within the business, I was given greater resources to go after another problem – one that I will demonstrate in the PLA talk. The problem is data, and too much of it. No human being can manage all the data a plant generates. There is value in all that data, but much of it is overlooked or simply ignored in the running of the plant because of the sheer volume of minute and shift data. The only time engineers might look at a broader data set is after a problem has occurred that slows or shuts the plant down. I reasoned that if you can tell after the fact that you had a problem, and understand it just by looking at more data, you should be able not only to see the problem 'in flight' and address it (i.e. 'real-time analytics'), but possibly (with AI) to predict or anticipate the problem and address it much earlier.

To this end I went to my friends at Northwest Analytics and together we collaborated on developing their FOCUS Enterprise Manufacturing Intelligence suite. For the first time, plants were able to look at much larger groupings of data and see – in real time – how the data was behaving in the plant. This led to improvements in plant operation almost immediately, and there was clear value created – less fouling, eliminated 'surprise' events, more eyes on the data (experts from around the globe could look in very quickly at how a plant was running and make a phone call if they saw something that concerned them). The list goes on. One of my sites very enterprisingly installed FOCUS EMI as a monitoring system for the plant labs. Their idea: if you can monitor a plant while it’s running, why not monitor the lab instruments in real time in the same way? Absolutely brilliant. A lab technologist comes in from a weekend away with friends and family and can – in seconds – ascertain the health of their measurement systems and plan their day or week. And, if they like, they can troubleshoot from home on their smartphone if necessary. The tool is used to track and monitor calibrations as well as process measures. It makes the lab more efficient, and less costly to run.
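The lab-monitoring idea can be sketched in miniature with a Shewhart-style control chart on a daily calibration-check standard (this is only a toy illustration of the principle – FOCUS EMI's actual rules and data model are not shown, and the readings below are invented):

```python
# Toy sketch: 3-sigma control limits on a calibration-check standard,
# flagging any check that falls outside the in-control band.

def control_limits(baseline):
    """Lower and upper 3-sigma limits from an in-control baseline period."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    sigma = var ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def flag_out_of_control(readings, lcl, ucl):
    """Indices of calibration checks outside the control limits."""
    return [i for i, x in enumerate(readings) if not (lcl <= x <= ucl)]

# Invented baseline: eight stable checks of a 100-unit standard.
baseline = [100.1, 99.9, 100.0, 100.2, 99.8, 100.0, 100.1, 99.9]
lcl, ucl = control_limits(baseline)

# This week's checks: the instrument drifts on the fourth day.
week = [100.0, 100.1, 99.9, 101.5]
print(flag_out_of_control(week, lcl, ucl))  # [3]
```

A technologist (or a dashboard) scanning such charts instrument by instrument is, in essence, the 'ascertain the health of the measurement systems in seconds' workflow described above.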

The irony of AI in the chemical industry is that it hasn’t changed as much as I would like over the last 30 years. The issue is that AI was already present in the industry, doing a task that chemical engineers assigned it: supporting open- and closed-loop process control and use in on-line or at-line analysers. The mathematics of AI was not used for problem solving and problem avoidance – which should have been the next step after mastery of process control. The chemical industry started down this road in the 80s and has only grudgingly expanded the use of the mathematics beyond the control and on-line sensor arena. At Dow I became the 'personality' who pushed for the additional capabilities that I first employed with the invaluable help of my colleagues and of others we hired as the work expanded. The implementation of AI is a challenge. To those who say that chemical plants will be run by AI, I (and others) say 'not so fast'. There is no data set complete enough for an algorithm to run a plant without human monitoring and intervention. It is for humans to learn to use AI to help them run the plants, not the other way around.

Where the industry has changed dramatically is in the focus on real-time analytics and on anticipating and planning for problems to come, rather than doing a post-mortem on a plant after a problem caused it to fail. This is a key difference in chemical production today. Real-time analytics helped my global labs and plant engineers and operators bolster their understanding of the plants and the data, and increased the credibility of the information coming out of the labs. It’s a fight – changing habits and perspectives on data visualisation and manipulation is very difficult in the engineering and plant-operation community – but the industry continues to move forward in an area that has positive impacts on people, the environment and our customers.

Lloyd Colegrove recently retired as the director of data services and the director of fundamental problem solving within manufacturing and engineering. He was also the analytics platform director for Dow’s Manufacturing and Engineering’s Industry 4.0 program

Paperless Lab Academy will take place at Lake Maggiore, Italy, on 26 and 27 April.