
Creating knowledge-rich value from data

Advanced Chemistry Development (ACD/Labs) shares this 20th anniversary with Scientific Computing World, having been established in 1994 with a handful of chemistry-focused software applications. Our focus and product portfolio have changed to reflect customer demands such as improved productivity, knowledge retention, broader global access to information, and an overall need for faster chemistry discovery and insight. Computing devices have become an integral part of the chemistry research and development process, and are no longer viewed as either a panacea or a threat.

The business environment is demanding and increasingly global, a trend likely to continue. Twenty years ago, customers used our predictive modelling software to help generate insight and to eliminate erroneous directions, for example by predicting the physicochemical properties of future drug candidates that might adversely affect product bioavailability. While still relying on various predictive models and interpretative/analytical tools, customers are now also demanding instant access to existing knowledge to drive decisions. Scientists and their organisations want real-time access to their colleagues’/partners’ findings, whether they are down the corridor or overseas. Easy access to legacy problem-solving is also a valuable resource for more efficient science.

Similarly, there is a growing demand for intelligent ‘silent automation’ to remove non-value-added tasks. Scientists complain that they have no time to be inventive and insightful under the barrage of tedious routine duties.

ACD/Labs’ recent Symposium on Laboratory Intelligence brought together scientists from various industries, and automation was a hot topic. Chemists from GlaxoSmithKline, Eli Lilly, AstraZeneca, and Lexicon Pharmaceuticals chaired a panel discussion on Automated Structure Verification (ASV). This technology automates the process of chemical structure verification/confirmation by combining automated processing of data and knowledge-driven interpretation, to evaluate the match between a chemical entity and acquired spectral measurements. ASV has been employed to evaluate the quality of outsourced or commercially available synthetic building blocks (a problem that has gained widespread attention in recent years) and is useful in reducing the heavy burden of routine spectroscopic data interpretation in analytical labs. The panel agreed this technology is currently performing at the level of a trained chemist, and is ‘tantalisingly close’ to having routine applications evaluating large quantities of data that labs simply do not have the capacity to handle. Deployment of such ‘silent automation’ gives scientists more time to focus on complex problems.
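At its core, the verification step described above asks whether the data predicted for a proposed structure agrees with the data actually acquired. The sketch below is a deliberately simplified illustration of that idea for NMR chemical shifts; the function name, peak values, and 0.2 ppm tolerance are illustrative assumptions, not ACD/Labs' algorithm:

```python
# Hypothetical sketch of an automated structure verification (ASV) check:
# compare NMR chemical shifts predicted for a proposed structure against
# observed peaks, and report whether every prediction finds a match.
# All names and the tolerance value are illustrative assumptions only.

def verify_structure(predicted_shifts, observed_peaks, tolerance=0.2):
    """Return (verified, unmatched) for a simple peak-matching check.

    predicted_shifts: predicted chemical shifts (ppm) for the structure
    observed_peaks:   peak positions (ppm) taken from the spectrum
    tolerance:        maximum allowed deviation (ppm)
    """
    remaining = list(observed_peaks)
    unmatched = []
    for shift in predicted_shifts:
        # Find the closest still-unassigned observed peak.
        best = min(remaining, key=lambda p: abs(p - shift), default=None)
        if best is not None and abs(best - shift) <= tolerance:
            remaining.remove(best)  # each observed peak may match only once
        else:
            unmatched.append(shift)
    return (len(unmatched) == 0, unmatched)

# Example: every predicted shift finds an observed peak within 0.2 ppm.
ok, missing = verify_structure([7.26, 3.70, 1.25], [7.30, 3.65, 1.20])
print(ok, missing)  # → True []
```

A production system would of course also weigh peak intensities, multiplets, and multiple techniques (NMR, LC/MS) before issuing a verdict; the point here is only the pass/fail matching logic that makes unattended, high-throughput screening possible.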

Both personal and professional

Access to information has changed dramatically with our ability to instantly ‘Google’ any question we can think up. In the chemistry world, scientists should have similar expectations of how they access information, data, and knowledge. I can find out what Churchill wrote to Stalin in the 1940s in 0.43 seconds, but have to work harder and wait longer to find out what my colleagues are doing.

Chemists receive enormous volumes of data from instrumentation and IT systems. But can they find what they need quickly? To me, the problem is not too much data, but the lack of specialised data organisation, clever access, and intelligent comprehension. This is where technology should, and will, go in the future.

There is a lot of talk about Big Data concepts and the benefits of cloud technologies. Ten years from now there will be different concepts offering new benefits; however, the fundamental challenges worth solving are the organisation of, and access to, relevant data (‘small data’, sitting in silos all over the globe, begging to be used productively), and the speed and readiness of access, retrieval, and reuse.

Many chemistry organisations are working to define which data is needed, and how to normalise, structure, and standardise it to the degree required.

An example of a ‘small data silo’ is interpreted spectral characterisation knowledge. Gaining seamless access to it can result in notable ROI. A renowned spectroscopist recently told us about a request he received for several impurities to be re-characterised urgently under the pressure of an FDA data submission. The work had originally been done by a CRO partner, but no analytical data or knowledge had been transferred between the parties. It took extraordinary effort from skilled scientists, along with extra expenditure, to respond to the inquiry. A spectral knowledge management and collaboration network focused on impurity resolution would have enabled timely reporting with little effort. As we bring this spectral access to servers and clouds, an instant view into joint studies becomes a reality.

Chemistry-driven industries use sophisticated instrumentation that provides ever more data, and those industries are increasingly global in nature. The next decade will be about helping chemists create knowledge-rich value from that data, and about intelligent reuse of the resulting knowledge. It will require a finely designed data architecture that understands the needs, specific objects, and working methods of the chemist user, enhancing their creative abilities by delivering all the required information and computer-assisted suggestions through their future device.


