ANALYSIS & OPINION

Picking up the pace at PLA

Robert Roe discovers that data integrity, process improvement, and managing data effectively are key challenges for informatics providers ‘finding the speed to innovate’, as the theme for the latest Paperless Lab Academy suggested.

Finding the speed to innovate requires more than just the introduction of technology, as attendees of the Paperless Lab Academy, held in Barcelona, Spain, learned during the two-day conference. Real innovation requires users to change their mindset, to manage the adoption of new technologies efficiently and, in some cases, to redefine how and why informatics data is captured.

During the opening sessions Peter Boogaard, the chairman of the Paperless Lab Academy, stressed that one of the most significant trends in the industry today is implementing practical strategies to convert scientific data into business value.

Boogaard said: ‘Automated self-documenting data capturing processes are becoming the standard best practice to increase data integrity. Using non-invasive, end-to-end strategies to create a full traceable process will connect science to operational excellence. Technology will be critical, but our ability to change our mind-set to enable this cross functional collaboration will be the real challenge.’

Changing one's mindset may seem trivial, but changing the mindset of an entire organisation is a difficult task. Realising the benefits of paperless technologies requires both scientists and management to understand the potential for added value. This must be demonstrable, both in terms of time to science and through the potential economic impact of more efficient operations, Boogaard said.

Increasing data integrity

Once a strategy has been selected, implementation can take considerable time, resources and even additional staff training. On top of this sits another layer of complexity specific to paperless technologies in the laboratory: the requirements of data integrity and their impact on validation and quality control.

Recent clarification from the European Commission on personal data protection applies existing standards more precisely, bringing paperless technologies within the scope of the validation process. At the very least, paperless technologies need to provide the same level of validation as the paper-based processes they replace.

Management complexities

Another aspect of laboratory automation is the complex relationship between instrumentation and data repositories. The current system uses many different proprietary file formats, meaning that information is locked into an application.

Today’s reality for many laboratories is a half-hearted attempt to implement paperless technology. This generally includes some form of LIMS/ELN and some level of integration with laboratory instruments, each with their own data or file formats. This combination of different file formats makes managing these different streams of data much more difficult.

Two major strategies have emerged to combat this complexity and streamline data acquisition and management. The first, from the SiLA consortium, focuses on a single, unified format for laboratory instrument data. The second, the Allotrope Foundation, is a pre-competitive consortium designing a ‘laboratory framework’ to improve the efficiency of data acquisition.

The SiLA consortium aims to establish international standards for connectivity in lab automation. So far this work is taking shape as a new interface and data management standard that allows rapid integration of lab automation systems.

Burkhard Schaefer, president of BSSN Software, stressed that the systems currently used in laboratories rely on many proprietary data formats, making them more complicated than they need to be. Schaefer said: ‘Today everything is file-based. We are moving files around, but why these files? We have infrastructures in place, communication protocols, integration buses and so on. Why don’t we learn from the way that others are doing it, how other industries are doing it, and use cutting-edge technologies?’

The SiLA consortium aims to streamline the use of laboratory instrumentation by using this new platform to collect data from many different instruments, each with its own proprietary format. A conversion pipeline then translates the data into the AnIML XML format, normalising it into a single structure that can be shared and presented on different platforms.
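As a rough illustration of what such normalisation involves (this is not actual SiLA or AnIML code; the element names are simplified stand-ins for the real AnIML schema), a conversion step might wrap readings from any instrument in one common XML envelope:

```python
# Illustrative sketch only: normalising proprietary instrument output
# into a single XML structure, in the spirit of the AnIML approach.
# Element names are simplified stand-ins, not the real AnIML schema.
import xml.etree.ElementTree as ET

def normalise_reading(instrument_id, technique, points):
    """Wrap raw (time, value) pairs from one instrument in a common XML envelope."""
    root = ET.Element("AnalyticalData")
    step = ET.SubElement(root, "ExperimentStep",
                         instrument=instrument_id, technique=technique)
    series = ET.SubElement(step, "Series", name="signal")
    for t, v in points:
        ET.SubElement(series, "Point", time=str(t)).text = str(v)
    return ET.tostring(root, encoding="unicode")

# Two instruments with different native formats reduced to one structure:
hplc = normalise_reading("HPLC-01", "chromatography", [(0.0, 12.5), (0.5, 13.1)])
nmr = normalise_reading("NMR-02", "spectroscopy", [(0.0, 0.91)])
```

Once every instrument's output passes through a step like this, downstream tools only ever need to understand the one common structure.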

Schaefer stated: ‘True integration requires two things: it requires communication, and it requires data being exchanged. It will only work if we have both; communication and data need each other. If we only have a communication protocol then, yes, we have a nice way of moving data around, but the payload is proprietary.’

In addition to the platform itself, data-mapping middleware is also being made available. This middleware allows users to propagate experimental data to third-party tools such as LIMS, ELN, databases, visualisation tools, statistics software and ERP systems. This approach reduces the number of interfaces required to connect the different components of a paperless laboratory and facilitates interoperability.
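The reduction in interfaces can be sketched with back-of-envelope arithmetic (the counts below are purely illustrative): connecting every instrument directly to every application needs one bespoke interface per pair, whereas routing everything through a common format needs only one adapter per system.

```python
# Back-of-envelope comparison of integration strategies.
# Counts are illustrative, not drawn from any real laboratory.
instruments, applications = 8, 5

# Point-to-point: one bespoke interface per (instrument, application) pair.
point_to_point = instruments * applications

# Common-format hub: one adapter per system, all speaking one format.
via_common_format = instruments + applications

assert via_common_format < point_to_point  # 13 adapters versus 40 interfaces
```

The gap widens quickly as a laboratory adds instruments and applications, which is why a shared format pays off most in larger deployments.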

The result is that laboratory staff do not have to deal with proprietary file formats, making their data more accessible and easier to transfer and share, whether internally or with external collaborators.

Project unification

Another project to promote unified laboratory operations is the Allotrope Foundation, an international association of pharmaceutical and biotech companies dedicated to the building of a ‘Laboratory Framework’ to improve efficiency in data acquisition, archiving, and management.

The Allotrope Foundation is creating systems to support an intelligent, automated, and integrated analytical laboratory that allows scientists to focus on science rather than searching for, assembling, transforming, and reporting data.

An important part of this integrated laboratory is that hardware components are seamlessly shared between software applications, and one-click reports can be produced using data generated on any analytical instrument. While Allotrope and SiLA approach these issues with slightly different objectives, both aim to alleviate the unnecessary data management present in today’s paperless laboratories.

Dr Gerhard Noelken, director of informatics at Pfizer and Allotrope Foundation, said: ‘When I walk through the lab there are still a lot of scientists that complain about their laboratory systems. “I cannot get to my data; I cannot easily link it to other information sources”.’

Noelken explained that, while the lab of the future is being discussed today, we still have a long way to go before we can fully realise its potential. This was a sentiment echoed by Boogaard, as any change to procedure requires that a company not only understand the challenge but also adapt its mindset to a new strategy based on paperless technologies: ‘It is about having all of the contextual information. It is all about generating that extra value for the scientist, and that extra value is really in the use of the semantic technology.’

The main difference between SiLA and Allotrope is that, while both focus on streamlining the use of data in the lab, SiLA concentrates on a unified format for chemistry data, whereas the Allotrope Foundation aims to let laboratory users store and manage their data more effectively. One aspect of this is the addition of contextual metadata, giving users a clearer picture of a data set’s potential value.
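A minimal sketch of what attaching contextual metadata might look like (the field names here are purely illustrative, not drawn from Allotrope's actual data models): the raw numbers travel together with the context that makes them findable and interpretable later.

```python
# Hypothetical sketch: attaching contextual metadata to a raw data set
# so later users can judge its provenance and potential value.
# Field names are illustrative, not Allotrope's actual data models.
raw_signal = [12.5, 13.1, 12.9]

dataset = {
    "data": raw_signal,
    "metadata": {
        "instrument": "HPLC-01",         # which instrument produced it
        "operator": "j.smith",           # who ran the experiment
        "protocol": "assay-v3",          # which protocol was followed
        "captured": "2016-04-12T09:30",  # when it was captured
        "units": "mAU",                  # what the numbers mean
    },
}

def searchable_terms(ds):
    """Flatten metadata into terms a data catalogue could index."""
    return sorted(f"{k}:{v}" for k, v in ds["metadata"].items())
```

With the metadata indexed this way, a scientist searching by instrument, protocol or date can find and reuse the data set without having to open the original file.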

Driving efficiency through process improvement

While data integrity is a key challenge, the PLA event also highlighted the need for process improvement. Several speakers at the event noted that for larger organisations, efficiency savings and process improvement can provide the biggest benefits.

IDBS’s Paul Denny-Gouldson discussed the myriad choices facing informatics users (usually denoted by a three- or four-letter acronym). He argued that it is not the acronyms we should be focusing on, but which features an organisation requires to get the most out of its workflows.

A user’s needs ‘have to be based on requirements and then capabilities required to deliver those requirements’, said Denny-Gouldson, who went on to explain that the foundation of these activities is efficient data management: the ability to capture and manage all the different types of data associated with studies and samples. ‘The trick comes in exposing this to all the different users and workflows in a succinct and simple manner.’

Matt Harrison, IT strategy and portfolio leader at AstraZeneca, shared the view that data management is key to the success of informatics companies; however, he argued that for large organisations such as AstraZeneca, it is insight that ultimately provides the most benefit: ‘When you listen to some of these talks at these sorts of events there is quite a lot of emphasis put on efficiency and the potential for going faster. Of course, going faster is very important but when you truly look at what adds value to a company like ours, operational efficiency is there, but it is probably the least important in terms of adding value. The real key here is insight.’

Harrison explained that for companies like AstraZeneca, with around 900 scientists creating information, there is a huge challenge around making that process easier from a regulatory perspective. AstraZeneca has considerable experience with this kind of deployment, having recently finished its implementation of a fully integrated laboratory informatics platform across its entire business.

‘How do we pull all of this together, knowledge, data and insight, and convince the scientists that they are not generating data for themselves but for an organisation?’ asked Harrison.

Putting a price on change

One aspect of this, as Peter Boogaard highlighted, is changing the mindset of an organisation. At first, this can mean demonstrating the value of change so that it is accepted by the organisation.

Harrison explained that during implementation of AstraZeneca’s new paperless system, researchers still had reservations about the potential benefits: ‘The scientists say the data is too hard to structure so the cost to do it is too high for the benefits on the other side.’

However, as in AstraZeneca’s case, there is a further challenge even once a system has been implemented: driving researchers to make the most of these new technologies. This can be done in a variety of ways, from reusing information across different projects to using archives of previous work to inform an intelligent decision; for example, to explain why a particular protocol might fail. These insights, Harrison argues, are not possible without an informatics platform that allows users to develop and apply the insight provided by efficiently managed data.

The technology for paperless informatics is available today; the informatics community must now take the next step and embrace it. Without full-scale adoption across the wider community, these technologies will never reach the critical mass necessary to drive further innovation.
