
The key layers of a laboratory paperless strategy

In the new era of the internet of things and artificial intelligence, the majority of laboratories still have a long way to go in moving from paper-based processes to paperless ones.

The electronic data life cycle, as described in several regulations and documents used in paperless projects, can be divided into four layers of data, information and activities: eConnect, eManage, eDecide and eArchive.

These keywords refer to the initial capture of data, the management of that data to create useful information, the decisions taken based on the information and data available in the lower layers and, finally, the electronic archiving of data to ensure long-term availability of the information and the related data.

These are the four main streams that will once more be discussed at this year’s Paperless Lab Academy. The annual European event aims to become a learning platform for anyone looking to consolidate, integrate or simplify their data management systems.

‘eConnect’: effective workflows based on self-documenting data capture strategies

While data integrity is a critical aspect of the entire data life cycle, data capture is an area of particularly strong focus for inspectors and auditors. Most lab instruments are now offered with intelligent software embedded into them. Labware and sensors are beginning to embrace the internet of things, ensuring the collection of raw data and the related metadata, which can then be transferred to the next phase of the data life cycle.

Several laboratories are using instruments which cannot connect to current platforms. While the business justification for their replacement is being sought, intermediate solutions should be considered to generate digital inputs and reduce paper-based processes and manual transcription. The goal is to reduce manual documentation and the risk of human error and, more importantly, to preserve the information about the source that generated the raw data.

The raw data may be a critical part of the activities performed in the systems of the upper layers. Data management, the creation of meaningful information and decision-making should always preserve the possibility of going back to the original data in the system in which it was generated.
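As a minimal illustration of this idea (the identifiers, field names and file reference below are hypothetical), a captured result can be stored together with its metadata and a pointer back to the raw data file in the generating system, so that the upper layers can always trace the information back to its source:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CapturedResult:
    """One raw data point plus the metadata needed to trace it to its source."""
    instrument_id: str   # e.g. serial number of the balance or chromatograph (hypothetical)
    sample_id: str       # sample reference used by the managing system (hypothetical)
    value: float         # raw measured value
    unit: str            # unit of measure
    raw_file: str        # path or URI of the original raw data file
    operator: str        # who (or what) generated the data
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example record handed to the 'eManage' layer; the raw_file reference
# keeps the link back to the system in which the data was generated.
result = CapturedResult(
    instrument_id="BAL-0042",
    sample_id="S-2018-0137",
    value=12.0031,
    unit="mg",
    raw_file="file://instruments/bal-0042/runs/2018-03-12/run-17.raw",
    operator="autosampler",
)
```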

Finally, at this first stage of collecting data, we should not overlook the data coming from collaborators. Collaborators are generators of data and potential sources of information. If external organisations, such as academic contributors or outsourced services from CROs and CMOs, are generating the data, immediate security concerns can arise. With the latest GDPR considerations, we need to incorporate data protection assessments, at least for the most vulnerable data.

By May 2018, companies will need to design their processes with serious consideration of cybersecurity protection to avoid any risk of losing data.

‘eManage’: generation of meaningful information from trusted data

The ‘translation’ from data to information is the key principle of this layer of activities, typically performed in the most well-known laboratory systems. The real challenge in the new era of the Internet of Lab Things (IoLT) is not about picking the right acronym for the lab. The challenge lies in identifying the right solutions that provide answers to a series of requirements: secured connectivity without large investment; usability with limited customisation; the ability to share information using the newest technologies; mobile devices and web access without complex platform implementations; and the possibility to use the software as a service.

We are observing a large market transformation in this area, with systems offering a large set of functionalities and product offerings based on new technologies.

Multiple software modules adapted to specific laboratory activities, together with software platforms, allow the creation of personalised solutions where there is no need to customise the system, only to configure it to the needs of the user.

This revolution will generate large benefits for laboratories, because solutions will be selected based on needs rather than on capabilities.

These modules should respond to a few critical requirements in order to become part of the ‘solution’: easily connectable to the ‘eConnect’ layer; easily connectable to modules of the ‘eManage’ layer; easily accessible from browsers and mobile devices; and easily accessible from the ‘eDecide’ layer.
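One way to picture these requirements (the interface and method names below are purely illustrative, not any vendor’s API) is as a small contract that every ‘eManage’ module implements: it can ingest records from the ‘eConnect’ layer, exchange information with sibling modules, and expose aggregated views to the ‘eDecide’ layer and to web or mobile clients:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class EManageModule(ABC):
    """Contract an 'eManage' module could implement (illustrative sketch only)."""

    @abstractmethod
    def ingest(self, record: Dict[str, Any]) -> None:
        """Accept a captured record coming from the 'eConnect' layer."""

    @abstractmethod
    def exchange(self, other: "EManageModule") -> None:
        """Share information with another module of the 'eManage' layer."""

    @abstractmethod
    def summary(self) -> List[Dict[str, Any]]:
        """Expose aggregated information to the 'eDecide' layer and to web/mobile clients."""
```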

What is the end goal?

On one hand, the final goal should be to interface the ‘solution’ with the multiple generators of raw data in order to enable review directly at the source at any time. Additionally, the ability to exchange information between the modules of the ‘eManage’ layer in a seamless manner should allow all data to be accessed and interpreted to generate meaningful information.

On the other hand, the ‘modules’ should be accessible from any remote location, or even from mobile devices, so that all the information can be managed in the shortest possible time and aggregated information can be provided to the next layer of systems, where decisions are taken.

Is this real? Absolutely. The technology has evolved to the point that all these goals can be reached. Numerous solutions using the newest technologies have already been implemented in various markets. Laboratory informatics systems will have to be ready for this new era too.

‘eDecide’: Rapid decisions taken from meaningful information

In the everyday activities of a laboratory, we are getting used to performing tasks very rapidly, and decisions have to be taken in a short time. Little time remains for data review, approval of data and creation of related documents. What the laboratory’s customers, both internal and external, request is a prompt answer.

The removal of manual processes, paper-based activities and the mix of information sitting in different systems is essential for taking faster decisions. Only paperless processes shorten the review of information and ease rapid decision-making, which can then be communicated immediately to the relevant stakeholders.

New approaches, such as review by exception, are helping to increase the efficiency of this process.
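As a minimal sketch of review by exception (the specification limits and result format below are hypothetical), only results falling outside their limits are queued for manual review, while conforming results can be approved automatically:

```python
def review_by_exception(results, specs):
    """Split results into auto-approved and flagged-for-review lists.

    results: iterable of dicts with 'test' and 'value' keys
    specs:   dict mapping test name -> (low, high) specification limits
    """
    approved, needs_review = [], []
    for r in results:
        low, high = specs[r["test"]]
        (approved if low <= r["value"] <= high else needs_review).append(r)
    return approved, needs_review

# Hypothetical assay results and specification limits
results = [
    {"test": "assay", "value": 99.2},
    {"test": "assay", "value": 93.1},   # out of specification -> flagged for review
]
approved, needs_review = review_by_exception(results, {"assay": (95.0, 105.0)})
```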

The laboratories that are able to respond to these requests on time and with the adequate level of quality will transform from cost centres into value generators.

Decisions should be taken according to the available information. Today, many software providers offer simple tools that present the information in a graphical view, show the outliers, highlight the areas needing attention and allow a ‘drill-down’ approach when needed.

The fact is that solution providers, integrators and customers are joining efforts in organisations like the Allotrope Foundation, the Pistoia Alliance and the SiLA consortium to consolidate outputs and tools that could one day lead to a single user interface: one single way of showing the information in a unique and personalised dashboard.

Imagine simple reports created automatically overnight and available in the ‘eDecide’ layer first thing in the morning: a new ‘control room’ for the laboratory where decisions are taken to correct situations not in line with expectations, and where schedules are adjusted to ensure that activities are completed on time, on budget and according to customer expectations.
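A minimal sketch of such an overnight report, assuming the day’s results are already available as simple records with a hypothetical status field, could aggregate them and flag the exceptions so that they are waiting in the ‘eDecide’ layer in the morning:

```python
from collections import Counter

def overnight_summary(results):
    """Aggregate yesterday's results into a small morning dashboard payload."""
    status_counts = Counter(r["status"] for r in results)
    exceptions = [r for r in results if r["status"] != "pass"]
    return {
        "total": len(results),
        "by_status": dict(status_counts),
        "exceptions": exceptions,   # the items a manager should drill into first
    }

# Hypothetical nightly run, e.g. triggered by a scheduler such as cron
report = overnight_summary([
    {"sample": "S-001", "test": "assay", "status": "pass"},
    {"sample": "S-002", "test": "assay", "status": "oos"},
])
```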

Is this real? Yes, again. Great reporting and business intelligence tools are now available to integrate the information coming from different systems and present it in a simple, graphical way: everything managers need at their fingertips. Moreover, these tools are able to dig into the underlying systems to view the information and related raw data when needed.

We will finally see a single screen open on managers’ computers, instead of multiple windows and jumping from one system to another to desperately collect all the information required at a given moment for a decision that has to be taken urgently.

‘eArchive’: essentials to secure long-term multi-departmental archiving

A key objective of an efficient archival approach is to reduce the challenge of finding the right data. Considering the growing digital universe, archiving can no longer be left to the end of a project and considered only once it is too late. Nowadays, we often hear concerns about legibility and format consistency over time, for retention periods that might end up requiring access to obsolete technologies.

Archiving should be approached and designed to reduce multiple types of risk: knowledge limited to one critical person, data security and loss of data.

A comprehensive archiving protocol should eliminate the struggle to find data, which can reach the point of desperately looking for the one person who knows where it is. A corporate master data management and vocabulary model should support correct management and archival, facilitating a flawless track record of the data.

During the Paperless Lab Academy 2019, several presentations will focus on this topic, which is too often approached too late in a ‘paperless’ project. The archiving strategy requires a clear definition of the business requirements and also of the potential technical challenges.

The ability to archive and then retrieve unstructured data is becoming an urgent need that must be solved in R&D laboratories. Solution providers are dedicating resources to this matter and positioning their data management software to address the need for better archiving and retrieval. Above all, the ‘eArchive’ strategy is one that requires strong alignment across the whole company in order to build a reference master data management strategy at the enterprise level.
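As a minimal sketch of the idea (the vocabulary, field names and functions below are hypothetical), unstructured files can be registered in an archive index with metadata normalised against a controlled vocabulary, so they remain findable without depending on any single person’s memory:

```python
# Hypothetical controlled vocabulary: map free-text terms to master data terms
VOCABULARY = {"hplc": "HPLC", "liquid chromatography": "HPLC", "nmr": "NMR"}

archive_index = []  # in practice this would be a database or search index

def archive(path, technique, project, retention_years):
    """Register an unstructured file in the archive index with normalised metadata."""
    entry = {
        "path": path,
        "technique": VOCABULARY.get(technique.lower(), technique),
        "project": project,
        "retention_years": retention_years,
    }
    archive_index.append(entry)
    return entry

def retrieve(technique):
    """Find archived files by their normalised technique term."""
    term = VOCABULARY.get(technique.lower(), technique)
    return [e for e in archive_index if e["technique"] == term]

archive("archive/2018/run-17.raw", "liquid chromatography", "PRJ-12", retention_years=10)
print(retrieve("HPLC"))   # finds the file even though it was archived under another term
```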


