
The Dangers of Not Automating Data Marshalling and Management

A staggering amount of analytical chemistry data is generated in modern drug development and chemistry labs to help characterise materials, understand processes, and ensure robust controls. Where data—the proof behind conclusions and go/no-go decisions—is king, efficient data management is crucial for driving scientific progress. Without it, organisations are cursed with duplication of effort and limited collaboration, resulting in reduced productivity and slowed innovation. Automating data marshalling and management can change the way data is handled, analysed, and utilised, significantly enhancing the efficiency and effectiveness of R&D. If automation is not already part of your digitalisation strategy, consider the risks.

Inefficiency and Lower Productivity

Traditionally, managing and analysing analytical data has been a labour-intensive task. Scientists spend considerable time on data administration, including manually transferring files and analysing data to extract answers. In today's fast-paced environment, where efficiency and productivity are essential, scientists should be concentrating on their expertise—drawing conclusions from data and planning next steps—not on data handling.

A key benefit of data automation is its ability to expedite data analysis. Timely analysis of data is crucial for making informed decisions. By automating data analysis, researchers can rapidly identify chemical outcomes, correlations, and trends to keep project timelines on track.

Lower Quality Data

Data automation improves data quality and reliability. Manual data transcription and manipulation are prone to errors, which can have severe downstream consequences. Automating data collection minimises the risk of human error and bias, ensuring that the data used for analysis is accurate and consistent. Additionally, automated validation checks can be implemented to flag statistical outliers, allowing researchers to focus on key data.
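As a minimal sketch of such a validation check (in Python; the function name, threshold, and sample readings are illustrative assumptions, not taken from any specific product), a robust outlier flag can be computed from the median absolute deviation:

    import statistics

    def flag_outliers(values, threshold=3.5):
        """Return indices of values whose modified z-score (based on the
        median absolute deviation, which is robust to the outlier itself)
        exceeds the threshold."""
        median = statistics.median(values)
        mad = statistics.median(abs(v - median) for v in values)
        if mad == 0:
            return []
        return [i for i, v in enumerate(values)
                if 0.6745 * abs(v - median) / mad > threshold]

    # Example: replicate assay results with one transcription slip (89.9).
    readings = [99.8, 100.1, 99.9, 100.2, 89.9, 100.0]
    print(flag_outliers(readings))  # -> [4]

The median-based score is used here because, unlike a mean-based z-score, it is not distorted by the very outlier it is trying to detect.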

Inadequately Engineered Data

R&D is a multidisciplinary endeavour that involves collaboration across labs, departments, and global sites. Data marshalling plays a critical role in integrating data from various sources (making it suitable for storage and transmission) and, ultimately, in enabling its use and re-use. Many considerations go into engineering data so that it is sufficient for all users and potential applications. Manual data marshalling cannot cover the breadth and depth of data requirements in today's R&D labs.

Each stakeholder may generate and manage data in different formats, and the way the data is stored may also differ (from discrete data files to SQL databases).

Not only is the source of data critical; its destination must also be considered. Data sources are typically on-premises instruments and local machines, while most organisations want to leverage the advantages of cloud computing and virtually unlimited cloud storage. This may not pose an issue when data files are small (roughly 100 kB to 20 MB), but the problem becomes more complex when files reach GB sizes (image data, high-resolution mass spectrometry data, etc.), and there must be strategies in place to address this. Streaming large datasets directly from instrument to cloud has technological limitations, primarily network disruptions that could corrupt data. An alternative approach is an edge computing strategy: data is automatically marshalled to a centralised location close to its source for immediate use, then synchronised to cloud data servers for the wider organisation to access, as sketched below.
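A minimal Python sketch of that edge-first pattern follows; the paths, file extension, and the synced-mount cloud step are all hypothetical stand-ins for site-specific configuration and a vendor SDK with resumable uploads. The point it illustrates is checksum verification at each hop, so a network disruption produces a detected failure and a retry rather than silently corrupted data:

    import hashlib
    import shutil
    from pathlib import Path

    # Hypothetical locations; in practice these are configured per site.
    INSTRUMENT_DIR = Path("/data/instrument_output")  # where instruments write raw files
    EDGE_STORE = Path("/data/edge_store")             # centralised on-site (edge) store
    CLOUD_STAGING = Path("/mnt/cloud_sync")           # mount synchronised to cloud storage

    def sha256(path):
        """Checksum used to verify a file survived a transfer intact."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def marshal_to_edge(src):
        """Copy a raw file to the edge store and verify its integrity."""
        dest = EDGE_STORE / src.name
        expected = sha256(src)
        shutil.copy2(src, dest)
        if sha256(dest) != expected:
            dest.unlink()  # discard the corrupt copy so a retry starts clean
            raise IOError(f"Corrupt transfer detected for {src.name}")
        return dest

    def sync_to_cloud(edge_file):
        """Deferred push to cloud storage; a real implementation would use a
        cloud vendor's SDK with resumable uploads rather than a synced mount."""
        shutil.copy2(edge_file, CLOUD_STAGING / edge_file.name)

    if __name__ == "__main__":
        for raw in sorted(INSTRUMENT_DIR.glob("*.raw")):
            local = marshal_to_edge(raw)  # immediate on-site availability
            sync_to_cloud(local)          # organisation-wide access later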

Automation may also be used for data assembly. Molecular characterisation often requires more than one analysis technique, yet these datasets are acquired across vendors, instruments, and labs. In more complex downstream studies, such as process control assessments or drug metabolism studies, several data files must be assembled to tell the overall chemical story, as in the sketch below.
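One simple way to picture that assembly step is a per-sample dossier keyed by sample ID; the record fields and technique names in this Python sketch are illustrative assumptions, not a standard schema:

    from dataclasses import dataclass, field

    @dataclass
    class AnalysisResult:
        """One parsed result file; real data would come from vendor parsers."""
        sample_id: str
        technique: str   # e.g. "LC-MS", "NMR", "FTIR"
        instrument: str
        data: dict       # parsed measurements

    @dataclass
    class SampleDossier:
        """All results for one sample, assembled across techniques and labs."""
        sample_id: str
        results: dict = field(default_factory=dict)  # technique -> AnalysisResult

    def assemble(results):
        """Group individual results into per-sample dossiers."""
        dossiers = {}
        for r in results:
            dossier = dossiers.setdefault(r.sample_id, SampleDossier(r.sample_id))
            dossier.results[r.technique] = r
        return dossiers

An automated pipeline would run this grouping continuously as files land, so the complete picture for a sample is available as soon as its last analysis finishes.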

Organisations are looking to employ machine learning (ML) and artificial intelligence (AI), but ML/AI frameworks require input data to be engineered and formatted in a specific way for consumption. Automation allows critical values to be extracted from analytical data in the desired format, organised, and consistently described. Here automation is paramount, as data must be consistent to be used in ML/AI frameworks.
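As a rough illustration of that engineering step (the column names and the pandas dependency are assumptions for the sketch, not a prescribed schema), extracted values can be mapped onto a fixed schema so every sample yields a row of identical shape:

    import pandas as pd

    # Hypothetical fixed schema an ML pipeline expects: one row per sample,
    # one column per agreed feature, always in the same order.
    FEATURE_COLUMNS = ["purity_pct", "main_peak_rt_min", "impurity_count"]

    def to_feature_row(parsed):
        """Map one parsed analytical result onto the fixed schema; anything
        missing becomes NaN so every row has the same shape."""
        return {col: parsed.get(col, float("nan")) for col in FEATURE_COLUMNS}

    def build_training_frame(parsed_results):
        """Assemble a consistent, ML-ready table from many parsed results."""
        return pd.DataFrame([to_feature_row(p) for p in parsed_results],
                            columns=FEATURE_COLUMNS)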

Value vs. Effort

Automation of analytical data marshalling and management provides a standardised framework for organising and harmonising diverse datasets, enabling seamless data integration, assembly, and analysis. An automated approach can make data available throughout R&D organisations in a way that manual data transfer cannot. While people and their attitudes are the most difficult change to effect, their expertise and the data they generate are far more valuable than hours spent manually moving and managing data.
