
2017: A year of disruption in the laboratory?

While some laboratories still cling to traditional processes and paper-based workflows, there is huge potential for a new world of integrated technologies that can enable new research or increase a laboratory’s throughput or capability for collaboration.

Many of these technologies have been around for some time, but are now becoming a reality through a convergence of available applications, data, IoT technologies and AI/machine learning capabilities, all tied together through cloud computing.

While technology is not one-size-fits-all in the laboratory, there are lessons that can be learnt from informatics providers and their users who are making use of these technologies to disrupt traditional laboratory processes.

Integration has been a running theme across many different labs over the past 10 years, but in the past it focused on creating ties between instruments and LIMS/ELN systems, or on connecting labs with collaborating organisations.

Today, integration refers not only to the creation of smart laboratories but also to building comprehensive software collaborations, tying together disparate data streams for translational research, enabling new drugs or diagnostics, and applying AI and machine learning to life science research.

Experts in many disciplines, from life sciences to chemistry and statistics, have come to broadly similar conclusions: that integration is the key to overcoming future barriers to innovation.

Integration can help break down barriers between disciplines by bringing together knowledge from different domains; it can be used to increase an organisation’s capability for collaboration, or to receive data directly from the field through the use of IoT devices.

Even at the most basic level, integrating laboratory instruments and LIMS or ELN systems helps to reduce errors in data collection and processing.
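As a simple illustration of that basic level of integration, the sketch below (in Python, using the widely available requests library) shows how an instrument’s CSV export might be pushed automatically into an ELN over a REST interface, so that values are never retyped by hand. The endpoint, field names and authentication scheme are assumptions for the example; a real deployment would use the integration interface of the specific LIMS or ELN.

```python
import csv
import requests  # third-party package: pip install requests

# Hypothetical ELN REST endpoint and API key -- replace with the integration
# interface your ELN/LIMS actually exposes.
ELN_URL = "https://eln.example.org/api/results"
API_KEY = "your-api-key"

def push_instrument_export(csv_path: str) -> None:
    """Read a plate-reader CSV export and post each row to the ELN,
    avoiding manual transcription of values."""
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            record = {
                "sample_id": row["sample_id"],
                "measurement": float(row["absorbance"]),  # parsed once, at source
                "units": "AU",
            }
            response = requests.post(
                ELN_URL,
                json=record,
                headers={"Authorization": f"Bearer {API_KEY}"},
                timeout=10,
            )
            response.raise_for_status()  # fail loudly rather than silently drop data

if __name__ == "__main__":
    push_instrument_export("plate_reader_export.csv")
```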

Klemen Zupancic, CEO of sciNote, commented: ‘It is not individual tools that will revolutionise science; it is all of these tools that are available working together, integrating and talking to each other. We need to step together as a community, promote collaboration and help each other for better science. IoT is coming in the labs and with that comes automation, traceability, transparency and reproducibility.

‘It is important that we, as a community, adopt this mindset of using technology to our advantage and recognise the benefits it can bring.’

Collaboration through integration

The failure of a candidate drug can cost millions of dollars in wasted research – for this reason many chemists are now turning to software that not only provides modelling or predictive capabilities but multi-parameter optimisation that can aid decision making, leading to more efficient use of resources.

For every successful compound, thousands of potential drugs fail, so it is of the utmost importance that compounds are carefully selected. Some of this risk comes from knowing which compound or series of compounds to choose for a project but, as Optibrium’s CEO and company director Matthew Segall explains, uncertainty in the data – if not well managed – can lead to wasted resources.

‘There are a lot of different end-points measured or calculated and many different compounds or chemistries a project will explore – but a point we emphasise, that we believe is underused, is uncertainty in data – very significant uncertainty,’ said Segall.

These complex parameters of a drug development project – the uncertainty in the data and the huge list of potential candidate drugs – were primary factors that drove Optibrium’s decision to develop ‘decision analysis methods to help people navigate through a very complex landscape of data,’ said Segall. ‘The goal is to prioritise compounds and to understand the structure-activity relationships that are driving activity and other properties within the chemistry.

‘Everyone knows the value of downstream failure. If you pick the wrong chemistry and push it forward, you can end up with these incredibly costly late-stage failures.’ But Segall stresses there is also a hidden cost: the potential drugs that have been missed, due in particular to uncertainty in the data.

This lack of understanding around uncertainty can lead scientists to make decisions that are not supported by the data that is available.

As drug development projects become increasingly complicated with multiple parameters that need to be optimised, this uncertainty can be an acute stumbling block, or, as Segall explains, it can be used to a chemist’s advantage.
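To make the idea concrete, the sketch below shows one generic way of folding experimental uncertainty into compound prioritisation: each measured property is converted into the probability that the compound truly meets its criterion, given an assumed assay error, and those probabilities are multiplied into a single score. This illustrates the general principle only, not Optibrium’s own algorithms; the compounds, thresholds and assay errors are invented for the example.

```python
from math import erf, sqrt

def prob_meets_criterion(measured: float, threshold: float, sd: float) -> float:
    """Probability that the true value exceeds `threshold`, assuming the
    measurement is normally distributed around the true value with
    standard deviation `sd` (the assay's experimental error)."""
    z = (measured - threshold) / (sd * sqrt(2))
    return 0.5 * (1.0 + erf(z))

# Illustrative compounds: potency (pIC50, higher is better) and solubility (logS).
# The thresholds and assay errors below are assumptions for the example.
compounds = {
    "CPD-001": {"pIC50": 7.1, "logS": -4.8},
    "CPD-002": {"pIC50": 6.9, "logS": -3.9},
    "CPD-003": {"pIC50": 7.4, "logS": -5.6},
}
criteria = {"pIC50": (6.5, 0.3), "logS": (-5.0, 0.5)}  # (threshold, assay sd)

scores = {}
for name, props in compounds.items():
    score = 1.0
    for prop, (threshold, sd) in criteria.items():
        score *= prob_meets_criterion(props[prop], threshold, sd)
    scores[name] = score

# Rank compounds by the joint probability of satisfying all criteria.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

In this toy example the least potent compound ranks highest once solubility risk and assay error are taken into account – the kind of counter-intuitive but better-informed decision that explicit handling of uncertainty is meant to support.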

Drug development relies on a scientist’s ability to manage hugely complex streams of data on any number of compounds of interest to a particular project. To keep up with all of this data and make effective decisions requires the use of sophisticated software that can alleviate some of the pressure from drug development projects. 

However, some companies recognise that their expertise lies in a particular area and so work to ensure that complementary software packages interoperate, giving users the freedom to pick and choose the software that is right for them.

‘We develop a lot of technology in-house, but as a company we recognise that no one entity can develop all that is cutting-edge in every area of computational chemistry and cheminformatics,’ stated Segall. ‘We actively seek partners that are leaders in their space to bring the technology into their software environment and make the interaction as seamless as possible for the end-user.’

It is this acceptance that many specialist tools are required that helps to create an environment in which several highly specialised software packages can be used together as an effective platform for drug development.

‘We have partnerships with Collaborative Drug Discovery (CDD), The Edge and Certara. Their platforms are being used for ELN or storage of databases to gather, reduce and store data for drug discovery projects. We work with them to ensure that our software works seamlessly with theirs,’ said Segall.

To maximise this integration, Optibrium aims to connect its software with that of partner organisations, removing the need for users to manually export data and correct formats. ‘That is a big part of our philosophy, as well as being very agnostic to where people will get their data – we want to make that process as easy as possible,’ said Segall.
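The manual reformatting that such integration removes often comes down to reconciling field names and units between exports from different platforms. The sketch below shows a minimal, generic version of that harmonisation step in Python; the vendor column names and unit conventions are assumptions for the example.

```python
import csv

# Hypothetical column-name and unit conventions used by two different vendor
# exports -- the actual field names depend on the platforms involved.
COLUMN_MAP = {
    "Compound ID": "compound_id",
    "Molecule Name": "compound_id",
    "IC50 (nM)": "ic50_nm",
    "IC50 [uM]": "ic50_um",
}

def load_normalised(path: str) -> list:
    """Load a vendor CSV export and map it onto a single internal schema,
    converting micromolar IC50 values to nanomolar along the way."""
    rows = []
    with open(path, newline="") as handle:
        for raw in csv.DictReader(handle):
            row = {}
            for source_name, value in raw.items():
                target = COLUMN_MAP.get(source_name)
                if target == "ic50_um":          # convert uM -> nM
                    row["ic50_nm"] = float(value) * 1000.0
                elif target == "ic50_nm":
                    row["ic50_nm"] = float(value)
                elif target is not None:
                    row[target] = value
            rows.append(row)
    return rows

# Merge exports from two partner systems into one consistent dataset.
dataset = load_normalised("partner_a_export.csv") + load_normalised("partner_b_export.csv")
```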

Precision diagnostics

Precision tumour diagnostics firm Helomics established a clinical CRO operation at the start of 2017, and now offers pharmaceutical, biotech and diagnostic clients a range of services for clinical and translational research, spanning biochemical profiling, genomics, proteomics and cell biology. ‘It’s an extra revenue stream on top of our clinical diagnostics business, and allows us to leverage our proprietary patient-derived tumour models to help develop new therapies,’ noted Mark Collins, vice president of innovation and strategy.

Integrating these technologies requires significant data infrastructure. There are now many diagnostic firms providing specialised services beyond the scope of the traditional CRO’s remit.

‘Modern clinical trials are highly data-driven. You are selecting cancer patients based on biomarker profiles,’ said Collins. These types of specialised biomarker panels, including the patient-derived tumour assays offered by Helomics, in turn require highly flexible informatics systems that can manage, process and analyse complex data.

Helomics is overcoming this challenge with its proprietary, regulatory-compliant D-CHIP (dynamic clinical health insight platform) bioinformatics platform, launched in April, which applies machine learning to multi-omic study data in the database to provide insight into how patients are likely to respond to drugs. Helomics has also used Abbott Informatics’ STARLIMS as its clinical diagnostics LIMS for the last six years, and is now deploying STARLIMS to drive its CRO business.
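As a rough illustration of the general approach – and emphatically not of D-CHIP itself – the sketch below trains a scikit-learn classifier on a synthetic multi-omic feature matrix and uses it to estimate a new patient’s probability of responding to a drug. All data here are randomly generated stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are patient-derived tumour samples, columns are
# multi-omic features (e.g. gene expression, mutation flags, protein levels).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # 200 samples x 50 omic features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # 1 = responder, 0 = non-responder

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")

# A fitted model of this kind can then score a new patient's omic profile
# to estimate how likely they are to respond to a given drug.
model.fit(X, y)
new_patient = rng.normal(size=(1, 50))
print("Predicted response probability:", model.predict_proba(new_patient)[0, 1])
```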

While flexibility is key, you don’t want a LIMS so complicated it becomes impossible to maintain. ‘That sort of “lifeboat” software – which has every kind of functionality for every kind of eventuality – becomes a beast to maintain,’ Collins said. ‘Sometimes a more focused solution can work better.’

‘We want to be able to set up and configure new projects within days, not weeks, and have role-based permissions that guarantee client data security and meet regulatory guidelines and rules we are subject to, such as CLIA. As a boutique CRO, we need software that is project centric to manage diverse and often complex workflows for individual clients, which is very different to the sample-centric structure of a traditional LIMS. You want to be able to keep all your project data together, and easily accessible, and available to the sponsor, in the correct format, in just about real time,’ commented Robert Montgomery, Helomics manager of IT and LIMS.

Pulling it all together

Managing data is not a new concept for LIMS and ELN users but today’s predictive or translational research projects require specific infrastructure that can effectively manage data on a scale not seen before.

Next-generation sequencing or predictive modelling requires more than just storing data, as seemingly disparate data streams need to be analysed together in order to provide true value to the user.

‘For predictive testing based on next-generation sequencing you need to have a database that can map patients’ histories, possibly in context with those of their families, including children, siblings, parents and grandparents,’ explained Lisa-Jean Clifford, CEO at Psyche Systems. 

‘Sophisticated algorithms are used to analyse all of this information and identify the best course of treatment going forward. It’s a huge ask for a laboratory information system (LIS) to be able to handle and coordinate experimental/analytical and complex data workflows and reporting requirements,’ added Clifford.
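One way to picture the data model Clifford describes is a patient record that links to relatives in the same database and to their reported variants, so that a finding can be interpreted in its family context. The Python sketch below is a deliberately minimal, hypothetical illustration of such a structure, not any particular LIS schema.

```python
from dataclasses import dataclass, field

@dataclass
class Variant:
    gene: str            # e.g. "BRCA1"
    hgvs: str            # variant description, e.g. "c.68_69delAG"
    classification: str  # e.g. "pathogenic", "VUS"

@dataclass
class Patient:
    patient_id: str
    variants: list = field(default_factory=list)
    # Relationships to other patients in the same database, keyed by role
    # ("mother", "sibling", "child", ...).
    relatives: dict = field(default_factory=dict)

def familial_carriers(patient: Patient, gene: str) -> list:
    """Return the IDs of relatives who carry a reported variant in `gene` --
    the kind of query a predictive-testing information system needs to answer."""
    return [
        rel.patient_id
        for rel in patient.relatives.values()
        if any(v.gene == gene for v in rel.variants)
    ]

# Example: a proband whose mother carries a pathogenic BRCA1 variant.
mother = Patient("P-100", variants=[Variant("BRCA1", "c.68_69delAG", "pathogenic")])
proband = Patient("P-101", relatives={"mother": mother})
print(familial_carriers(proband, "BRCA1"))  # ['P-100']
```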

Molecular diagnostic assays allow anatomic pathology laboratories to combine clinical and anatomic pathology diagnostics with a suite of molecular analyses. This has enabled speciality diagnostic laboratories to emerge that focus on just one, or a few, types of highly complex assays and technologies, such as next-generation sequencing or predictive modelling.

But whatever their specialisation, all diagnostic laboratories will have some key fundamental requirements in common, Clifford noted. ‘Underpinning every LIS will be a discrete database that lets you mine and compare disparate data, so that you can derive maximum value from that data.

‘You also need seamless integration with instruments and other software, and a test ordering, scheduling and reporting system that can handle multiple types of workflow. A key requirement is the ability to integrate in a bidirectional manner between applications, and between instruments and applications, as well as output human-readable data. And on top of that, your data flow must be managed and communicated in a compliant, automated fashion,’ Clifford concluded.
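As a small illustration of that last pair of requirements, the sketch below emits the same result record in a machine-readable form for downstream applications and as a human-readable report line. The field names are assumptions for the example rather than any standard schema.

```python
import json
from datetime import datetime, timezone

# Illustrative result record; field names are assumptions, not a standard schema.
result = {
    "order_id": "ORD-2041",
    "sample_id": "S-889",
    "test": "EGFR mutation panel",
    "result": "p.L858R detected",
    "reported_at": datetime.now(timezone.utc).isoformat(),
}

# Machine-readable form for downstream applications (LIS, EHR, client portals).
machine_payload = json.dumps(result)

# Human-readable form for the report that accompanies the structured data.
human_report = (
    f"Order {result['order_id']} / sample {result['sample_id']}: "
    f"{result['test']} - {result['result']} (reported {result['reported_at']})"
)

print(machine_payload)
print(human_report)
```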


