Data-driven drug development

Drug development is facing change, both from technological pressures, such as the use of AI and machine learning, and from new regulations that are driving sweeping changes to the way electronic records are created and stored for clinical trials.

Informatics software providers are now beginning to offer software that can not only help to facilitate this change, but also manage, and derive insights from, the vast quantities of data being recorded.

Semantic search, metadata, indexing, more sophisticated software packages for predictive analytics, and even AI and machine learning approaches, are being used to bridge the gap between traditional methods and the data-driven paradigms of today’s laboratories and research centres, as well as pharma, biotech and contract research companies.

The need to control and derive insights from data is becoming crucial in order to maintain a competitive advantage. This is driving a focus on data exchange, digital transformation and analytical capabilities that can help to provide more value from data and, at the same time, help organisations to work within more stringent regulations.

Andrew Anderson, vice president, innovation and informatics strategy at ACD/Labs, said: ‘My training just before taking this role was in looking at industrial insights, and translating the innovation plans that come from those insights. What I see in drug development is that, if you go back 20 years when a new drug was matriculating through a clinical process, there would be an asset investment based on the probability or likelihood of what the rate of approval might be.’

This approach required pharma companies to create major assets, such as manufacturing plants that could take raw material and turn it into a finished product. However, over time, with costs and attrition rates rising, pharma companies began to move towards a more risk-conscious model. This relies on outsourcing different aspects of the drug development process, in order to reduce the risk and the level of investment required.

The cost of externalisation

‘Over the last 20 years another trend that we are all aware of is attrition, or what I would call unanticipated attrition. In a lot of ways those specialised assets – to make a particular compound from manufacturing, formulation, distribution – can be completely unusable if they are so specialised that they are built for a specific purpose. There has been an increasing interest in outsourcing a supply chain, and that has been realised fairly effectively across a variety of industries, not just pharma but other industries as well,’ said Anderson.

‘But with that asset flexibility, being able to hire CMOs for certain unit operations in a manufacturing process, or a variety of CMOs to provide different functions, such as manufacturing or quality assessment, I have even recently heard of outsourcing compliance, regulatory compliance, in particular,’ he added.

One thing that Anderson says could be very useful in overcoming the challenge of maintaining control of data in the face of an increasingly outsourced business model is data exchange. ‘Data access points can be limited, just based on the nature of the data that is being acquired, the effort to summarise that data, the ability to take those summaries and interpret the data to make effective decisions.’

David Wang, general manager of informatics at PerkinElmer, provided some details on the regulations that are affecting drug development processes. ‘On the clinical side there [has been a] whole set of regulations that I think started in the European Union.

‘The regulations, known as ICH E6(R2), are a set of guidelines that aim to significantly increase the probability that pharmaceutical and biological manufacturers will be capable of delivering strong analytics alongside their clinical trials. The regulations address the processes of recording and managing data through electronic means when carrying out clinical trials.

‘The reason I bring that up is even though that was published in 2016, Europe enacted it in full force during 2017, and the US in 2018. One effect is that the major manufacturers we are working with want to get a much more sophisticated analytical understanding of their clinical data than previously,’ said Wang.

He noted that externalisation and outsourcing meant that data was not always easily accessible to the company creating the new drug; in the past, some critical data might be held only by CROs. ‘As growing numbers of expensive biologics or more narrow-indication drugs hit the market, the regulatory agencies are rightfully advocating for patients and asking for greater transparency,’ said Wang.

Whereas in the past it was good enough to say that a treatment helped to solve a problem in a percentage of a given population, now regulators want pharma and biotech companies to be able to provide more contextual information, such as the subset of people who were affected, and what characteristics meant that the treatment proved effective.
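
As an illustration of the kind of contextual question regulators now ask, the sketch below computes response rates by patient subgroup as well as overall. It is a minimal, hypothetical example; the column names and records are invented.

```python
import pandas as pd

# Hypothetical trial records: each row is one patient.
trial = pd.DataFrame({
    'patient_id': [1, 2, 3, 4, 5, 6],
    'biomarker_positive': [True, True, False, False, True, False],
    'responded': [True, True, False, True, True, False],
})

# Overall response rate: the historical, population-level answer.
print(f"Overall response rate: {trial['responded'].mean():.0%}")

# Subgroup response rates: the contextual answer regulators now expect,
# showing which patient characteristics are associated with response.
print(trial.groupby('biomarker_positive')['responded'].mean())
```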

Wang noted that in the past the organisation ‘may not immediately have that information at their fingertips’. They might have to check with the CRO that performed that research, and the regulatory agency may have to wait a week or two. Today, that is not seen as a sufficiently rapid response for consumers. ‘I think it is a very positive development that the regulatory authorities are now asking pharmaceutical and biotech companies to have a direct hand in understanding the details and context of clinical trials data for patients,’ said Wang.

The next challenge is that an organisation will usually have a portfolio of clinical projects, suppliers, and manufactured or approved products, each representing a potential option within its virtualised supply chain.

In addition to data exchange, Anderson noted that digital transformation will be a key step towards better controlling, and deriving value from, data created in drug development. When data generated at a manufacturer’s site is abstracted into summaries, it can be very difficult to retain context and robust data management practices. Creating fully digital pipelines to feed data back to the organisation helps to meet regulatory and competitive challenges, while maintaining the low-risk, outsourcing-based business model.

‘Being able to take data as it is generated and stream that data to decision makers directly. It needs to be stored, managed and collected in ways that allow decision makers to look across their portfolio of projects, both clinical and manufacturing projects, as well as potential suppliers, in this virtualised network of manufacturers,’ said Anderson.
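
One way to preserve that context as data streams between partners is to exchange self-describing records rather than bare values. The sketch below is a minimal illustration of the idea, not a description of any vendor’s format; the field names and the publish step are invented for the example.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AnalyticalRecord:
    """A single result plus the context needed to interpret it later."""
    batch_id: str
    site: str          # which partner in the virtualised network produced it
    method: str        # the analytical method used
    analyte: str
    value: float
    unit: str
    recorded_at: str

def publish(record: AnalyticalRecord) -> str:
    # Serialise to JSON so any downstream decision-support tool can consume it.
    return json.dumps(asdict(record))

record = AnalyticalRecord(
    batch_id='B-1042', site='CMO-A', method='HPLC assay',
    analyte='API purity', value=99.2, unit='%',
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
print(publish(record))
```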

While the terms may change, the sentiment that usable data must be at the heart of modern scientific processes is shared not only by Anderson but also by Jabe Wilson, consulting director, text and data analytics, at Elsevier.

Wilson noted that it is not just in drug development that these changes are being felt, as Elsevier shifts to meet the demands of modern science. ‘We have been involved in not just publishing books and journals, but in creating databases and indexing content for the better part of 30 years. It is a long heritage at Elsevier, in terms of curating information and making it usable for scientists. That naturally flows into science software,’ said Wilson.

‘A lot of work that gets done is taking the fundamental research in science and adding semantics and other contextual data. Elsevier has done a lot of work around the creation of indexing software, and creating the right dictionaries and ontologies to extract information,’ added Wilson.
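
As a rough illustration of the kind of dictionary-based indexing Wilson describes, the sketch below tags ontology terms in free text. The tiny dictionary and its identifiers are invented; production systems rely on curated ontologies and far more sophisticated matching.

```python
import re

# A toy dictionary mapping surface terms to (invented) ontology identifiers.
ONTOLOGY = {
    'aspirin': 'CHEM:0001',
    'cyclooxygenase': 'PROT:0042',
    'inflammation': 'COND:0007',
}

def annotate(text: str) -> list[dict]:
    """Return ontology annotations for every dictionary term found in text."""
    annotations = []
    for term, term_id in ONTOLOGY.items():
        for match in re.finditer(rf'\b{re.escape(term)}\b', text, re.IGNORECASE):
            annotations.append({'term': term, 'id': term_id,
                                'start': match.start(), 'end': match.end()})
    return sorted(annotations, key=lambda a: a['start'])

text = 'Aspirin inhibits cyclooxygenase, reducing inflammation.'
print(annotate(text))
```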

Wilson also commented on drug development. ‘On the other side of the fence, what has been going on in drug development is a lot of the low-hanging fruit has been taken, and the disease areas that are being looked at actually have much more complex mechanisms. Many times we have to address not one protein or gene but multiple targets.

‘Many diseases or conditions that would have been familiar in the past are actually historical terms, because disease has macro level symptoms that we would describe as disease but actually they are made up of multiple different conditions, which have different mechanisms and often you need to understand individual biological makeup – the phenotype of individuals – in order to understand how to address their disease conditions,’ said Wilson.

‘What we are doing at Elsevier – and many people are struggling with this – is making sure that the semantic data and semantic indexing can throw its hands around all that complex data and put it in a place where people can access it. The other thing is, as well as semantic information, we are now looking at things like machine learning. Semantic data allows you to create features which you can then feed into neural networks to look for patterns in these huge amounts of data.’
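
To make that last point concrete, the sketch below encodes the presence of ontology terms as binary features and feeds them to a small neural network. It is a hypothetical example using scikit-learn; the vocabulary, documents and labels are all invented.

```python
from sklearn.neural_network import MLPClassifier

# Vocabulary of ontology identifiers produced by semantic indexing (invented).
VOCAB = ['CHEM:0001', 'PROT:0042', 'COND:0007', 'GENE:0311']

def to_features(annotations: set[str]) -> list[int]:
    """Binary vector: 1 if the document mentions the ontology term."""
    return [1 if term in annotations else 0 for term in VOCAB]

# Hypothetical training documents (as sets of annotated terms) and labels
# indicating whether each document reports a drug-target association.
docs = [{'CHEM:0001', 'PROT:0042'}, {'COND:0007'},
        {'CHEM:0001', 'GENE:0311'}, {'PROT:0042', 'GENE:0311'}]
labels = [1, 0, 1, 1]

X = [to_features(d) for d in docs]
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, labels)
print(model.predict([to_features({'CHEM:0001'})]))
```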

The rise of AI in drug development

Wilson notes that while Elsevier may have traditionally been seen as a journal publisher, the company’s focus is shifting towards information and analytical services for science. ‘We are focusing on the business of doing the research. We are using those analytical tools to answer R&D questions.’

Wilson explained that AI and machine learning are powerful tools, but that does not mean traditional methods should be dismissed. ‘In drug development there are a lot of predictive analytics tools which have been used for the last couple of decades. So when people start talking about machine learning, there is a lot you can do with traditional statistical models or traditional data science.

‘However, what we are starting to see is machine learning models being used more routinely and becoming part of the arsenal of tools used by researchers. One of the challenges we do see, though, is what you might call socialising that within the work environment,’ Wilson continued.

‘Getting people comfortable with the tools and able to interpret, or have a sense of, what kind of confidence they should have in the output. Those aspects are perhaps something that we don’t talk about quite so much, but the social aspects are probably more important, in some senses, than the actual computing power,’ added Wilson.

As AI and machine learning tools become more ubiquitous, they are finding their way into many facets of our daily lives, from search engines to the smart speakers found in many people’s homes, to Netflix and Spotify recommendations. The same is becoming true of science and research.

It may take a little bit longer for these technologies to permeate into laboratory practices – particularly for highly regulated industries such as drug development, which can be more reluctant to embrace technological change.

Wilson noted that we are beginning to see predictive analytics ‘where people want to get some support, in terms of how to fabricate a new chemical or understand its potential properties. Alternatively, they might want to identify what genes and proteins are associated with a specific disease condition.’

‘Generally, people have much higher expectations, in terms of the specificity of the information that is going to be delivered to them. I think that is really one of the key elements to all of this, in terms of user needs, because, in a sense, AI and semantic technology should sit in the background. It should be a tool that supports the workflow teams and delivers them insights and data, but they don’t necessarily need to be able to program neural networks themselves. They just want to get the right information at the right time,’ concluded Wilson.


