
Perfecting pills and potions

‘You are here to learn the subtle science and exact art of potion making... the delicate power of liquids that creep through human veins, bewitching the mind, ensnaring the senses... I can teach you how to bottle fame, brew glory, even stopper death.’[1]

J K Rowling’s potions master, Professor Severus Snape, is talking of magic, but his approach is in many ways more scientific (‘there is little foolish wand waving here,’ he warns) than the muggle explanations of drug effects that passed for professional consensus until, in the scheme of things, quite recently. In anything like its present-day sense, pharmacology is (give or take an argument or two over detail) one hundred and sixty-five years old. For much of that short history, pharmacological data analysis was externally observational: the body was a black box, with the drug as input and detected effects as outputs. The current receptor-based molecular approach, though its theoretical roots lie in the early 20th century, really dates only from after the Second World War.

Though pharmacology is now an extensively differentiated field with numerous sub-specialisms, broadly speaking it splits into two questions: how drugs enter and leave the body, and exactly what they do while they are in there. The first is the concern of pharmacokinetics, while the second is covered by pharmacodynamics.
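
To make the split concrete, here is a minimal sketch in Python (not drawn from any package discussed in this article) pairing a one-compartment pharmacokinetic model with a simple Emax pharmacodynamic response; the dose, volume and rate constants are invented for illustration.

import numpy as np

def concentration(t, dose_mg=100.0, volume_l=40.0, k_elim=0.2):
    """Pharmacokinetics, one-compartment IV bolus: C(t) = (D/V) * exp(-k*t)."""
    return (dose_mg / volume_l) * np.exp(-k_elim * t)

def emax_effect(c, emax=1.0, ec50=1.0):
    """Pharmacodynamics, Emax model: effect = Emax * C / (EC50 + C)."""
    return emax * c / (ec50 + c)

t_hours = np.linspace(0, 24, 7)        # sampling times after dosing
c = concentration(t_hours)             # what the body does to the drug
print(np.round(emax_effect(c), 3))     # what the drug does to the body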

Drug discovery is not the only focus. Existing agents are examined for previously unknown effects, desirable or otherwise, whose identification may offer new benefits. New foodstuffs, or modifications of old ones, need to be investigated too – especially at a time when science is providing us with new means of creating them. New discoveries reveal the role of previously unsuspected agents in apparently unrelated problems. Cosmetics (and even fragrances which are never applied directly to the body) interact with human systems in ways which must be investigated for toxic and allergenic reactions. Even changes in the balance of well-known nutrients, from fats to vitamins, can throw up effects for which medicine, industry and society need to be prepared. 

Every substance introduced into a living organism has a range of effects, of which some are beneficial, some are welfare-neutral and some are toxic. Though we most commonly think in terms of elective imports to human or other animal metabolisms, this applies to any organism, and even to fundamental essentials such as oxygen (see box: Oxygene).

Substances are classified, pharmacologically, by the ratio of therapeutic effect to toxic. The larger that ratio, the better: a ratio of one indicates toxicity equal to benefit, while a ratio of five is seen as useful. The lower the ratio, the more important it is to ensure localised delivery and rapid breakdown. But these ratios vary from individual to individual, and only a statistical estimate can be applied to populations. Furthermore, substances do not operate in isolation; they, and their pharmacologies, interact with each other and with their environment. Then there are supplementary substances which may be used in an attempt to increase the ratio by enhancing therapeutic effects and buffering or reducing toxic ones, each with a pharmacology of its own which adds further to the complexity. Well-defined models, and soundly based data-analytic methods against which these descriptors can be tested, are therefore vital in extracting meaningful, useful and reliable findings.
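
As a hypothetical worked example of such a ratio, the conventional therapeutic index divides the median toxic dose by the median effective dose; all figures below are invented.

# Therapeutic index TI = TD50 / ED50: median toxic dose over
# median effective dose. All figures invented for illustration.
td50_mg = 250.0   # dose toxic to half the population
ed50_mg = 50.0    # dose effective in half the population
print(f"Therapeutic index: {td50_mg / ed50_mg:.1f}")   # 5.0 -- 'useful' by the rule of thumb above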

A major focus of pharmacological investigation is drug discovery for medical intervention, and here another imperative behind careful statistical treatment of pharmacological data arises from the high ethical and economic costs of generating them. Whatever one’s attitude to animal and human tests, there is (the occasional monomaniac tyranny aside) broad social consensus that their numbers should not exceed absolute necessity, especially given that every human trial potentially puts the welfare of a volunteer at risk. Such work is also expensive in financial terms, and both costs are multiplied several thousandfold by the low proportion of programmes which eventually lead to a usable outcome. It therefore becomes a moral and fiscal priority to extract the maximum knowledge from the smallest possible number of experiments, through efficiently designed data generation and utilisation strategies.
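
One standard tool for keeping experiment numbers to the minimum is prospective power analysis, which estimates the smallest group size able to detect a given effect. The sketch below uses the statsmodels library, with illustrative thresholds rather than values from any study mentioned here.

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,   # standardised (Cohen's d) difference we need to detect
    alpha=0.05,        # acceptable false-positive rate
    power=0.8,         # desired probability of detecting a real effect
)
print(f"Minimum subjects per group: {n_per_group:.1f}")   # about 26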

Software for pharmacological work comes in the same sort of variety as in other fields. Generic products can, of course, be used – and often are, though an increasing number of them now supply purpose-designed functions. Others are designed specifically for the life sciences, and there is the same mix of commercial, free and open-source solutions now familiar from every area of scientific computing.

Unlike some market segments (office suites, for example, and the LIMS I’ll mention later), data-analytic software within an organisation is often not monolithic. Visiting and talking to people in preparation for this article, I’ve seen many examples of one package being used at management level, others at laboratory level, and several more on individual desktops. In other cases, the variation is between one site and another. The crucial issue is whether data and analytic results can travel transparently back and forth across software and organisational boundaries. Given the wide range of import and export file formats available in most statistics products these days, it’s usually possible to find a lingua franca which allows everyone their favourite tools within an overall system.
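
In practice, that lingua franca is often something as plain as CSV. A minimal sketch of the round trip, with hypothetical file and column names:

import pandas as pd

results = pd.read_csv("assay_results.csv")         # e.g. exported from a LIMS
summary = results.groupby("compound")["response"].mean()
summary.to_csv("summary_by_compound.csv")          # importable by SPSS, GenStat, Prism, etc.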

An example of the free-to-use, dedicated sector is InVivoStat, the name of which gives clear notice of its purpose and focus: ‘designed specifically for researchers... where exploiting the experimental design is crucial for reliable statistical analyses’, to quote a comparative review[2] by five such researchers at the end of 2011. Examining the package shows the provided tools to be a subset chosen from those available in generic packages, with selection and presentation being the key to specificity.

General statistics packages increasingly include macros or wizards to cater for the expectations of field-specific users. These routines serve as personalised guides through the maze of available methods to the features with which a researcher (in this case a pharmacologist) is familiar, then present a particular interface for control of those features, without altering or obscuring the generality of the product itself. Other products leave the user to understand that their specifics are simply applications of general statistical principles. The manual for Unistat, a general statistics package popular with life-science researchers, adopts the halfway-house approach of providing discipline-contextualised examples, such as using multiple dose-response curves to demonstrate nonlinear logistic regression.
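
For a sense of what such an example involves, the sketch below fits a four-parameter logistic dose-response curve using generic open-source tools (scipy) rather than Unistat itself; the data are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic: response as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])      # arbitrary units
resp = np.array([0.05, 0.10, 0.28, 0.55, 0.80, 0.93, 0.98])  # synthetic responses

params, _ = curve_fit(four_pl, dose, resp, p0=[0.0, 1.0, 0.5, 1.0])
print(dict(zip(["bottom", "top", "ec50", "hill"], np.round(params, 3))))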

SPSS, with its heritage firmly fixed in the sciences, is a popular choice that crops up in a wide spread of contexts. Software with an emphasis on plotting or other visualisation approaches is widely used: GraphPad’s Prism, for instance, recurs in several studies, such as an investigation[3] of pharmacological calcium channel blocking to reduce chronic pain in sufferers of irritable bowel syndrome. The objective there, in a set of experiments using rodents, was to compare the ionic conductances contributing to neuronal firing and so identify the most promising analgesic approaches; difference analyses were therefore central.
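
‘Difference analyses’ of this kind reduce, at their simplest, to comparing a measured quantity between treated and control groups. A minimal sketch with invented conductance readings:

from scipy.stats import ttest_ind

control = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # synthetic conductance readings
treated = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8]   # same measure after channel block

t_stat, p_value = ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")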

Packages with a life-science background to their evolution are obviously going to be popular in this area, an assumption borne out by plaudits for VSNi’s GenStat (see box: What’s a mother to do?, for example). Given the immense volumes of data generated by some pharmacological studies, these are obvious candidates for data mining. StatSoft’s Statistica is well represented in its own right (as in the Time and tide box), but its Data Miner module is also found in several programmes which seek to reuse research by harvesting existing and ever-growing databases of past results.

As Thermo Fisher’s Trish Meek sketches out (see box: Straight down the LIM), the blizzard of data which arises from the successive phases of pharmacological investigation needs to be contained and managed. ELN (Electronic Laboratory Notebook) and LIMS (Laboratory Information Management System) software are these days essential to data analysis in many areas of science – a trend of which pharmacology is a prime example. Such systems are as much of a specialism as pharmacology itself, and the expertise which their suppliers amass is similarly specialised.

Though LIMS are sold by numerous suppliers at various scales, and there is at least one inventorying example – Quartzy, run on a free-to-use, bottom-up basis – those which can cope with the data volumes of major pharmacological investment programmes come from a few specialist suppliers. The Watson LIMS mentioned by Meek, for instance, has been adopted by a substantial majority of the world’s largest pharmaceutical laboratories.

In common with most areas of data management, there is currently a move towards distributed cloud approaches although, given the investment levels often involved, security remains an issue. Provider Core LIMS makes a point of emphasising that its modular solutions are entirely web-based and accessible from wherever the client has authorised users, but can be either installed on a client’s own in-situ servers or remotely cloud-hosted.



OXYGENE

Therapeutic interventions are what usually spring to mind when the word pharmacology is mentioned, but they cannot be meaningfully analysed without reference to their environment.

As a fundamental component of animal metabolism, ever present in every context, oxygen is also a primary object of pharmacological attention – especially in its most reactive forms, which play many crucial roles, particularly in association with kinases and in relation to genetic expression or deletion effects.

Comparative statistical analysis of results from experiments investigating cancer preventative effects of gugulipid (a natural extract from a plant used in traditional Indian medicine) on prostate cells showed[4] that beneficial proapoptotic and angiogenesis suppression effects were dependent upon levels of c-Jun N-terminal kinase. Initiation of this regulatory effect, however, was in turn dependent upon available levels of reactive oxygen species (ROS) – a connection hypothesised from analyses of previous experimental data on apoptotic kinases and built into the study as a result. This is one of many studies pointing to potential mechanisms for the suppression of cancers.

In plants, designed experiments linked to well-structured data analysis show that ROS suppression by negative feedback loops affects lignin production for repair processes[5] and ROS-related responses to oral insect secretions[6] during predation.

ROS have many damaging effects, however, and in particular are implicated, with another kinase, in heart failure. Analysis of recent data[7] has shown, for example, a ROS link to sodium and calcium overload in heart muscle cells.

Adopting a three-way data-analytic approach to high levels of intravenously administered ascorbate (for example, experiments[8] by Levine and others at NIH Bethesda) has opened up new pharmacodynamic knowledge of peroxide decomposition mechanisms and, again, of impacts on cancerous cells.

 

Straight down the LIM

 

Trish Meek, director of Product Strategy, Life Sciences, at Thermo Fisher Scientific, comments:

‘Understanding how the body metabolises drugs and ensuring the safety and efficacy of not only the initial drug substance, but also all of its metabolites is a critical step in the drug development process. The key is elimination of poor candidates as early as possible in the process, ensuring that only the strongest candidates reach clinical trials.

‘Today, this work often begins with in silico testing. Once the computer models identify a strong candidate, in vitro work can begin. ADME/Tox (Absorption, Distribution, Metabolism, Excretion and Toxicology) has permitted researchers to eliminate poor candidates earlier in the development process. The trade-off is that in vitro testing has created a deluge of data and required pharmaceutical companies to look at how they handle throughput, storage and analysis. An informatics solution, like a LIMS, is critical to the success of these laboratories as they enable companies to increase their throughput and decrease overall costs by automating the ADME process from initial data acquisition through analysis and review to the ultimate acceptance of the data. By implementing a data management system, one of our global pharmaceutical LIMS customers was able to increase their Tier 1 ADME compounds screening rate to more than 2,000 compounds a week.

‘Following in vitro testing, in vivo animal studies are conducted to determine which candidates should proceed to clinical trial. Bioanalytical labs run studies, performing drug metabolism (DM) and Pharmacokinetics/Pharmacodynamics (PK/PD) analysis on the samples to determine their profile in the human body and ensure good drug clearance, safety, and efficacy in the test animals.

‘Again, data management is critical for managing the study design and execution to ensure that runs are performed correctly, and to determine the final results to submit to the FDA in the new drug application. A good LIMS system, like our Watson, promotes good practice and effective study management.’
