Disruptive technologies in informatics
In his final report from the Paperless Lab Academy, Tom Wilkie discusses the role of disruptive technologies such as cloud and big data
The dominant theme at the Paperless Lab Academy in Barcelona on 14 and 15 April was the way in which the pharmaceutical industry is outsourcing so many of its operations – and the implications of this major change for informatics, as reported in Out with the old and out with the internal, the first article in this series. The risks of Lost knowledge and dead data associated with this strategy were recounted in the second article.
The issue of disruptive technologies – specifically the cloud and, to a lesser extent, big data – was not quite so up-front during the PLA, but it kept breaking through. The two tended to be regarded as two sides of one coin – as if the cloud were the only technology capable of dealing with the challenges presented by big data. Curiously, mobile devices such as tablets and PDAs featured frequently; these appear to have become accepted as part of the laboratory landscape and are not regarded as disruptive.
Some delegates were clearly uneasy about hitching themselves to the cloud due to concerns over data security and the protection both of intellectual property and of personal information. But, as the PLA progressed, it became clear that, in a world of externalised pharma companies, the lack of standardisation of data formats means that even with current technologies companies are losing knowledge and are unable to extract the maximum value from their data.
The disparate threads of outsourcing, maintaining in-house knowledge and quality control, and disruptive technologies – both the cloud and big data – were brought together neatly in the presentation from Nicolas Goffard. He is bioinformatics platform manager for the French start-up biotechnology company Enterome Bioscience, which was spun out of Inria, the French public research agency dedicated to computational sciences, in 2012.
Enterome Bioscience is looking for bacterial genes specific to disease and is sorting through human faeces to find them. The aim is to improve treatment of chronic metabolic, gastrointestinal, and autoimmune diseases. Although patients may have a genetic susceptibility, the conditions are also triggered by environmental factors including an imbalance in the intestinal bacterial ecosystem.
But as a start-up, Enterome has to outsource its sample collection and analysis, and has to send data out for biostatistics and bioinformatics analysis. According to Goffard, however, it is still vital, under whatever ‘virtualisation’ model might be adopted, to keep quality standards high despite the pressure to achieve results rapidly. Enterome reached for a cloud solution in the form of Core Informatics’ LIMS to manage the data collection, the data generation, and the results of the data analysis. Enterome outsources the process of collecting the stool samples and then processing them to extract material suitable for analysis; DNA extraction and sequencing come next; and the final, added-value step is the application of bioinformatics and biostatistics methods to analyse the data. It opted for the Core LIMS web-based product, together with Pipeline Pilot, he said, because the system had to be highly configurable to support new workflows and new partners.
For Goffard, concerns about data security in the cloud did not figure highly. ‘We didn’t have any trouble using the cloud,’ he said. ‘It’s secure.’