Food firm benefits from information system
The R&D laboratory of a large European food manufacturer, with dozens of international subsidiaries, was preparing to move into a new research facility. Its research was minimally automated, and the company was running several rudimentary, home-grown information management systems. The company's biologists were unaware of how much a Laboratory Information Management System (LIMS) could aid their proteomics research.
The laboratory conducts experiments to improve the taste, texture, and shelf-life of the company's products; much of its research concerns the molecular processes in fermentation. The challenge in automating this, as in any proteomics laboratory, was to manage the extremely high volume of samples and provide access to all aspects of the data generation process, to enable accurate and reproducible analysis of the data.
Central to understanding fermentation is the fate of the proteins involved - in this case, which proteins contribute to the formation of dairy products. A single changed protein may transform a fine, smooth yoghurt into a thick cheese - and vice versa. The food R&D division had requirements common among protein researchers: bioinformatics, data storage and manipulation, 2D gel electrophoresis (from spotting to picking), transcriptomics, mass spectrometry file handling, instrument integration, limiting manual error through increased automation, and compliance with United States federal and Food and Drug Administration (FDA) regulations, since some of its production is exported to the USA.
Unlike many standard laboratories, this one operated in R&D mode on one side of the building, passing any significant findings over into production. This transfer from R&D to production, which is not typically accomplished so quickly, necessitated a novel design for the information management system.
Information management with instrument, analytics and third-party integration
Managing proteomic information involves collecting, indexing, searching, and analysing data generated by a wide variety of instruments, robotics, and software platforms. An automation system must support various third-party packages, including search engines, biological annotation tools, and analytics, and must handle project, study, and experiment management, as well as reporting, in a consistent, logical manner. To pull together the formatted and unformatted data, a unified interface is the key to efficiency and traceability. Figure 1, below, shows the modular makeup of one such tool - the Sapphire Proteomics Accelerator.
Understanding the science behind the experiments
Association of instrument-derived data with analytical and biologically relevant information requires a system that understands the science behind the experiments. Many of the complex methods used in proteomics today compound the problem of annotating samples, which may be several iterations away from the original because of a variety of extractions, splits, and fractions from the master sample. This has become even more relevant when tracking the extraction of data from image files, which form complex multidimensional arrays of data from single samples. The resulting analysis often predicts the fate of several samples related to the image. Scientists frequently save precious gels run with limited sample material, compare them against previously run gels, and then go back to re-pick new sub-samples (spots). Figure 2, below, is an example of a typical process flow for a proteomics sample.
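The sample genealogy described above - derived samples that sit several extractions, splits, or spot-picks away from the master - can be sketched as a simple parent-child structure. This is an illustrative model only, not LabVantage's implementation; all class and ID names are hypothetical.

```python
# Hypothetical sketch of sample-genealogy tracking: each derived sample
# (extraction, split, fraction, or gel spot-pick) records its parent,
# so any downstream result can be traced back to the master sample.

class Sample:
    def __init__(self, sample_id, parent=None, derivation=None):
        self.sample_id = sample_id
        self.parent = parent          # None for the master sample
        self.derivation = derivation  # e.g. 'fraction', 'split', 'spot-pick'
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def lineage(self):
        """Return the chain of sample IDs from the master down to this one."""
        chain = []
        node = self
        while node is not None:
            chain.append(node.sample_id)
            node = node.parent
        return list(reversed(chain))

# A master fermentation sample, fractionated, then spot-picked from a 2D gel
master = Sample("FERM-001")
fraction = Sample("FERM-001-F2", parent=master, derivation="fraction")
spot = Sample("FERM-001-F2-S17", parent=fraction, derivation="spot-pick")

print(spot.lineage())  # ['FERM-001', 'FERM-001-F2', 'FERM-001-F2-S17']
```

Because every derived sample keeps a link to its parent, a re-picked spot can always be traced back through the gel fraction to the original material.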
A typical proteomics laboratory workflow generates links to many external files, in a multitude of formats, most of which need to be managed in real time. Sapphire was used to automate the manual process and enable tracking of real-time data acquisition, virtually eliminating the need for manual intervention. Regulatory requirements have further complicated matters, demanding sophisticated management of this information based on user privileges/roles and event auditing, while still keeping the data accessible to the scientist.
Proteomics data management and automation solutions need to provide flexibility with respect to instrument interchange, and recognise the concept of the biological 'enterprise', with respect to upstream and downstream processes (sequencing and genomics for upstream, and transcriptomics for downstream). In order to manage modern laboratories, which typically are not tied to one instrument vendor, it is necessary to integrate a wide variety of different platforms simply and rapidly.
Projects, studies and experiments
The R&D division contacted LabVantage about its Sapphire Proteomics Accelerator, a modular LIMS that includes 'plug and play' solutions to these common problems. The system helps researchers structure and execute projects, studies, requests, experiments, analyses, and static data. Today, the entire company is benefiting from more structured experiments and more exacting results.
The food R&D division needed to enforce global objectives. Sapphire was able to meet this need by providing certain security features. For example, a manager can specify that only particular users can create or edit studies and experiments, or that only authorised personnel can close or complete a project.
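The role-based controls described above - restricting who may create or edit studies and who may close a project - amount to a simple permission lookup. The sketch below is a minimal, hypothetical illustration of that pattern; the role and action names are invented for the example.

```python
# Hypothetical sketch of role-based security checks: each action is
# mapped to the set of roles permitted to perform it.

PERMISSIONS = {
    "create_study": {"manager", "study_director"},
    "edit_experiment": {"manager", "scientist"},
    "close_project": {"manager"},
}

def is_allowed(role, action):
    """Return True if the given role may perform the given action."""
    return role in PERMISSIONS.get(action, set())

print(is_allowed("scientist", "edit_experiment"))  # True
print(is_allowed("scientist", "close_project"))    # False
```

In a real LIMS the same check would also feed the event audit trail, recording who attempted each action and when.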
The 'study level' provided a more local perspective. With many features similar to those on the project level, studies are able to provide additional detail on how to accomplish the objectives outlined in a given project. Although it is not normally required, Sapphire can connect to legacy systems used to manage studies. Security at the study level is similar to that of the project. This laboratory also required a request layer in order to communicate with its customers - primarily internal customers, who need research conducted on a product within a short time frame. The system requires minimal information input from the customer, yet provides real-time feedback on the status of the request.
A major issue was experimental and analytical design. A platform that allows users to design their experiments visually enables them to add 'methods' that outline the steps required for a particular experiment. Each of these steps can be integrated with instruments and robots, allowing direct, bi-directional transfer of real-time data into or out of the LIMS. The system enables users to specify file formats, so files can be produced for many different instruments. This overall experimental framework allows the system to keep track of the experiment, while allowing the user to observe its progress visually. The user can create methods, and save and re-use those found useful for the next experiment.
Figure 3, below, is an example of the experimental design and management tool provided with the Sapphire Proteomics Accelerator. Sapphire can be configured to address the needs of a wide variety of users, although it was designed to support the managerial point of view. Hence, the LIMS can produce reports describing all levels, steps, and data within a project/study, and move down through a variety of levels to provide an accurate, up-to-date view of what is transpiring in the laboratory. This type of enterprise genealogy would be difficult, at best, to produce manually.
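The idea of an experiment built from a reusable method - an ordered list of steps whose completion the system tracks - can be sketched in a few lines. This is a hypothetical model for illustration; the class, method, and step names are not from the Sapphire product.

```python
# Hypothetical sketch of experiments built from reusable methods:
# a method lists the required steps, and an experiment records which
# steps have completed (e.g. as instruments report results back).

class Method:
    def __init__(self, name, steps):
        self.name = name
        self.steps = list(steps)  # ordered step names

class Experiment:
    def __init__(self, name, method):
        self.name = name
        self.method = method      # methods are saved and re-used
        self.completed = set()

    def complete_step(self, step):
        if step not in self.method.steps:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    def progress(self):
        return f"{len(self.completed)}/{len(self.method.steps)} steps done"

# One saved method re-used for a new experiment
digest = Method("tryptic-digest", ["aliquot", "digest", "desalt", "ms-run"])
exp = Experiment("yoghurt-texture-01", digest)
exp.complete_step("aliquot")
exp.complete_step("digest")
print(exp.progress())  # 2/4 steps done
```

Because the method object is separate from the experiment, the same validated sequence of steps can be attached to the next experiment unchanged.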
Handling numerous sample formats
A principal need in the research environment is support for numerous sample formats, such as micro-titre plates, racks of tubes, and microarray chips, as well as for transfers between these formats, while managing large volumes of samples and materials. In addition to providing this functionality, Sapphire can replay every aspect of a managed scientific process, enabling a precise and accurate understanding of how to repeat an experiment.
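A transfer between container formats, such as from a 96-well micro-titre plate into racks of tubes, is essentially a position-mapping problem. The sketch below illustrates one way such a mapping might look; the well and tube naming scheme is an assumption made for the example.

```python
# Hypothetical sketch of a format transfer: contents of a 96-well
# micro-titre plate are redistributed into racks of 24 tubes,
# preserving row-major well order so the mapping stays traceable.

def plate_wells(rows="ABCDEFGH", cols=12):
    """Well IDs of a 96-well plate in row-major order: A1, A2, ... H12."""
    return [f"{r}{c}" for r in rows for c in range(1, cols + 1)]

def transfer(samples_by_well, rack_size=24):
    """Split plate contents into racks of `rack_size` tubes, in well order."""
    occupied = [w for w in plate_wells() if w in samples_by_well]
    racks = []
    for i in range(0, len(occupied), rack_size):
        rack = {f"T{j + 1}": samples_by_well[w]
                for j, w in enumerate(occupied[i:i + rack_size])}
        racks.append(rack)
    return racks

# A plate with 30 occupied wells (A1..C6) holding samples SAMP-1..SAMP-30
plate = {w: f"SAMP-{n}" for n, w in enumerate(plate_wells()[:30], start=1)}
racks = transfer(plate)
print(len(racks), racks[1]["T6"])  # 2 SAMP-30
```

Recording which well fed which tube is exactly the information the LIMS needs in order to replay the process later.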
The proteomics LIMS provides platform-independent data access, traceability, and integration from sample collection to protein result storage. Scientists can use individual workflow stages within the overall process flow to meet their own requirements. Sample data can be combined with data from public or proprietary databases, allowing better analysis and understanding of the scientific hypothesis the laboratory is testing. In addition, integration with 'best of breed' proteomics tools - such as gel image analysis, spot picking, instrument and robotics integration platforms, and web-based search engines - enables scientists to transform data into meaningful information in the way they find most comfortable.
In this example, the Sapphire proteomics LIMS streamlined data flow within the organisation, and centralised the information into one primary database. The LIMS unifies vast and disparate volumes of biological and chemical data, along with their related applications and tools, into a single, browser-based, scientific interface. Built on an extensible life-science data model, the platform understands the context of data being integrated and the relationships between associated data. It enables scientists to query, view and analyse research data without being required to reformat their data-gathering methodology or switch between multiple product interfaces. The platform also includes a state-driven workflow engine for automating any process. All data deposited into the LIMS database can be distributed through electronic or hardcopy reports and queries. Thus, the LIMS becomes a comprehensive enterprise solution that enables research organisations to focus time and effort on their true mission: the production of scientific information and better food.
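A state-driven workflow engine of the kind mentioned above can be reduced, in sketch form, to a table of permitted state transitions. The states and request IDs below are invented for illustration and do not reflect the actual Sapphire engine.

```python
# Hypothetical sketch of a state-driven workflow: a request moves through
# defined states, and only transitions listed in the table are accepted.

TRANSITIONS = {
    "submitted": {"in_progress", "rejected"},
    "in_progress": {"analysis", "on_hold"},
    "analysis": {"reported"},
    "on_hold": {"in_progress"},
}

class Request:
    def __init__(self, request_id):
        self.request_id = request_id
        self.state = "submitted"
        self.history = ["submitted"]  # audit trail of states visited

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

req = Request("REQ-042")
req.advance("in_progress")
req.advance("analysis")
req.advance("reported")
print(req.history)  # ['submitted', 'in_progress', 'analysis', 'reported']
```

The recorded history doubles as the real-time status feedback an internal customer sees for their request, and rejected transitions are exactly the events an auditor would want logged.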
J. Kelly Ganjei is Global Life Science Product Manager, LabVantage Solutions, Inc; Terry Smallmon is Director of Informatics, LabVantage Solutions, Inc; and Charles Lee is Application Programmer, LabVantage Solutions, Inc.