
Where's the beef?

In January 2013, it was discovered that supermarkets within the United Kingdom had been selling food products that were labelled as beef but actually contained varying degrees of undeclared horse meat – up to 100 per cent in some cases. The fact that these products had entered the food chain cast a public eye on both traceability and testing procedures within the food and drink industry.

When it later emerged that the adulteration went beyond the UK’s borders and was in fact symptomatic of a wider issue throughout Europe, it called the industry’s supply chains into question and prompted a tightening of European Commission requirements regarding sampling and testing for the presence of equine DNA in products where beef is listed as the primary ingredient. Further testing for the presence of phenylbutazone, a nonsteroidal anti-inflammatory drug (NSAID) used for the short-term treatment of fever and pain in animals, was also ordered. Adding to the scandal, it was later revealed that traces of pig DNA had been detected in beef products, with implications for individuals who refrain from eating such meat on religious grounds.

This is, of course, not the first case of the adulteration of foodstuffs to come to light in recent years – one significant example being the presence of melamine in milk and infant formula in China in 2008. The need to comply with the regulations born of health and safety concerns is clear, but beyond that is another issue. Consumer confidence is a fragile thing and, with incidents of adulteration continuing to occur, the industry is forced to examine its processes in minute detail. Consumers need assurance that what a company says is in a product is what is actually present. So what steps can companies take, and where do informatics solutions fit in?

Testing approaches

The difficulty in identifying and preventing occurrences such as the ones detailed above comes down to the nature of the testing being conducted within food and drink labs – namely, that there was no expectation that these substances would be found in their respective products. Previously a food safety regulator in Northern Ireland, Dr Paul Young is senior director of food and environment business operations at Waters. He explained that, until relatively recently, the testing approach has been one of targeted screening – whereby scientists set up a method to identify certain organisms or compounds, such as pathogens, pesticides or veterinary drug residues in products of animal origin, and at what concentrations they are present. Should a sample exceed those tolerances, it is rejected.

Dr Young said: ‘In the case of the presence of melamine, it was so unexpected that it should ever be present in food that the industry had to shift its focus. The questions became ones of how could we increase the breadth of our screening, how do we look for much larger numbers of the things we expect, and how do we screen for substances that we would never imagine could be present? Answering these is an incredible informatics challenge.’

One option is the use of multivariate analysis of time-of-flight (ToF) mass spectrometry-generated data. Models are built so that known food samples can be used as a reference. Informatics software processes the data generated from that reference and identifies whether there is a significant difference in the pattern of the analytes in the sample being tested. Ultimately, if the incoming sample looks different, the next challenge is to identify what makes it different. ‘That’s the Holy Grail and where the industry is trying to get to – easy mechanisms for screening incoming ingredients against ones we know to be good,’ said Young. ‘We can then identify material that differs and, at that point, reject the sample or do further analysis on it. If we’d had the ability to do that back in 2007, perhaps melamine might have been identified much earlier.’
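
In outline, that kind of multivariate comparison can be pictured as fitting a statistical model to a set of known-good samples and measuring how far an incoming sample’s analyte pattern falls from it. The sketch below assumes each sample has already been reduced to a vector of analyte intensities; the distance metric and threshold are illustrative choices, not Waters’ algorithm.

```python
# A minimal sketch of multivariate screening against a reference set of
# known-good samples. Each sample is assumed to have been reduced to a vector
# of analyte intensities (e.g. binned ToF-MS peak areas); the threshold and
# distance metric are illustrative choices, not Waters' algorithm.
import numpy as np

def build_reference_model(reference: np.ndarray):
    """Fit a simple model (mean + covariance) from known-good samples."""
    mean = reference.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reference, rowvar=False))  # pinv copes with collinear analytes
    return mean, cov_inv

def screen(sample: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray, threshold: float = 3.0):
    """Flag an incoming sample whose analyte pattern sits far from the reference cloud."""
    diff = sample - mean
    distance = float(np.sqrt(max(diff @ cov_inv @ diff, 0.0)))  # Mahalanobis-style distance
    return ("investigate further" if distance > threshold else "accept", distance)

# Example: 50 known-good samples, 120 analyte intensities each (synthetic data)
reference = np.random.default_rng(0).normal(size=(50, 120))
mean, cov_inv = build_reference_model(reference)
print(screen(reference[0] + 0.1, mean, cov_inv))
```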

Waters’ solutions focus specifically on the detection of chemicals. As many disinfectant by-products tend to be halogenated – i.e. they contain chlorine or bromine – the company has built tools into its Unifi software that enable users to identify any substance that may be halogenated. Measurements can be made on a time-of-flight mass spectrometer and users can process those short run times to screen for hundreds, or even thousands, of chemicals. This is often referred to as non-targeted screening. According to Young, the biggest challenge facing scientists who are doing this type of screening is managing the data. The main issue is how to process the significant amount of data that an instrument generates around a complex food matrix. ‘The entire food industry is struggling with this at the moment,’ said Young.
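
One way such a halogen filter can work in principle is via the isotope pattern: chlorine and bromine each give a characteristic M+2 peak, so a peak with an M+2 partner in the right intensity ratio is worth flagging. The sketch below illustrates that idea with invented tolerances; it is not a description of Unifi’s actual rules.

```python
# A rough sketch of flagging possibly halogenated compounds from the isotope
# pattern: one chlorine gives an M+2 peak at roughly a third of the parent's
# intensity, one bromine at roughly the same intensity. The tolerances below
# are illustrative and are not the rules Unifi actually applies.
M2_SPACING = 1.997  # approximate mass difference between the isotope peaks, in Da

def find_m2_partner(peaks, mz, mz_tol=0.01):
    """Return the intensity of the M+2 isotope peak for a given m/z, if present."""
    for peak_mz, intensity in peaks:
        if abs(peak_mz - (mz + M2_SPACING)) <= mz_tol:
            return intensity
    return None

def possibly_halogenated(peaks, mz, intensity):
    """Flag a peak whose M+2 partner has a chlorine- or bromine-like ratio."""
    m2 = find_m2_partner(peaks, mz)
    if m2 is None or intensity == 0:
        return False
    ratio = m2 / intensity
    return 0.25 <= ratio <= 0.40 or 0.85 <= ratio <= 1.15  # ~one Cl or ~one Br

# Example peak list (m/z, intensity): the first peak mimics a mono-chlorinated compound
peaks = [(212.05, 100_000), (214.05, 32_500), (300.10, 80_000)]
for mz, inten in peaks:
    print(mz, possibly_halogenated(peaks, mz, inten))
```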

Data, data, everywhere

Effective data management is fundamental in providing traceability throughout the laboratory. Young pointed out: ‘If you can’t be assured that the result you’ve generated is linked directly back to the incoming sample, the result itself is worthless.’ When a sample is received, it may or may not be barcoded, but the information relating to it will typically be entered into the LIMS, which will assign a reference number to that sample. That reference number and all associated test requests will be generated on a sample work list by the LIMS. At this point, it becomes a manual process as the operator prints the list, retrieves the sample and carries out the required tests.
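
In schematic terms, and with entirely illustrative names rather than any particular LIMS schema, the registration and work-list step might look like this:

```python
# A minimal sketch of the registration step described above: a sample arrives,
# the LIMS assigns a reference number, and a work list of requested tests is
# generated. Class and field names are illustrative, not any vendor's schema.
from dataclasses import dataclass, field
from itertools import count
from typing import Optional

_refs = count(1)

@dataclass
class Sample:
    description: str
    barcode: Optional[str]
    tests_requested: list
    reference: str = field(default_factory=lambda: f"S-{next(_refs):06d}")

def build_work_list(samples):
    """One (reference, test) row per requested test, ready to print for the operator."""
    return [(s.reference, test) for s in samples for test in s.tests_requested]

incoming = [
    Sample("minced beef, batch 42", "5012345678900", ["equine DNA PCR", "phenylbutazone LC-MS"]),
    Sample("lasagne ready meal", None, ["equine DNA PCR"]),
]
for reference, test in build_work_list(incoming):
    print(reference, test)
```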

‘Although many checks and balances are in place, this step is largely outside computer control as it’s being done on paper,’ said Young. ‘The sample is processed, taken to the instrument, the test is scheduled, the result generated, calculations are performed and ultimately, once everything has been checked by a supervisor, the results are manually entered into the LIMS.’ He added that, as manual processes leave room for transcription errors to creep in, people are striving to complete the circle so that the LIMS can be connected directly to the instruments, eliminating the need for human intervention.
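
Closing that circle essentially means parsing the instrument’s output and writing it back against the LIMS reference without a single value being retyped. A minimal sketch, assuming a simple CSV export and a placeholder post_to_lims() interface rather than any vendor’s real connector:

```python
# A hedged sketch of 'closing the circle': results are parsed straight from an
# instrument export and written back against the LIMS reference, so no value is
# ever retyped. The CSV layout and post_to_lims() interface are assumptions made
# for illustration, not a description of any vendor's connector.
import csv

def parse_instrument_export(path):
    """Yield (sample_ref, analyte, value, units) rows from a CSV export file."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield row["sample_ref"], row["analyte"], float(row["value"]), row["units"]

def post_to_lims(sample_ref, analyte, value, units):
    """Placeholder for the LIMS interface (database insert, web service call, etc.)."""
    print(f"LIMS <- {sample_ref}: {analyte} = {value} {units}")

def transfer_results(path):
    """Move every result from the instrument export into the LIMS untouched by hand."""
    for sample_ref, analyte, value, units in parse_instrument_export(path):
        post_to_lims(sample_ref, analyte, value, units)

# Usage: transfer_results("tof_export_2013-03-01.csv")
```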

Bill Gordon, VP of business development at KineMatik, agreed that to meet the challenges of operational complexity and regulatory compliance it is becoming critical to adopt a substantially enhanced level of information integration across systems, functions, and the extended supply chain – moving from research and development through sourcing, manufacturing, storage and distribution, and consumer interaction. ‘No matter what type of food and drink-related business you run – from farming to a successful multinational product brand – you need to marshal your information and resources to improve performance around five common objectives: governance and compliance, internal and external communication, efficient document access, recall management, and auditable and defensible information.’

The internal and external communication of research and results can bring their own set of difficulties. Mariana Vaschetto, VP for marketing at Dotmatics, a global informatics solution and service provider, said the problem lies in the fact that data is often being exchanged via email, Excel spreadsheets and shared drives – or, in some cases, it is not happening at all. Critically, data that is locked in paper notebooks or a spreadsheet on a researcher’s computer is neither sharable nor searchable. ‘Many food and drink research organisations are now contemplating or already going through the process of updating their practices from pen and paper to an electronic system that allows the easy archival and retrieval of information,’ said Vaschetto. 

Optimising the lab

Given the complexity involved in food and drink laboratories, companies like Accelrys have created scientific maturity models that are designed to help organisations understand how to optimise their workflows and best distribute their people, processes, tools and technology. Deployed on the Accelrys website 12 months ago, the tool begins by presenting people with a 12-question assessment to determine how well their companies perform in each of the four areas, and how they compare to industry peers. Gaps are then quickly identified.

By effectively managing the entire scientific lifecycle, companies are able to demonstrate a continuous thread of tracking and traceability based on the scientific decision process. Ted Pawela, senior director of materials science and engineering solutions at Accelrys, explained: ‘People will not only be able to know precisely what is in each product, but can understand the basis on which decisions were made regarding its composition. In an electronically enabled way they are able to track back from the manufacturing process to the testing of samples, research and development, and experimentation.’ The use of electronic signatures also ensures that accountability is recorded at each stage of the process.
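
One simple way to picture that continuous, signed thread, purely as an illustration of the principle rather than Accelrys’ implementation, is a chain of records in which each step is signed and hashed together with the one before it:

```python
# A simple illustration of a signed, continuous thread of decisions (not
# Accelrys' implementation). Each step is recorded with its signer and hashed
# together with the previous record, so the chain from goods-in to release can
# be verified later. Field names and steps are invented for the example.
import hashlib
import json
from datetime import datetime, timezone

def sign_step(previous_hash, step, signer, payload):
    record = {
        "step": step,
        "signer": signer,
        "signed_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "previous": previous_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

chain, previous = [], "genesis"
for step, signer, payload in [
    ("goods-in",   "j.smith", {"batch": "42", "supplier": "A"}),
    ("QC testing", "a.jones", {"equine_dna": "not detected"}),
    ("release",    "m.patel", {"disposition": "approved"}),
]:
    record = sign_step(previous, step, signer, payload)
    chain.append(record)
    previous = record["hash"]

print(json.dumps(chain[-1], indent=2))
```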

Waters is another company focused on removing the need for user intervention or transcription. Launched in 2012, the company’s informatics suite, NuGenesis 8, creates an electronic worksheet (implemented in the electronic laboratory notebook) that leads users through the prescribed workflow, ensuring they complete every step and verifying that all input meets established criteria.

Rather than replacing a LIMS, it is designed to plug the informatics gap by automating the laboratory’s processes. Data is captured from the LIMS, catalogued and manipulated through the sample prep. The software then works with the instruments to schedule tests; once tasks are completed they are submitted for approval. The results are then automatically shared with business systems such as the LIMS and ERP.
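
The worksheet idea can be pictured as a prescribed list of steps, each with an acceptance criterion that the entered value must satisfy before the work can move on. A generic sketch of that pattern, with invented steps and limits rather than NuGenesis 8’s own:

```python
# A generic sketch of an electronic worksheet that walks the analyst through a
# prescribed workflow and rejects input that fails its acceptance criteria. The
# steps and limits are invented for illustration; they are not NuGenesis 8's own.
WORKFLOW = [
    ("weigh sample (g)", lambda v: 0.95 <= float(v) <= 1.05),
    ("record dilution",  lambda v: float(v) > 0),
    ("enter peak area",  lambda v: float(v) >= 0),
]

def run_worksheet(entries):
    """Check that every prescribed step has an entry meeting its criterion."""
    problems = []
    for step, criterion in WORKFLOW:
        if step not in entries:
            problems.append(f"missing step: {step}")
        elif not criterion(entries[step]):
            problems.append(f"outside criteria: {step} = {entries[step]}")
    return problems

issues = run_worksheet({"weigh sample (g)": "1.02", "record dilution": "10"})
print(issues or "worksheet complete - submit for approval")
```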

Waters’ Paul Young explained that NuGenesis 8 contains a fair number of technical controls to ensure that testing results are authentic and trustworthy. Log-ins and passwords are required, while audit trails ensure that, once data is collected, users can follow the sample through the entire process. Relational databases then filter results, ensuring that users are only looking at critical data.
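
An audit trail of this kind is, at its simplest, an append-only record of who did what to which sample and when, which can then be queried to follow the sample through the process. A minimal sketch, with an illustrative table layout rather than the vendor’s schema:

```python
# At its simplest, an audit trail is an append-only record of who did what to
# which sample and when, which can then be queried to follow the sample through
# the process. The table layout here is illustrative, not the vendor's schema.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE audit (sample_ref TEXT, action TEXT, user TEXT, at TEXT)")

def log(sample_ref, action, user):
    db.execute("INSERT INTO audit VALUES (?, ?, ?, ?)",
               (sample_ref, action, user, datetime.now(timezone.utc).isoformat()))

log("S-000123", "registered", "j.smith")
log("S-000123", "acquired on ToF-MS", "a.jones")
log("S-000123", "result approved", "m.patel")

# Follow one sample through the entire process
for row in db.execute("SELECT * FROM audit WHERE sample_ref = ? ORDER BY at", ("S-000123",)):
    print(row)
```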

The Dotmatics platform has the ability to import many diverse types of files, store them and make them available for searches and analysis. This feature facilitates the integration of experimental results and ad-hoc analysis within a central repository of data. According to Mariana Vaschetto, users can add experimental data from multiple sources after just one hour of training. The Dotmatics Platform also allows users and administrators to easily upload and manage historical data stored in spreadsheets, documents and images, and connect all the disjointed accumulated data into a single database.
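
The underlying idea, files of any type registered with enough metadata to be found again later, can be sketched as follows; this stands in for the platform’s own import tools, which are not shown here, and the schema is invented for the example.

```python
# A hedged sketch of the central-repository idea: files of any type are
# registered with enough metadata to be found again later. This stands in for
# the platform's own import tools; the schema and fields are invented.
import hashlib
import pathlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT, kind TEXT, sha256 TEXT, project TEXT, keywords TEXT)")

def register(path, project, keywords):
    p = pathlib.Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else "n/a"
    db.execute("INSERT INTO files VALUES (?, ?, ?, ?, ?)",
               (p.name, p.suffix.lstrip("."), digest, project, keywords))

def search(term):
    """Find previously 'locked-away' results by keyword."""
    return db.execute("SELECT name, project FROM files WHERE keywords LIKE ?",
                      (f"%{term}%",)).fetchall()

register("stability_study.xlsx", "beef authenticity", "equine DNA, PCR, 2013")
register("tof_run_0412.raw", "beef authenticity", "ToF-MS, non-targeted screen")
print(search("equine"))
```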

Vaschetto added that instrument data can be collected instantly by the ELN without the need to print it out, and then cut and paste the data output. Experimental templates can be built and shared between researchers to standardise experimental protocols and streamline reporting. Additionally, sample requests can be sent to the lab and results will be automatically populated back into the system for searching and analysis.

One key point of note, added Accelrys’ Ted Pawela, is that new technologies and software solutions do not, and should not, comprise the whole picture. Instead, food and drink companies need to become more transparent about how they deploy all aspects of scientific rigour.

‘It’s important that we show we understand where we are strong, where we are weak, and that we are applying resources and investment to the areas that are in need of improvement,’ he said.

‘Demonstrating the ability to measure and report on those elements is critical and can only be achieved when combining technology with process – otherwise there is just simply too much data.

‘Ultimately, companies need to provide tangible evidence that goes beyond marketing rhetoric to reassure consumers that they are taking action to ensure that incidents of adulteration cannot and will not happen again.’



Adulteration is not confined to Europe. Bill Gordon, VP of business development at KineMatik, comments on US requirements

The industry requirements have changed significantly in recent years. For example, US product recalls have increased in number, complexity, and severity – leading governments to substantially update regulations around food safety. The recent Food Safety Modernization Act (FSMA) introduces significant new rules for prevention, inspection and compliance, imported food safety, recall response requirements, and collaboration within food safety agencies. To consistently comply with these regulations, food and drink companies must enhance their ability to coordinate food testing and safety-related information across the supply chain, and retrieve that information on demand when preventative or responsive action is required, or when the Food and Drug Administration (FDA) requests access.

In the past, it has simply taken too long to address safety and recall issues and trace food contaminants, and the FDA is serious about change. Beyond the FSMA itself, the FDA is conducting two extensive studies designed to see what methods can most quickly trace a variety of foods back to a common source of contamination. It’s clear that fast access to accurate information will be a critical part of such processes once they are practically applied in the industry. This means that companies must develop both accountability and auditability throughout their own processes. They must not only manage information and records efficiently; they must also ensure there’s a clear electronic information trail linking all relevant information, allowing it to be recalled, correlated, and distributed in near-real time.

René Blok, head of the analytical laboratory at IOI Loders Croklaan Europe, describes his laboratory’s informatics set-up

IOI Loders Croklaan Europe implemented its first laboratory information management system (LIMS) in 1990, and introduced Siemens’ Simatic IT Unilab, a multi-lab, multi-language enterprise LIMS for QC, service and R&D labs, in 1999. The main goal was to implement a system that could host the huge quantities of complex analytical data generated during the production processes, and at the same time offer the possibility to connect the analytical instrumentation. The LIMS handles the storage and availability of analytical data produced during operations, the supply chain, and R&D activities – as well as the control samples for the regulation of the analytical process. This has the added advantage of making life easier and more pleasant for the analysts by reducing manual data handling (which is both tedious and error-prone), while creating a high level of traceability.

The data volumes resulting from all analyses are considerable: approximately 100,000 samples per year, with an average of seven parameters per sample. Samples are taken in every phase of the process (raw material and in-process samples, through to the finished product delivery samples), and analyses from the R&D department are included. All resulting analytical data is then aggregated in Unilab. If an out-of-specification result is registered during the analysis, Unilab feeds that information back to both the MES (manufacturing execution system) and SAP, the enterprise resource planning (ERP) system.
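
That feedback step amounts to comparing each registered result against its specification and, on failure, raising messages to the MES and to SAP. A minimal sketch, with illustrative limits and placeholder notification functions rather than the site’s real interfaces:

```python
# A minimal sketch of the out-of-specification check described above: each
# registered result is compared against its specification and, on failure,
# messages are raised to the MES and to SAP. Limits and notify functions are
# placeholders, not the site's real interfaces.
SPECS = {"free fatty acids (%)": (0.0, 0.10), "moisture (%)": (0.0, 0.20)}  # illustrative limits

def in_spec(parameter, value):
    low, high = SPECS[parameter]
    return low <= value <= high

def notify_mes(batch, parameter, value):
    print(f"MES: hold batch {batch}, {parameter} = {value}")

def notify_erp(batch, parameter, value):
    print(f"SAP: quality notification for batch {batch} ({parameter} = {value})")

def register_result(batch, parameter, value):
    if not in_spec(parameter, value):
        notify_mes(batch, parameter, value)
        notify_erp(batch, parameter, value)

register_result("B-2013-117", "free fatty acids (%)", 0.14)  # out of spec: both systems notified
```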

When SAP was first introduced, the IT landscape could be largely defined as follows: SAP as the system for master data, and Unilab LIMS as the system for analytical data. SAP also contains customer information, specifications and analysis plans, while reporting and data collation is done via the LIMS. To keep track of the large amount of data the system works with, group keys give access to the right data in the database, based on the choices of the operator (period, location, type of sample, etc.). Reports can be generated in a flexible and ad-hoc manner with the reporting server (based on Business Objects). The automatic time- or event-based generation and dispatch of reports is also something that is scheduled and defined upfront, which means it can be done by the system at night, without further human involvement.
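
The group-key idea can be illustrated as a filtered query: the operator’s choice of period, location and sample type selects the slice of results that goes into a report, and the same query can be run by the scheduler overnight. The data below is invented for the example.

```python
# An illustration of the group-key idea: the operator's choice of period, site
# and sample type selects the right slice of results for a report, and the same
# query can be run overnight by the scheduler. The data is invented for the example.
from datetime import date

results = [
    {"date": date(2013, 3, 1), "site": "plant A", "sample_type": "in-process",   "parameter": "moisture", "value": 0.12},
    {"date": date(2013, 3, 2), "site": "plant A", "sample_type": "raw material", "parameter": "moisture", "value": 0.09},
    {"date": date(2013, 3, 3), "site": "plant B", "sample_type": "in-process",   "parameter": "moisture", "value": 0.15},
]

def report(start, end, site, sample_type):
    """Return the results selected by the operator's (or the scheduler's) group keys."""
    return [r for r in results
            if start <= r["date"] <= end and r["site"] == site and r["sample_type"] == sample_type]

for row in report(date(2013, 3, 1), date(2013, 3, 31), "plant A", "in-process"):
    print(row)
```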

The LIMS is also used for the generation of worksheets. These worksheets provide barcode labels for further processing, and instructions on what analysis needs to be carried out. For the sake of traceability, equipment is connected as much as possible, and everything and everyone involved in the analysis is barcode-scanned (the analyst, the instrument, the production station, the type of analysis, and the sample). This not only offers a multi-faceted view of the analytical data, it also makes the system easy to use and promotes its adoption.

Analytical processes are automated where possible. Some analyses are performed by robotic systems, according to the analysis plans and based on official standard operating procedures. Since no human intervention is required, this work can run at night without the need for a lab technician. Results are automatically transferred to the LIMS, after which people in the lab and the plant can access and drill into the data at any time.

Looking to the future, it will become increasingly important to have an IT landscape in place that can accommodate changes without major customisations that clog the system and stand in the way of future flexibility. With this in mind, Loders is considering the deployment of a full web interface, which will have the added advantage of providing overseas plants with the ability to access the LIMS.
