Historically, scientists relied on reference methods that had been extensively validated in interlaboratory trials to demonstrate efficacy and robustness. This approach is not always practical or efficient, and it can lead to the false assumption that the method requires no further verification in the implementing laboratory. Development and validation of reference methods is time consuming, and the technology employed is often out of date by the time the method is published. As a consequence, reference methods are seldom used routinely, and newer, more efficient methods are often favored. This raises the question of the fitness for purpose of these ‘replacement methods.’ There is a widely held view that, rather than prescribing analytical methods, there may be more value in defining globally accepted performance criteria that must be demonstrated through robust validation to prove a method is “fit for purpose,” which is the focus of this article.

Sources of Guidance on Method Validation
The primary objective of any analytical measurement is to generate consistently robust and accurate data. The fundamental expectations of validation are to define the analytical scope of the new method and its performance standards, and to provide confidence in the results via statistical evaluation. The validation process is a requirement of the majority of regulations, accreditation schemes and quality standards that affect analytical laboratories. As a guideline, analytical methods require validation, verification or revalidation in the following circumstances: (a) prior to initial use in routine testing, (b) upon transfer to another laboratory and (c) whenever the parameters for which the method has been validated change outside of the original scope, for example, when a different matrix type is introduced.

There are numerous sources providing guidance on validation procedures and performance criteria, including papers in the scientific literature; guidance issued by scientific bodies such as the International Union of Pure and Applied Chemistry (IUPAC) and Eurachem; guidance from international organizations involved in the establishment of harmonized standards, such as AOAC International, the Codex Alimentarius Commission (CAC) and the European Committee for Standardization; regulatory organizations such as the European Commission (EC) and the U.S. Food and Drug Administration (FDA); and independent expert committees such as the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives (JECFA). The extent of the validation guidance provided by these organizations varies; however, they share a common objective, which is to achieve “fit for purpose” analytical results. Robust, validated analytical methods have a pivotal role to play in maintaining both the quality and safety of foodstuffs entering the supply chain and, ultimately, in safeguarding the consumer.

Method Performance Characteristics
The intended use of the method must be clearly defined prior to development, as this influences both the choice of technology employed and the design of the validation study, for example, whether the method will be used for qualitative or semiquantitative screening, quantitative analysis and/or confirmatory analysis.

The performance characteristics to be investigated are listed below and represent current best practice: analyte stability; ruggedness or robustness testing; linearity and calibration curve; analytical range; sensitivity; specificity and selectivity; accuracy and recovery; precision, including repeatability and reproducibility; measurement uncertainty (MU); sample stability; method comparisons; limit of detection (LoD); and limit of quantification (LoQ).
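As an illustration of how two of these characteristics are often estimated in practice, the short Python sketch below calculates LoD and LoQ from a linear calibration curve using the widely cited 3.3σ/slope and 10σ/slope conventions. It is a minimal sketch under those assumptions, not the procedure mandated by any particular guidance document; the function name and calibration data are hypothetical.

```python
# Illustrative sketch: estimating LoD and LoQ from a linear calibration curve
# using the common 3.3*sigma/slope and 10*sigma/slope conventions.
# Names and data are hypothetical; real studies follow the applicable guidance.

import numpy as np

def lod_loq_from_calibration(concentrations, responses):
    """Fit a straight line and estimate LoD/LoQ from the residual standard deviation."""
    conc = np.asarray(concentrations, dtype=float)
    resp = np.asarray(responses, dtype=float)

    # Least-squares fit: response = slope * concentration + intercept
    slope, intercept = np.polyfit(conc, resp, 1)
    residuals = resp - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # residual SD, two fitted parameters

    lod = 3.3 * sigma / slope   # limit of detection
    loq = 10.0 * sigma / slope  # limit of quantification
    return slope, lod, loq

# Hypothetical calibration data: concentration in ug/kg, instrument response in arbitrary units
conc = [0.1, 0.25, 0.5, 1.0, 2.5, 5.0]
resp = [210, 520, 1015, 2040, 5110, 10190]

slope, lod, loq = lod_loq_from_calibration(conc, resp)
print(f"slope = {slope:.1f}, LoD = {lod:.3f} ug/kg, LoQ = {loq:.3f} ug/kg")
```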

The majority of these terms have specific definitions, which can be found in international guidance documents, for example, from IUPAC.[1] Whenever available, an authoritative definition issued by a recognized independent scientific or regulatory authority should be referenced. It is important to be aware that regulatory and application-specific differences exist which affect the manner in which the validation study is performed. Within the EU, the performance requirements for veterinary residue analysis are defined within Commission Decision 2002/657/EC[2] and do not include LoD, LoQ or MU but instead require the determination of other statistical indicators of reliability, namely the decision limit (CCα) and the detection capability (CCβ). Recently, efforts have focused on harmonizing analytical terminology and procedures, in particular by IUPAC’s working parties and the CAC.[3]
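For readers unfamiliar with these terms, the sketch below illustrates one commonly described construction for a substance with an established permitted limit: the decision limit (CCα) is taken as the permitted limit plus 1.64 times the standard deviation of within-laboratory reproducibility at that limit, and the detection capability (CCβ) as CCα plus a further 1.64 standard deviations. The numbers are hypothetical, and the calculation is a simplified illustration of the concept rather than a substitute for the procedures set out in the Decision.

```python
# Illustrative sketch (not the official procedure): decision limit (CCalpha) and
# detection capability (CCbeta) for a substance with an established permitted limit,
# using the commonly cited 1.64*SD construction at the 5% alpha and beta error levels.

permitted_limit = 100.0   # hypothetical permitted limit, ug/kg
sd_at_limit = 8.0         # hypothetical within-lab reproducibility SD at the limit, ug/kg

cc_alpha = permitted_limit + 1.64 * sd_at_limit   # decision limit
cc_beta = cc_alpha + 1.64 * sd_at_limit           # detection capability

print(f"CCalpha = {cc_alpha:.1f} ug/kg, CCbeta = {cc_beta:.1f} ug/kg")
```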

Identification of the “Fitness for Purpose” of an Analytical Method
The regulatory environment dealing with food safety analysis is primarily governed by the analysis and management of risk. This is evident from the requirement for validated analytical methods in toxicology studies to generate data on trace quantities of chemicals resulting from their use in food production or from their presence as contaminants, so that decisions on safe or acceptable limits may be made. Validated methods are also required to identify suitable marker residues and target tissues for sampling programs, to understand the mechanisms involved in the presence of residues and contaminants in foods and to establish permissible limits where possible. Validated methods of analysis are further required to monitor compliance of food products with these limits for national survey samples and for dietary intake studies. The data generated in all of these types of studies are used to set and enforce food safety standards, and their reliability hinges on that of the analytical methods used and, ultimately, on the analytical quality assurance and validation processes.

Inaccurate data can have wide-reaching consequences, which can be measured in a variety of terms, including economic, legal and consumer-safety impacts. Regular testing is performed on imported foodstuffs entering the EU at its borders, and consignments found to contain violative residue concentrations may be condemned and destroyed at the expense of the producer. On occasion, it is necessary to defend analytical results under nonstatutory law, and there are many examples where foreign food manufacturers were unaware of the presence of unauthorized residues in their produce because inappropriate analytical methods were used prior to export. One such example is a legal case that ensued as a result of the confirmation of chloramphenicol in imported poultry muscle, at concentrations exceeding the EU reference point for action (RPA) of 0.3 μg/kg, during border inspection point testing using liquid chromatography (LC)-tandem mass spectrometry (MS/MS) analysis. The food manufacturer was confident that the chicken meat was compliant with EU standards, as the company ensured that every consignment was checked using an enzyme-linked immunosorbent assay (ELISA). The “fitness for purpose” of the analytical methods was examined during the course of the legal case, and it was revealed that the LoD of the ELISA test used was 5 mg/kg (5,000 μg/kg), approximately 16,000 times in excess of the RPA of 0.3 μg/kg. Other cases exist in which official reference methods have delivered false noncompliant results and food manufacturers have successfully demonstrated the compliance of their produce via LC-MS/MS analysis operated in selective multiple reaction monitoring mode. By using reference standards and ion ratio matching, they showed that the response reported in the screening assay was not the target analyte but an unrelated interference giving rise to a false signal.

There are some fundamental issues to consider. It is essential to know precisely what regulatory and performance requirements must be achieved to ensure the method is “fit for purpose.” This includes sufficient information on the compound of interest, including its metabolic or degradation pathways, the matrices that will be included in the testing program and the concentration range to be targeted in accordance with any established regulatory limits. The performance criteria applicable to the technology used must also be considered. For example, guidance on the fitness for purpose of MS methods from the American Society for Mass Spectrometry[4] includes certain basic principles found in most documents describing confirmatory analysis using MS: the use of reference standards analyzed contemporaneously with unknowns, three or more diagnostic ions (except for exact mass measurements) and the use of relative abundance matching tolerances for selected ion monitoring.
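To make the last of these principles concrete, the sketch below compares the relative ion abundances measured in an unknown sample against those of a contemporaneously analyzed reference standard, using tolerance bands similar to those tabulated for LC-MS/MS in Commission Decision 2002/657/EC. The transitions, intensities and function names are hypothetical, and the acceptance bands in a real method should be taken from the applicable guidance rather than from this example.

```python
# Illustrative sketch: ion-ratio (relative abundance) matching between a reference
# standard and an unknown sample. Tolerance bands loosely follow those tabulated for
# LC-MS/MS in Commission Decision 2002/657/EC; transitions and data are hypothetical.

def max_tolerance(relative_intensity_pct):
    """Maximum permitted relative deviation for a given relative ion intensity (%)."""
    if relative_intensity_pct > 50:
        return 0.20
    elif relative_intensity_pct > 20:
        return 0.25
    elif relative_intensity_pct > 10:
        return 0.30
    else:
        return 0.50

def ion_ratios_match(reference_intensities, sample_intensities):
    """Compare relative abundances (normalized to the most intense ion) ion by ion."""
    ref_base = max(reference_intensities.values())
    sample_base = max(sample_intensities.values())
    for ion, ref_counts in reference_intensities.items():
        ref_rel = 100.0 * ref_counts / ref_base
        sample_rel = 100.0 * sample_intensities[ion] / sample_base
        tolerance = max_tolerance(ref_rel)
        if abs(sample_rel - ref_rel) > tolerance * ref_rel:
            return False
    return True

# Hypothetical MRM transition intensities (arbitrary counts) for a reference standard
# and a suspect sample; in practice the transitions are those of the validated method.
reference = {"m/z 321>152": 100000, "m/z 321>194": 62000, "m/z 321>257": 18000}
sample    = {"m/z 321>152": 85000,  "m/z 321>194": 50000, "m/z 321>257": 16500}

print("Ion ratios consistent with reference:", ion_ratios_match(reference, sample))
```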

In summary, the establishment of a comprehensive, globally harmonized (and enforced) set of performance criteria and validation parameters is critical to ensure that analytical methods are appropriately validated and that their fitness for purpose is consistently demonstrated in the hands of each user. Such a harmonized system would serve to build trust in the efficacy of production control practices and, ultimately, enhance the protection of consumers against unsafe food while protecting producers from the financial loss associated with inappropriate testing and potential food wastage.

Sara Stead, Ph.D., is senior strategic market development manager – food and environmental at Waters Corporation.

References
1. IUPAC. Compendium of Analytical Nomenclature. old.iupac.org/publications/analytical_compendium/.
2. Commission Decision 2002/657/EC. 2002. Implementing Council Directive 96/23/EC concerning the performance of analytical methods and the interpretation of results. Off J Eur Commun L221:8.
3. Codex Alimentarius Commission. Guidelines on Analytical Terminology (CAC/GL 72-2009). www.codexalimentarius.net/download/standards/11357/cxg_072e.pdf.
4. Bethem, R. et al. 2003. J Am Soc Mass Spectrom 14:528–541.