My entire career is dedicated to contamination control, whether it involves food, water, air, institutional housekeeping, laundry or manufactured goods. I have learned that the basic scientific principles of contamination control remain the same from system to system; only the vocabulary differs from industry to industry. The basic objective of contamination control is to keep the contaminants out: for those that cannot be kept out, minimize their entry; for those whose entry cannot be minimized, keep them from dispersing or multiplying; and for those whose movement and propagation can’t be prevented, get rid of them.

At each point along this continuum, we have to know the scope of the problem, along with how, where, when and why to introduce any cost-effective controls. All of this begins with observation—as in inspection, auditing, enumeration—and sampling. This, in turn, brings me to this column’s topic: selecting appropriate sampling strategies that will ultimately guide a course of action, whether in the area of food production or, as is the case with many practitioners in my profession, regulatory control and oversight.

Strategy Selection
My introduction to selecting sampling strategies came while I was working at Wayne State University. I decided to broaden my knowledge of field sampling by learning as much as I could about how it was done. The only resources that were readily available in southeastern Michigan were offered by the automotive and pharmaceutical industries. These industries regularly sponsored a series of tutorials on monitoring and process validation, which I was fortunate enough to attend. My epiphany came while listening to a lecture sponsored by The American Supplier Institute Inc. on quality engineering, specifically, designing quality into products and processes. The author of the system presented in this lecture was Dr. Genichi Taguchi of the Asian Productivity Organization. The unique concept he developed for examining tolerance and experimental design involved a series of orthogonal arrays, a no-nonsense approach. While this method of sampling is better suited to mass production of food products than to regulatory compliance, it did teach me about selecting specification tolerances, the importance of eliminating bias and the variables to consider in setting up a sampling scheme. Subsequently, this information translated quite well into my world of food safety inspections and audits.

Armed with this conceptual knowledge, I soon realized that my approach to sampling as part of regulatory food safety inspections was not exactly fair or correct in many instances. I, like most of my colleagues, simply stuck a thermometer into some food and declared it safe or unsafe without realizing the bias we unknowingly and unwittingly introduced into the sampling process. All too often, a critical violation was either erroneously cited or escaped detection entirely because of inherent failures in the method of sampling, rather than any error in the instrumentation used. Compounding this vital gap in regulatory practice, the Food Code specifies only outcomes, not how to get there. It does not guide the inspector through a methodology for determining either sample size or where the samples should be taken, to name just a few critical parameters in the larger scheme of sampling science.

Strategy Types
So, after that long, drawn-out introduction, I’ll try to cut to the chase. All field sampling strategies fall into two distinct categories: probability and non-probability. Probability sampling is easy to interpret; errors can actually be calculated, and bias is therefore minimized. On the downside, it requires a more elaborate sampling plan as well as more samples, adding time and, ultimately, cost to the inspection process. Non-probability sampling is non-random; it introduces bias (not in favor of our clients), making the sampling results more difficult to interpret, and, more importantly, its sampling errors cannot be calculated. “Garbage in, garbage out” takes on a whole new meaning when applied to non-probability sampling. On the upside, it’s fast, cheap and requires fewer samples. Both strategies—probability and non-probability—have merit if properly applied and documented.

Probability sampling should be favored by the regulatory community whenever possible and practical. Unfortunately, in most cases, it isn’t, much to the detriment of food safety. Probability sampling presents an unbiased and objective view of the conditions or products sampled. There are two basic strategies in probability sampling: systematic and random; both are ideally suited to the art and science of food safety.

Systematic sampling relies on the sampler’s experience with the product sampled and any available information about the sample set. This may include knowledge about the ingredients, assembly and preparation, cooking method, display and service, among other parameters. Basically, systematic sampling is used to find a gradient, for example, contaminants introduced in a production line or temperature variations from changes in cooking or chilling. Because it relies on a gradient, there is a need to develop a consistent grid or pattern for each set sampled. Using a very simplistic example, to measure the temperature of a hotel pan of food (such as lasagna) that is offered at a buffet table, a thermometer is inserted into several points to a given depth along the diagonal of the pan. This process measures the temperature gradient; each point is recorded and an average temperature is calculated from the collected data. While some bias is introduced and numerous samples are required, it follows that the more samples taken, the less bias is introduced into the interpretation of the results. In short, systematic sampling produces a fair representation of the temperature of the product sampled.
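The diagonal-grid reading described above can be sketched in a few lines of code. This is an illustration only: the pan dimensions, probe count and temperature readings are hypothetical values, not measurements from an actual inspection.

```python
# Sketch of a systematic temperature check along a hotel pan's diagonal:
# probe at evenly spaced points, record each reading, then summarize.
# All dimensions and readings below are hypothetical illustration values.

def diagonal_points(length_in, width_in, n_points):
    """Evenly spaced (x, y) probe positions along the pan's diagonal."""
    step = 1 / (n_points + 1)            # spacing fraction; skips the corners
    return [(length_in * step * i, width_in * step * i)
            for i in range(1, n_points + 1)]

def summarize(readings_f):
    """Average temperature and spread (max minus min) of the readings."""
    mean = sum(readings_f) / len(readings_f)
    spread = max(readings_f) - min(readings_f)
    return mean, spread

points = diagonal_points(20, 12, 5)              # a 20 x 12 in. hotel pan
readings = [141.0, 138.5, 136.0, 139.0, 142.5]   # deg F at each probe point
mean, spread = summarize(readings)
print(f"{len(points)} probe points, mean {mean:.1f} F, spread {spread:.1f} F")
```

Recording the spread alongside the average is useful in practice: a large spread signals exactly the kind of gradient (a cold spot in the lasagna, say) that systematic sampling is designed to find.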

The ultimate sampling strategy, however, is random sampling. Random sampling depends on the theory of random chance probabilities to choose the most representative sample(s) from the lot in question. It introduces the least bias into the sampling strategy but requires the greatest number of samples, depending on the level of confidence needed. In my practice, random sampling has definite application in evaluating purported allergen contaminants in an assemblage of product, such as peanut parts in toasted sesame seeds. While 100% sampling would be ideal, if it were used, there would be no product left over. Through random sampling, however, a probability can be established that is fair and equitable to all parties. To accomplish this, there are two American National Standards to choose from: ANSI/ASQ Z1.4 and Z1.9. All that is needed in addition to these standards is a table of random numbers, easily downloaded from the Internet.
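The printed table of random numbers mentioned above can just as easily be replaced by a pseudorandom number generator. The sketch below draws a simple random sample of unit numbers from a lot; the lot size and sample size are hypothetical, since in a real inspection the sample size would come from the Z1.4 or Z1.9 tables.

```python
import random

# Sketch of drawing a simple random sample from a numbered lot, standing in
# for the traditional printed random-number table. Lot size and sample size
# are hypothetical; a real plan would take n from the Z1.4/Z1.9 tables.

def random_sample(lot_size, n, seed=None):
    """Return n distinct unit numbers (1..lot_size), each equally likely."""
    rng = random.Random(seed)    # a seed is used only to make the draw repeatable
    return sorted(rng.sample(range(1, lot_size + 1), n))

units = random_sample(lot_size=500, n=13, seed=42)
print(units)    # the 13 container numbers to pull and examine
```

Documenting the seed (or the page and column of the printed table) is worth the trouble: it lets a second party reproduce exactly which units were selected, which supports the fairness argument made above.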

Sampling Standards
Z1.4 is a standard containing sampling procedures and tables for inspection by attributes and is best described by quoting its abstract: “...an acceptance sampling system to be used with switching rules on a continuing stream of lots for AQL (Acceptable Quality Level) specified. It provides tightened, normal and reduced plans to be applied for attributes inspection for percent nonconforming or nonconformities per 100 units.” Z1.4 is an updated version of the old MIL-STD-105E and most closely represents the “traditional” random sampling strategy we all learned in college.
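Once the sample size (n), acceptance number (Ac) and rejection number (Re) have been read from the standard’s tables, applying a Z1.4-style single sampling plan reduces to a simple comparison, sketched below. The plan values shown (n = 80, Ac = 2, Re = 3) are illustrative placeholders, not reproduced from Z1.4 itself.

```python
# Minimal sketch of applying an attributes acceptance plan once n, Ac and Re
# have been taken from the standard's tables. The plan values below are
# hypothetical placeholders, not values reproduced from ANSI/ASQ Z1.4.

def attributes_decision(defects_found, ac, re):
    """Accept the lot if defects <= Ac; reject if defects >= Re."""
    if defects_found <= ac:
        return "accept"
    if defects_found >= re:
        return "reject"
    return "indeterminate"   # unreachable in single plans where Re == Ac + 1

plan = {"n": 80, "Ac": 2, "Re": 3}   # hypothetical single sampling plan
defects = 1                          # nonconforming units found in the sample
print(attributes_decision(defects, plan["Ac"], plan["Re"]))
```

The “switching rules” quoted in the abstract sit on top of this decision: runs of accepted lots move inspection toward reduced plans, while rejections move it toward tightened ones.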

Its companion is ANSI/ASQ Z1.9. This standard contains sampling procedures and tables for inspection by variables for percent nonconforming. Its definition is also best described by its abstract: “…an acceptance sampling system to be used on a continuing stream of lots for AQL specified. It (also) provides tightened, normal and reduced plans to be used on measurements which are normally distributed. Variation may be measured by sample standard deviation, sample range or known standard deviation. It is applicable only when the normality of the measurements is assured.” This standard is therefore ideally suited to sampling food on a production line or to conducting an audit where the parameters under investigation are prescribed.
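The variables logic Z1.9 formalizes can be sketched with the k-method against an upper specification limit U: compute the sample mean and standard deviation and accept when (U - mean) / s >= k. The measurements and the acceptability constant k below are hypothetical; a real k comes from the standard’s tables, and, as the abstract notes, the method assumes the measurements are normally distributed.

```python
import statistics

# Sketch of a variables acceptance check (k-method, upper limit form):
# accept when (U - mean) / s >= k. Measurements and k are hypothetical;
# a real k would be read from the Z1.9 tables for the chosen AQL and n.

def variables_decision(measurements, upper_limit, k):
    mean = statistics.mean(measurements)
    s = statistics.stdev(measurements)     # sample standard deviation
    return "accept" if (upper_limit - mean) / s >= k else "reject"

temps_f = [38.2, 39.0, 37.5, 38.8, 39.4, 38.1, 37.9]  # cooler temps, deg F
print(variables_decision(temps_f, upper_limit=41.0, k=1.5))
```

Notice how the same seven measurements that an attributes plan would score as seven passes can still be rejected under a variables plan if they crowd too close to the limit; that sensitivity is why variables plans achieve the same protection with fewer samples.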

Please don’t be intimidated by the wording. These two ANSI standard sampling strategies are quite easy to learn, apply and interpret. With a little practice, one can become quite proficient in their use. The more you use them, the more familiar you will become with their benefits and limitations. I have never found anything better for field use than these two standards, particularly when the outcome must support arguments in a litigious scenario.

Non-probability Sampling Approaches
Non-probability sampling strategies represent the bulk of regulatory inspection sampling protocols. Non-probability sampling is characterized by three distinct approaches: convenience, judgmental and one affectionately called “snowball sampling.” In all fairness, all three must be justified and detailed on the inspection report if employed as a regulatory tool. I listed many of these variables in my previous column.

Convenience sampling is the most frequently used by the sanitarian. It is the most intuitive but also the least repeatable. By its very definition, it is expeditious and opportunistic: it takes advantage of convenient locations or situations and requires the fewest samples. When used correctly, it produces an inexpensive approximation of the conditions under test. It is, therefore, a good screening tool or an exploratory strategy for determining which probability sampling strategy is best suited to answer the questions posed by the inspection or audit. However, by its very nature, it introduces bias into the outcome and should be justified and detailed in writing if used as a definitive regulatory tool. I can’t overemphasize this last point.

Judgmental sampling is similar to systematic probability sampling in that it relies upon experience and available information. However, this is where the similarity ends. It is best applied when the source of contamination or other deviation from standards is known. Although it is a bit more formal than convenience sampling, it too requires few samples to reach an outcome or conclusion. However, being subjective by nature, it introduces bias into the sampling method. The data presented by this sampling strategy should always be prefaced by a statement describing the situation and detailing the samples, including any immediate environmental conditions. In doing so, you validate your experience and elucidate the available information so that any corrective measures taken as a result of the data will be targeted to the problem.

“Snowball” sampling is the most fun. However, it is serious business and is effectively used when a characteristic is rare. In other words, snowball sampling is the ultimate screening tool. It relies on referrals and outliers to generate data that lead to the next sample set or site. It is best exemplified by the short nursery rhyme “The Siphonaptera:”

Big fleas have little fleas,
Upon their backs to bite ’em,
And little fleas have lesser fleas,
And so, ad infinitum.

This sampling strategy introduces the most bias and therefore should never be relied upon as an indicator of compliance. The beauty of snowball sampling emerges when it becomes a problem-solving tool. It allows samples to justify avenues of exploration in either direction and, more importantly, it can supply the question that more formal sampling methods are then used to answer. In reporting the outcomes of snowball sampling, it is most important to provide a written statement on the chain of events, describing in detail each sample that led to any conclusion reached. The reader needs to understand the rationale for choosing this sampling strategy.
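The referral chain that drives snowball sampling can be sketched as a simple breadth-first walk: every site with a positive finding “refers” the investigator to related sites (same supplier, same lot, same crew), which are sampled in turn until no new leads appear. The sites, findings and referral links below are entirely hypothetical.

```python
from collections import deque

# Sketch of snowball sampling as a referral chain: positive findings
# generate the next sample sites, which are visited until leads run out.
# The sites, positives and referral links below are hypothetical.

referrals = {                 # site -> related sites it points to
    "buffet line": ["walk-in cooler", "prep table"],
    "walk-in cooler": ["delivery truck"],
    "prep table": [],
    "delivery truck": [],
}
positive = {"buffet line", "walk-in cooler"}   # sites where the finding occurs

def snowball(start):
    """Visit the start site, then follow referrals from every positive site."""
    visited, queue = [], deque([start])
    while queue:
        site = queue.popleft()
        if site in visited:
            continue
        visited.append(site)
        if site in positive:                 # only positives generate new leads
            queue.extend(referrals.get(site, []))
    return visited

print(snowball("buffet line"))
```

The ordered `visited` list doubles as the written chain-of-events record recommended above: it documents which sample led to which, and why each site was visited.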

So there you have it. For those of you who stuck with me to the bitter end of this column, congratulations. Knowing how to provide your clients with the most objective reasoning through the sampling process will make you a better sanitarian.

Forensic sanitarian Robert W. Powitz, Ph.D., MPH, RS, CFSP, is principal consultant and technical director of Old Saybrook, CT-based R.W. Powitz & Associates, a professional corporation of forensic sanitarians who specialize in environmental and public health litigation support services to law firms, insurance companies, governmental agencies and industry. For more than 12 years, he was the director of environmental health and safety for Wayne State University in Detroit, MI, where he continues to hold the academic rank of adjunct professor in the College of Engineering. He also served as director of biological safety and environment for the U.S. Department of Agriculture at the Plum Island Animal Disease Center at Greenport, NY. Among his honors, Powitz has received the NSF/NEHA Walter F. Snyder Award for achievement in attaining environmental quality, and the AAS Davis Calvin Wagner Award for excellence as a sanitarian and advancing public health practice. He is the first to hold the title of Diplomate Laureate in the American Academy of Sanitarians, and is a Diplomate in the American Academy of Certified Consultants and Experts and with the American Board of Forensic Engineering and Technology.

Dr. Powitz welcomes reader questions for discussion in upcoming columns. Feedback or suggestions for topics you’d like to see covered can be sent to him directly at Powitz@sanitarian.com or through his Web site at www.sanitarian.com.