Food Safety Magazine

SANITARIAN'S FILE | April/May 2010

Sampling, Part I: The Basics

By Robert W. Powitz, Ph.D., MPH

Sampling is part of inspection; sampling and inspection cannot be separated. In the context in which we use it, sampling is a process or technique used in food safety to select a representative part of a food or foods, or of things and conditions having to do with food, for the purpose of determining the parameters or characteristics of the whole. That is a long-winded way of saying that we select a small part of the whole as a sample for inspection or analysis. This may include time and temperature measurements; measurements of cleanliness; evaluations of handwashing; and taking light, water activity and chemical concentration readings—just to touch upon a few of the things at which we routinely look.

Sampling is not a difficult concept until we realize that in producing a set of samples that are representative of the source under investigation (and possibly suitable for subsequent analysis), the axiom “garbage in equals garbage out” takes on a whole new dimension.

Sampling Objectives
To give some clarity to this concept, sampling has several interrelated objectives when it comes to the sanitarian’s role in the field. First, there are numerous regulations that border on shades of gray. Their endpoints are blurry at best, and in many instances, one regulation is dependent upon another for an intended outcome. The act of sampling helps us to define an endpoint. More often than not, it provides a defined target for the operator in an otherwise nebulous world.

Second, sampling validates objective measurements. These measurements are clearly articulated in the regulations as finite numbers, generally given as minimum values. Sampling lets us know how, and to what extent, our target meets these values. There is probably more abuse associated with this objective than with any other. These minimum values have no ranges associated with them, and yet we tend to measure them with instruments that do. Additionally, we know that the more precise a value, the more variables we can identify that affect it.

Third, a good sampling strategy refines subjective observations. People often see the same thing differently and each of us ascribes our own interpretation to subjective parameters. For instance, the phrases “clean to sight and touch,” “adequate” and “as necessary” are quite open-ended. Whenever possible, sampling can give some objectivity to these seemingly meaningless requirements.

While these objectives are all well and good, samples that are not representative of the source are of little use. Likewise, poor collection or detection procedures yield unrepresentative samples and add to the uncertainty of the analytical results. All too often, I see this not only among my peers, but also among third-party providers such as independent auditors and in-house quality control personnel. We are all guilty to some extent, and yet we wonder “why?” when something goes wrong.

To put this in some perspective, I was recently retained as an expert witness for the defendant in an alleged foodborne outbreak. What gave me pause and prompted this article was a simple entry on an official inspection form that stated, “Gravy was at 81 °F.” This was listed as a critical violation. In explaining the entry, the inspector wrote, “Gravy should be kept at 140 °F.” Wow. Taken at face value, the explanation doesn’t seem that far afield, except when a lawsuit hangs in the balance. There was no time given for the sample; the method of sampling and the tool that was used were not mentioned; and the regulation under which it was cited was not listed (as we all know, there are several conditions in the Food Code where gravy can indeed be in compliance at 81 °F). There was no statement of purpose, and no circumstances or ambient conditions were described. But most importantly, there was no hint as to whether the sample was taken with or without bias. There was no mention of whether a probability or non-probability sampling strategy was used (the topic I will cover in part 2 of this series). In short, the objective of producing a sample or set of samples representative of the source under investigation was clearly not met. Even under the best of circumstances, no sanitarian trying to follow this inspection would or could know where to begin to look for the gravy, or even what to look for.

The Sampling Plan
While I don’t want to belabor this point or make it more complex than it already is, suffice it to say that everything we do needs a plan—sampling included. I can’t think of too many instances where we embark on an official program and do not have a clue where we’re going or what we hope to accomplish when we get there. But somehow, sampling sometimes escapes this logic. A sampling plan is a simple roadmap that answers six basic questions:

1. What is the objective of collecting samples? I’ve provided a clue above, but it requires a bit more information. This is not difficult, but we have an obligation to explain what, where, how and why we take a sample. Repeatability and an acceptable outcome are really the goals.

2. What types of samples are needed? This is a bit more complex and may introduce bias if not well articulated. Food quality takes on many forms and food safety has many parameters. A clearly defined set of goals will generally point to the types of samples needed. Even our Food Code has multiple parameters, some of which have more to do with physical and chemical hazards than with microbiological safety.

3. What are the best sampling locations? I think the best example of this is sampling a hotel pan of freshly prepared lasagna to determine adequate holding temperature. From the center of the pan to its outer edge, the temperature can vary by as much as 10 °F. Selecting the sampling site or sites often determines a pass or a failure. Likewise, we all know the “sweet spot” in a walk-in refrigeration unit. Where we take our sample can often determine the measured rate of cooling. The list goes on, but you get my drift.
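As a minimal sketch of how sampling location alone can swing the verdict, consider checking spot temperatures against a 140 °F hot-holding minimum (the figure cited earlier for gravy). The pan readings here are hypothetical illustrations, not measurements from the article:

```python
def holding_check(readings_f, minimum_f=140.0):
    """Return the worst (lowest) spot temperature and whether it meets the minimum."""
    worst = min(readings_f)
    return worst, worst >= minimum_f

# Hypothetical hotel pan of lasagna: the center runs hot, the edges ~10 °F cooler.
pan = {"center": 146.0, "edge_north": 137.5, "edge_south": 138.0}

center_only = holding_check([pan["center"]])  # a single center probe passes
whole_pan = holding_check(pan.values())       # sampling the edges reveals a failure
```

A single probe in the center would declare the pan compliant; adding edge sites reverses the conclusion.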

4. How many samples are required? How many sample sites does it take to declare something unfit? In sampling parlance, how many samples are needed to meet the acceptable quality level with some statistical certainty? Do I shut down an operation based on one sample? Don’t laugh, it’s happened.
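One common back-of-the-envelope answer to the "how many" question, under the assumption of independent samples and a known defect prevalence, is the smallest n at which at least one defective sample is likely to turn up. The formula and the numbers below are illustrative assumptions, not values drawn from any regulation:

```python
import math

def samples_needed(defect_rate, confidence):
    """Smallest n such that the probability of catching at least one
    defective sample, P = 1 - (1 - defect_rate)**n, reaches confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - defect_rate))

# If 10% of sites are out of compliance, one sample finds a problem only
# 10% of the time; it takes 29 samples to reach 95% confidence.
n = samples_needed(0.10, 0.95)
```

The arithmetic makes the point of the paragraph: condemning (or clearing) an operation on a single sample carries far less statistical certainty than it appears to.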

5. What equipment will be used to collect the samples? The specific equipment should be listed, along with something about its calibration or validation. An 81 °F gravy reading can fall anywhere from 79 °F to 83 °F when measured with a bimetal thermometer, but within approximately 80.5 °F to 81.5 °F when a thermocouple is used. This becomes critical when the reading is on the cusp.
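The point about readings on the cusp can be made concrete by treating the instrument tolerance as an uncertainty band around the reading. The ±2 °F and ±0.5 °F tolerances follow the bimetal and thermocouple figures given above; the three-way verdict is my own illustration, not an official procedure:

```python
def verdict(reading_f, minimum_f, tolerance_f):
    """Classify a reading against a minimum value, treating the
    instrument tolerance as an uncertainty band around the reading."""
    if reading_f - tolerance_f >= minimum_f:
        return "pass"
    if reading_f + tolerance_f < minimum_f:
        return "fail"
    return "inconclusive"  # the tolerance band straddles the minimum

# 81 °F gravy fails a 140 °F minimum under either instrument, but a
# reading on the cusp is conclusive only with the tighter thermocouple.
bimetal = verdict(139.0, 140.0, 2.0)       # "inconclusive"
thermocouple = verdict(139.0, 140.0, 0.5)  # "fail"
```

With the bimetal thermometer, a 139 °F reading could plausibly be 141 °F; only the thermocouple's narrower band supports a defensible citation.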

6. How will the results be interpreted? I sincerely believe that there is a consequence for each action. The way we interpret the results of our sampling will, in turn, determine the level of compliance as well as the sustainability of that compliance effort.

Managing the Data
Building on these last points, data analysis has two purposes. First, the analysis of sampling values should be designed to draw a central theme out of the data. In other words, if I have a set of temperature readings, and those readings fail to meet the minimum criteria set forth in the regulations, a good analysis should provide a clue as to whether the solution to the violation is administrative, engineering or a combination of both. I’ll never forget the results of a jail lighting inspection in which three luminaires in a dormitory were cited as defective. Upon re-inspection, those three luminaires were in good operating condition, but the other 27 were in various states of disrepair. Yes, the deficiency was corrected, but the way it was presented in the analysis simply begged for another deficiency. Unfortunately, the judge in this case could not see the logic in all of this.

Second, the data analysis should answer the questions that were posed before any samples were taken. See the section above. There is no sampling scheme that has any validity if there is no sampling plan, and no analysis has any worth if it does not follow that plan.

In the presentation of sampling data, there is a short list that is essential in meeting the intended purpose. It starts with the introduction, which is a short statement of purpose and includes the sampling data; a brief description of the ambient conditions and activities at the time of sampling; the date and time; the persons on site; and any comments necessary to clarify a finding.

In classifying the observations, the data should be sorted and findings presented based on importance and relevance. Importance can be adjudged based on either repeated occurrences, which make the data quantitative, or on one-time occurrences that pose a high risk for a food misadventure, which makes the data qualitative. In any case, once sorted, the data must be traceable to requirements—in our industry, the Food Code.

Presenting the Data
The capstone to a good sampling program is the presentation of evidence, which is based on the data we have just classified. To make this as simple and clear as possible, consider the following language in presenting conclusions based on sampling results:

• A nonconformity is a violation of a requirement that can be major or minor; we need to distinguish between the two.

• A finding is a systemic problem supported by observation and, in some instances, sampling results. It may not be a nonconforming issue but may have some significance in the greater scheme of things.

• An improvement point is an opportunity for making an improvement. It is not a violation and therefore cannot be classified as nonconforming. Nonetheless, it often comes by way of sampling when the results leave little room for error.

• A defect is a minor violation of little consequence. Examples of this fall under the floor-walls-ceilings-dunnage category of defects. They’re worth mentioning, but not dwelling upon.

• An issue of concern is a possible future problem that can easily become a nonconformity item. Every trained sanitarian regularly sees these issues, particularly with refrigeration equipment. A well-designed sampling program can often predict failures, pointing not only to the “if” but also to the “when.”

• Finally, sampling data can point to a positive practice or noteworthy achievement, highlighting some aspect of the system or process that is done very well and very effectively. This type of evidence presentation is a positive reinforcement and often leads to improvement in other areas of the operation.

So, there you have the basics of sampling. Hopefully, the issues discussed here will get you to think along the lines of developing sampling protocols without the ever-present ambiguity. In my next column, I will try to tackle the concepts of probability and non-probability sampling strategies and a simple way to structure a sampling program in the field.

Forensic sanitarian Robert W. Powitz, Ph.D., MPH, RS, CFSP, is principal consultant and technical director of Old Saybrook, CT-based R.W. Powitz & Associates, a professional corporation of forensic sanitarians who specialize in environmental and public health litigation support services to law firms, insurance companies, governmental agencies and industry. For more than 12 years, Dr. Powitz was the director of environmental health and safety for Wayne State University in Detroit, MI. He also served as director of biological safety and environment for the U.S. Department of Agriculture at the Plum Island Animal Disease Center at Greenport, NY. He is the first to hold the title of Diplomate Laureate in the American Academy of Sanitarians, and is a Diplomate in the American Academy of Certified Consultants and Experts and with the American Board of Forensic Engineering and Technology.

Dr. Powitz welcomes reader questions and queries for discussion in upcoming columns. Feedback or suggestions for topics you’d like to see covered can be sent to him directly at Powitz@sanitarian.com or through his Web site at www.sanitarian.com.
