In August, Food Safety Magazine Editorial Director Julie Larson Bricher had the opportunity to speak with several recognized experts from industry, research and government to discuss hot topics in food microbiology. This roundtable-in-print article is based on part of that discussion, which was co-moderated with Rich St. Clair, Industrial Market Manager with Remel, Inc. and took place at the International Association for Food Protection annual meeting in Calgary, Canada.

In Part 1, published in the October/November 2006 issue of Food Safety Magazine, the panelists discuss the top microorganisms of concern to the food industry and public health risk assessment issues, and provide some insights into significant advances in test methods and tools available to the food supply chain today.

In Part 2, the discussion continues on risk assessment with an emphasis on challenges associated with the adoption and application of food microbiology methods, and how the food industry can strategize to meet these challenges and realize the benefits of traditional, new and hybrid techniques.

The Panelists:
J. Stan Bailey, Ph.D., is a research microbiologist with the Russell Research Center, Agricultural Research Service, U.S. Department of Agriculture, in Athens, GA, where he is responsible for research directed toward controlling and reducing contamination of poultry meat products by foodborne pathogens such as Salmonella and Listeria. Bailey has authored or coauthored more than 400 scientific publications in the area of food microbiology, concentrating on controlling Salmonella in poultry production and processing, Salmonella methodology, Listeria methodology, and rapid methods of identification. He is currently vice president of the executive board of the International Association for Food Protection.

Mark Carter is General Manager of Research with the Silliker Inc. Food Science Center in South Holland, IL. He is a registered clinical and public health microbiologist with the American Academy of Microbiologists and chair-elect of the American Society for Microbiology’s Food Microbiology Division. Prior to joining Silliker in 2005, he served as a Section Manager for Microbiology and Food Safety for Kraft Foods North America where he was responsible for the Dairy, Meals, Meat, Food Service and Enhancer product sectors.


Martin Wiedmann, Ph.D., Associate Professor, Department of Food Science, Cornell University, is a world-recognized scholar, researcher and expert on critical food safety issues affecting the dairy/animal industry. Wiedmann addresses farm-to-fork food safety issues with a diverse educational background in animal science, food science, and veterinary medicine. His work with Listeria monocytogenes is internationally recognized and has significantly contributed to improving our understanding of the transmission of this organism along the food chain.

Margaret Hardin, Ph.D., is Director of Quality Assurance and Food Safety with Boar’s Head Provisions Co., the nationally known ready-to-eat meat and cheese processor serving the delicatessen and retail markets. Previously, Hardin held positions as Director of Food Safety at Smithfield Packing Co., Sara Lee Foods and the National Pork Producers Council, and as a research scientist and HACCP instructor with the National Food Processors Association in Washington, DC. Her efforts have been directed in areas of food safety, research, HACCP, and sanitation to protect the public health and assure the microbiological quality and safety of food.


Joseph Odumeru, Ph.D., is the Laboratory Director, Regulatory Services, Laboratory Services Division, and Adjunct Professor, Department of Food Science, University of Guelph. He is responsible for food quality and safety testing services provided by the division. His research interests include development of rapid methods for the detection, enumeration and identification of microorganisms in food, water and environmental samples, molecular methods for tracking microbial contaminants in foods, automated methods for microbial identification, shelf life studies of foods and predictive microbiology. His publications include 65 research and review papers in peer-reviewed journals and 70 abstracts and presentations at scientific meetings.

Julian Cox, Ph.D., is Associate Professor, Food Microbiology in Food Science and Technology at the School of Chemical Sciences and Engineering, University of New South Wales, Sydney, Australia. He has taught for over a decade in the areas of foodborne pathogens, spoilage, quality assurance, rapid microbiological methods and communication skills. His research activities revolve around a range of foodborne pathogens, particularly Salmonella and Bacillus. He provides advice on food safety through organizations such as Biosecurity Australia and input into the development of Australian standards for microbiological testing of foods. He also sits on the editorial boards of Letters in Applied Microbiology, the Journal of Applied Microbiology and the International Journal of Food Microbiology.


Food Safety Magazine: Earlier, our conversation moved into the realm of risk assessment as it pertains to public health. What are your thoughts on how risk assessment informs or affects method development?

Julian Cox: As I listen to what people are talking about here, I hear the concern as to how far should we take things? This is a really interesting point in terms of the number of susceptible individuals among the population and the organisms that put their health at risk. Today, we spend more and more time focusing on Listeria. But you know, risk has an impact. We should ask who does it hit, and at what percentage? Are we spending far too much effort in terms of human resources, money, etc., with regard to method development to chase organisms that really do not have widespread impact in terms of the general population?

You know, it is sort of the notoriety thing: the bug that gets the most press (i.e., public awareness) gets the most dollars thrown at it in research monies, or regulatory or industry implementation costs. When the headline press reports “Pregnant Woman Aborts Because of Listeria,” we all agree that is terrible. But we need to look at the economic effect of foodborne disease and ask which organisms are going to continue to be important, or will become more important, within that analysis. E. sakazakii is another example of what I am talking about because it affects a minute percentage of the population.

Martin Wiedmann: Yes, that’s why it’s an opportunistic group.

Julian Cox: Opportunistic, and also who is impacted by that group?

Martin Wiedmann: If you look at impact, that’s a huge impact.

Julian Cox: Yes, exactly. But on a global basis, is it really that huge?

Martin Wiedmann: But the impact is very important, because when you are the one affected it matters. In other words, I would argue that it is not a global issue, but a local one. When it affects my normal life—my child, my family—it is a local issue. Although there have been a very small number of cases associated with the contamination of powdered infant formula by E. sakazakii in the last several decades, this bacterium poses serious harmful effects to infants. That’s a real concern to parents with children under six months of age.

Yes, a pathogen’s global impact on populations is important, but again it depends on the food and where it comes from.

Margaret Hardin: Bottom line is that no one wants to see a little child seriously injured or harmed by a pathogen in their food.

Mark Carter: And we do want to consider how big of an economic impact the pathogen has. Does it impact globally? Going back to what I said earlier, I don’t think we have good surveillance to be able to verify statistically the true impact of any given bug. I mean we can say E. sakazakii only affects this group. But if you are only looking for it in one group or you don’t have good enough surveillance to know that’s what could have been the cause of an illness within other parts of the population, or within the target population, then you can only gauge it on what you know. And remember, we are nowhere near having the type of global method harmonization to be able to say that we have the best method to detect this going on everywhere.

Julian Cox: Yes, I think that we can say that even about an organism as traditional as Bacillus. We say it anecdotally, based on facts such as what people have eaten, the symptoms, and how acute the intoxication is. In many cases, however, we are still making assumptions. We have listed a number of pathogens of concern over the years, and in many cases, that list, while there is some good evidence, is certainly incomplete.

Joseph Odumeru: Before we move on from this question I’d like to mention a few things that have to do with risk assessment. An important aspect of food safety that is gaining a lot of attention is pathogen risk assessment for foods. If you do risk assessment, it is still important to look at population exposure to pathogens. And one thing that we haven’t done very well in food microbiology is to develop methods for pathogen enumeration that can help us do these risk assessments. Currently, most pathogen test results are expressed as presence/absence and do not quantify pathogen levels. Although we have zero tolerance in place for some pathogens, what about organisms such as Listeria, especially in cases where zero tolerance is not possible? So again, if you don’t test your product you get easy compliance with zero tolerance, but if you do test it then you know that it can be found in the product.

In our lab, we are beginning to see a lot of requests from regulatory agencies for pathogen enumeration that will provide information that will help them assess the risk of certain pathogens in various food matrices—essentially, what level of pathogens are we are exposed to given certain parameters. I think this is an area that requires further method development to quantify pathogen levels. If you look at the scientific literature, you’ll find a lot of useful methods for pathogen detection but a very limited number of methods for pathogen enumeration, such as the most probable number (MPN) method. Right now, there are not many methods available for measuring pathogen levels in foods for use in risk assessment.
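The MPN approach Odumeru mentions infers a density from a pattern of positive and negative tubes across serial dilutions. As a rough illustration only (a simplified maximum-likelihood sketch, not a validated implementation; the function name, grid bounds and example counts are hypothetical), the estimate behind the published MPN tables can be computed like this:

```python
import math

def mpn_estimate(volumes_ml, n_tubes, n_positive):
    """Most probable number (organisms per mL) by maximum likelihood.

    Each list entry describes one dilution level: volume inoculated per
    tube (mL), number of tubes inoculated, and number showing growth.
    """
    def log_likelihood(lam):
        # Under a Poisson inoculum, P(tube positive) = 1 - exp(-lam * v)
        ll = 0.0
        for v, n, p in zip(volumes_ml, n_tubes, n_positive):
            if p > 0:
                ll += p * math.log(1.0 - math.exp(-lam * v))
            ll += (n - p) * (-lam * v)
        return ll

    # Simple grid search over 0.001 to 10 organisms/mL
    return max((i / 1000.0 for i in range(1, 10001)), key=log_likelihood)

# Classic three-dilution, three-tube series: 10, 1 and 0.1 mL inocula,
# with 3, 2 and 0 positive tubes respectively.
print(round(mpn_estimate([10, 1, 0.1], [3, 3, 3], [3, 2, 0]), 2))  # ≈ 0.93/mL
```

The result, roughly 0.93 organisms/mL (93 MPN/100 mL), agrees with published MPN tables for the 3-2-0 combination, which illustrates Odumeru’s point: the method yields a statistical estimate of level, not just presence or absence.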

Food Safety Magazine: This points us back in the direction of challenges associated with the adoption or application of advanced food microbiology methods or tools in the food industry. How can these challenges be addressed or overcome to realize the benefits of new techniques?

Mark Carter: I think instrument and method validation is one of the toughest challenge areas to deal with in the field of food microbiology. Or more specifically, what’s the right way to validate microbiological methods and instruments? Many standard-setting bodies have done a good job setting up a framework for validation. But even with the AOAC International framework, for example, there are different levels of validation and the challenge is to decipher what validation studies truly mean performance-wise. I personally think that we have to rely on validation of methods and/or instruments on an individual basis because you can’t validate every test against every matrix for which it could be potentially used.

In addition, there is some level of validation that has to be done by the end user, although I think that food companies can be responsible for only a given amount of method validation responsibility. However, every food company should be able to say “We’ve done our work, the method or instrument works under these particular conditions and we can verify that,” based on some good oversight and given a good statistical validity to what they’ve done in their self-validation approach.

In my experience working in industry, there have been instances where you get a method that has been validated for something, then you use it against your product and it doesn’t particularly work the way it says it’s supposed to work for the matrix to which you apply it. At that point, you have to take a step back and say, ‘Okay, how can I go back to get equivalent performance to my previously used standard method?’ So, at some level there has got to be some internal validation done, or a validation/verification applied, for either an automated microbial enumeration, identification or detection system or for a new food microbiology method, whether it’s polymerase chain reaction (PCR), a method such as plate counting, or a novel cultural method such as a chromogenic agar or selective enrichment broth.

This is why there is security in traditional methods. Many companies use these as a basis for their work. There will always be a place for traditional methods because they are the base we use to build upon. For many companies, use of traditional microbiological testing methodology remains a constant due to repeatability of tests, timing, costs and internal controls. And even these should be internally validated on a regular basis.

Ultimately, I would say that it is a good thing to have science-based organizations such as AOAC International that provide some levels of and approaches to validation and/or verification criteria and standards for food microbiology methods and instruments. But again, that’s a baseline for industry companies, who should be doing some work on their own to validate these technologies and techniques.

Margaret Hardin: That is a good suggestion. And I would say again that when you’re holding onto $7 million worth of product while waiting for test results, you should already have some assurance that the test method is validated.

Stan Bailey: Yes, obviously you have the challenge that you always have: the test has to be precise and it has to do whatever it is that it claims to do. So you have to be able to challenge a food microbiology system or methodology with a full range of possibilities that might be encountered in real-life situations. You can’t allow yourself to be lulled to sleep by a sales pitch on the strengths of a system, because even if data show that the method works beautifully when applied to many food matrices there may be some phenotypic variations in the organism you are looking for that won’t be detected by that system.

So in order to validate a system you need to have an adequately robust process in place, and I would never just accept company or even AOAC validation data without myself testing my product first. Each product has its own unique set of issues, background microflora and potential inhibitors depending on the test you are running, and you really need to personally validate that something works with the type of system and product you’ll be testing.

Martin Wiedmann: Method validation is an area where food microbiology is very challenging as compared to pharmaceutical and other industries that require testing, because the range of matrices is so broad. It’s not like clinical diagnostics, for which we do a lot of work in our lab. We see a lot of test kit or system manufacturers come in with assays that were designed for clinical diagnostics and then think they can move into the food testing market and make some money there. But they forget that there are thousands of different matrices in food. And further, some of those matrices are company-specific and some are commodity-specific, which adds to the complexity. I think that is a huge challenge, and I don’t even know how you can validate any method for every matrix. We know that a particular microbial detection method might have worked great for 99 of 100 food matrices, but then when applied to smoked salmon, which has a high fat content, that method just doesn’t work.

Stan Bailey: Also, one of the things we’ve often talked about with regard to new methods coming along is that you don’t want a different technology for everything you are doing in the lab. Maybe using different technologies for indicator organisms or screening is appropriate, but if you are talking about pathogens you really need to use one basic technology for everything so you have one capital cost platform, one set of skills that your technicians need to learn, and one set of expendables for the various pathogens. You don’t really want to use PCR for one thing, an ELISA for another and a gene analysis for yet another matrix you are testing.

Ultimately, I would say that the test system has to be accurate—it doesn’t matter if it is easy to use or if you choose a system because that is the single technology platform that works for most of your testing needs. First and foremost is that it has to be accurate and then you worry about the rest of the considerations.

Julian Cox: Yes, and I second Mark’s remarks about method validation being a challenge, but like Margaret’s comment earlier and Stan’s statement, we are also talking about the challenges of capital costs and expenditures associated with microbial detection systems; ease of use, detection sensitivity, accuracy and speed of those methods and systems; and the ease of data collection/interpretation.

I am an absolute believer in taking an industry-pragmatic view. We always talk about platform technologies because you don’t necessarily want to have to fill your lab with one box that does this and another box that does that. I mean, we can even bring that back to some manual methodology rather than having to change technologies and say, ‘Well, I am just going to do a standard culture method instead of a laser-type format immunoassay for this matrix.’ I think this is an area that the industry needs to consider. Can we get a “box” that really does speed things up, automate things, etc., so that we can do an increasing number of assays in order that the hundred thousand dollars I am investing is going to be as worthwhile as possible?

Stan Bailey: And, if you change methods, you have to understand the relationship between the new and the old method—if I have been using Method A for 10 years to test for the level of Salmonella in Product Z, and I now have a new Method B, then I have to understand the relationship of Test B and Test A so that I can relate what we see over the course of time. I think that it is really important that people understand the limits of detection, the challenges of stressed cells, the challenges of low numbers of your test organisms versus high backgrounds, and understand how your method works with all that—it is critical that you have an understanding of all of those issues so that you can properly interpret the data you receive from the test.

Margaret Hardin: It also comes down to the labor. When I look at the cost of the test I look at the labor involved, the overtime involved, how long it takes to transfer, and how long it takes to set up. I don’t just look at the cost of the method and related consumables. I am asking lots of questions: Is this technology going to be obsolete in six months, like a computer tends to be by the time you get it in the mail? Is that $35,000 to $50,000 investment going to be worth it in the long term? Because I know it will be difficult to justify buying a new piece of equipment every two years. And, is the level of expertise in my lab adequate to deal with that piece of instrumentation, not only on a day-to-day operational basis but also to troubleshoot it? Because if it’s not working, then what do I do?

So, I can’t just put all my eggs in one basket. I am producing every single day and perhaps shipping several times a day. I have got to have a method that I can depend on, true, but I also need to know that I have personnel who are adequately trained to run the test properly so that I know the level of risk for cross-contamination in the lab. The bottom line is that not all laboratories have a full staff of Ph.D.-level technicians in $11 million corporate lab facilities. So you need to make sure your technicians are trainable and that the method you expect them to use is very easy to apply, to significantly reduce the chance of cross-contamination originating from the lab rather than the processing environment. Often, when we send test samples out to a third-party laboratory, that’s the first question we have when we get a positive: What was your marker on that organism? Are you sure contamination of the sample didn’t happen in the lab itself? Because if, as a result of a positive, I’ve got to throw away millions of dollars in product, we want to make sure that the environment in the lab did not contribute to the problem.

Joseph Odumeru: That is the question, indeed, Margaret. If a test sample comes up positive and we have never had a positive for this product before, we need to ask why? Then I will expect an answer to that question. It is necessary to make sure you explain exactly what your lab quality control protocols are and to seriously check to make sure cross-contamination didn’t happen in your lab.

Margaret Hardin: I would add that it is important to make sure the method is sensitive. The ability of a test method to provide quantification is great, but what does it mean to my operation if I find 10 or 100 Listeria? I also don’t need to chase a lot of false positives and false negatives. I need to make sure that I have a test that is going to give me what I need to know. If you are getting false readings, you may be putting engineering, maintenance and production on alert that a major change has to be made in that room or area, a change that may have to occur over a weekend and may require a capital expense. So you want to make sure that you are getting a fairly good level of accuracy and sensitivity from that test.

Obviously, the challenge is to determine how sensitive we need to be. If I switch to a new technology that is much more sensitive than the previous method used, I am very likely going to get more positives, and that is a challenge when explaining to the plant, to corporate or to the president of the company. Our objective is to search and destroy foodborne illness-causing microorganisms, so whatever information we can gather that will help improve our process control and prevention strategies is vital to our food safety mission.

Julian Cox: As a scientist, I sort of stay on the fundamental side. I want to play with methods, and it’s great if we can get closer to the truth for all the altruistic reasons why we want to know more about the pathogens that cause harm to public health. But we can also ask, in a realistic sense, if we increase the sensitivity of a method from 99% to 99.99%, what does that extra 0.99% really do for us in terms of impacting food safety and public health? How far do we go to answer the questions, ‘Is it there or not, and how much deeper do we dig if that 0.99% increase in sensitivity provides data that can help us to institute improved approaches that prevent foodborne illness or fatalities in a few individuals versus thousands of individuals?’
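Cox’s diminishing-returns question, and Hardin’s concern about chasing false positives, can be made concrete with Bayes’ rule. In this sketch the prevalence and specificity figures are hypothetical, chosen purely for illustration: at the low contamination rates typical of finished-product testing, the share of positives that are real is dominated by specificity, and even a large jump in sensitivity barely moves it.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive test results that are true positives (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical: 1 in 1,000 samples truly contaminated, 99%-specific test.
prevalence, specificity = 0.001, 0.99
for sensitivity in (0.99, 0.9999):
    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    print(f"sensitivity {sensitivity:.2%}: PPV {ppv:.1%}")
```

With these numbers, roughly nine out of ten positives are false in both cases; raising sensitivity from 99% to 99.99% changes the predictive value by only about a tenth of a percentage point, which is exactly the pragmatic trade-off Cox is pointing at.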

In a pragmatic sense, these are the types of overarching challenges that we could spend more than the time the panel has here to discuss and explore. At the end of the day, innovative science and the advances this fosters is key to developing better ways to ensure food safety and quality throughout the global supply chain.

Editor’s Acknowledgment: Thanks to Rich St. Clair of Remel, Inc. and Brian Kemp of Oxoid Canada for their support in facilitating this panel with the Food Safety Magazine staff, and to our expert panelists for their participation. We look forward to holding similar roundtable discussions in the future, and encourage FSM readers to submit ideas for future topics in testing and food safety to Julie Larson Bricher, Editor, at julie@food-safety.com.