BY DR TIM SANDLE | PHARMACEUTICAL MICROBIOLOGY AND CONTAMINATION CONTROL EXPERT
27th APRIL
Serial dilution is one of those laboratory techniques that looks almost trivial - pipette, mix, repeat. However, it underpins the accuracy of test data and the reliability of results. In the microbiology field, dilutions are required for bioburden control, testing water samples, disinfectant qualification, potency assays and preparing microbial cultures.
Performed well, serial dilution turns an uncountable microbial population into interpretable data. Performed badly, it becomes a silent root cause of OOS/OOL events, false alarms and ‘mystery’ trends that consume time.
At its core, a serial dilution is the stepwise reduction of concentration by transferring a defined volume of sample into a defined volume of diluent, mixing and repeating as needed. It is needed because many analytical and microbiological methods only work reliably within a limited ‘dynamic range’. For example, in plate counting, too many cells result in confluent growth, merged colonies and imprecise results - too few cells can lead to over-interpretation of stochastic variation.
Practical dilution is therefore about engineering the sample into the method’s optimal range, so that the result (derived from a sample extracted from a population) reflects the real population more closely than it reflects the limitations of the measurement 1.
Pharmaceutical microbiology invariably deals with heterogeneous samples: a process fluid may contain aggregates, clumps, stressed organisms, disinfectant residues or inhibitory components. When such samples are plated directly, the resulting colony pattern can be misleading: clumps produce ‘one colony’ that actually represents many cells, spreading organisms can mask discrete colonies and low level contamination can be missed if the selected sample volume is too small.

Image: Serial dilution (designed by Tim Sandle)
This is why guidance and training materials frequently emphasise that insufficient dilution can lead to confluent growth, making colonies indistinguishable, and that overcrowded plates or clumps produce imprecise readings. Serial dilution is the standard way to prevent this, by ensuring colonies are separable and countable.
Serial dilution also assists when the sample contains inhibitory substances. This occurs, for instance, with endotoxin testing and certain biological assays - dilution can reduce inhibition or enhancement, so the method performs correctly.
When testing is not performed correctly, laboratory time is lost through an investigation. Hence, dilution training and test execution needs to be treated as part of method understanding, not as an afterthought 2.
Serial dilutions are often expressed as Dilution Factors (DF). A single dilution step might be 1:10, 1:100 or 1:2, depending on the method and expected load. In a classic 10 fold series, each step reduces concentration by one log10. A 1:10 dilution repeated three times yields 1:1,000 (10⁻³).
Mathematically, the overall dilution factor is the product of each step’s factor. In a 2 fold serial dilution series (common in susceptibility testing or some titration formats), each step halves the concentration (2⁻ⁿ after n steps). In both cases, the principle is the same: controlled reduction so that measurement and interpretation become reliable.
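The arithmetic above can be sketched in a few lines of code. This is an illustrative sketch only - the stock concentration is a hypothetical value, not a figure from any method:

```python
import math

def overall_dilution_factor(step_factors):
    """Overall dilution factor: the product of each step's factor."""
    return math.prod(step_factors)

# A 1:10 dilution repeated three times yields 1:1,000 (10^-3).
print(overall_dilution_factor([10, 10, 10]))   # 1000

# A 2-fold series of n steps reduces concentration by 2^-n.
print(overall_dilution_factor([2] * 5))        # 32

# Concentration of a hypothetical stock after a 10^-3 dilution:
stock_cfu_per_ml = 5.0e6
print(stock_cfu_per_ml / overall_dilution_factor([10, 10, 10]))  # 5000.0
```

The same function covers 10 fold, 2 fold or mixed series, because the principle - multiply the per-step factors - does not change.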
The importance of ‘log thinking’ is not only microbiological - it also matters in assay interpretation. For example, in the Fluorescent Antibody Virus Neutralisation (FAVN) method (a cell based assay), the test uses a logarithmic scale for serial dilutions and then transforms results into a linear unit (IU/mL). A key consideration is that points higher on the log scale can translate into larger differences on the linear scale, making the reading less precise if the analyst is operating in the wrong region of the curve.
Hence, serial dilution is not just a mechanical activity - it is part of optimising where the laboratory seeks to measure on a response curve.
In routine bioburden testing - whether pour plate, spread plate or membrane filtration - the aim is to produce plates that are readable and representative. Most tests involve either adding a measured volume of sample (such as 1 mL) to a Petri dish and pouring agar, or streaking/spreading a smaller aliquot (often 0.1 mL), then incubating.
When a sample is too concentrated, those approaches yield heavy growth, while when too dilute, the analyst may see zeros that fail to support confident conclusions (especially for low bioburden processes where the ‘absence’ of a specified or objectionable microorganism requires careful interpretation).
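The back-calculation that connects a plate count to the original sample follows directly from the dilution plated and the plated volume. A minimal sketch - the colony count, dilution and volume below are hypothetical worked values, not method requirements:

```python
def cfu_per_ml(colonies, dilution_plated, volume_ml):
    """Back-calculate sample concentration:
    CFU/mL = colonies / (dilution plated x volume plated in mL).
    dilution_plated is the fraction plated, e.g. 1e-3 for the 10^-3 tube."""
    return colonies / (dilution_plated * volume_ml)

# 0.1 mL spread-plated from the 10^-3 tube, 42 colonies counted:
result = cfu_per_ml(colonies=42, dilution_plated=1e-3, volume_ml=0.1)
print(f"{result:.3g} CFU/mL")  # ~4.2e5 CFU/mL in the original sample
```

Note how both the dilution and the plated volume appear in the denominator - forgetting the 0.1 mL aliquot is a classic ten-fold reporting error.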
This is also why experienced teams treat dilution selection as a risk decision. If the analyst anticipates that a sample might be high (for example, from early-stage processing), they should build a dilution series that spans low to high. Conversely, for very clean samples (e.g., late-stage processing), the analyst might select a minimal dilution but ensure sufficient volume and replication to maintain sensitivity.
The ‘real world’ complication is that microbial populations are not always evenly dispersed - some bacteria clump, some adhere to particles, some survive better than others. As noted in internal material, colony counting becomes difficult with overcrowded plates, clumps, translucent colonies, edge effects or organisms that spread across agar - exactly the issues serial dilution is designed to prevent 3.
When investigations reference ‘laboratory error’, dilution errors tend to sit near the top of the list: they are easy to make and hard to detect once the plate is incubated. Many out of specification checklists require confirmation that the correct dilutions, glassware and pipette volumes were used, as part of investigating laboratory error. This is a practical reminder that dilution is not merely arithmetic - it is execution.
The most common serial dilution failure modes include:
- pipetting the wrong volume, or using a poorly maintained or uncalibrated pipette;
- inadequate mixing (vortexing) between transfer steps;
- using the wrong diluent, or the wrong diluent volume;
- carryover from reusing tips across tubes;
- mislabelled tubes, or transcription errors in recording which dilution was plated.
The reason these errors matter is straightforward: the result often feeds into batch disposition, trending limits and deviation decisions. A single incorrect dilution can convert a true low-level contamination into a falsely alarming ‘action level’ count or it can mask a genuine issue by diluting it beyond detection.
Serial dilution is also a ‘problem solver’ for assays that can be inhibited by product matrices. Take endotoxin testing: a routine dilution is normally adequate. However, if recovery/spike or assay response fails at that dilution, the analyst may need to go further out (e.g., from 1/80 to 1/100 to 1/200) while maintaining method validity.
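One check on how far the analyst can ‘go out’ is the pharmacopoeial Maximum Valid Dilution: MVD = (endotoxin limit × product concentration) / λ, where λ is the labelled assay sensitivity. The sketch below uses hypothetical product values, not figures from any monograph:

```python
def max_valid_dilution(limit_eu_per_mg, conc_mg_per_ml, lambda_eu_per_ml):
    """MVD = (endotoxin limit x product concentration) / assay sensitivity.
    Dilutions beyond the MVD invalidate the test."""
    return (limit_eu_per_mg * conc_mg_per_ml) / lambda_eu_per_ml

# Hypothetical product: limit 0.5 EU/mg, tested at 10 mg/mL,
# with a lysate sensitivity (lambda) of 0.005 EU/mL:
mvd = max_valid_dilution(0.5, 10.0, 0.005)
print(mvd)  # ~1000, so 1/80, 1/100 or 1/200 all remain within the MVD
```

The point of the calculation is exactly the one made above: dilution can relieve inhibition or enhancement, but only up to a defined limit that preserves method validity.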
This is not limited to endotoxin. Any assay with a curve (such as cell-based, chromogenic, turbidimetric or immunoassay tests) has a usable range where precision is better. Many methods explicitly link serial dilution scale selection to precision, noting that operating at the wrong end of a log-scale curve increases inaccuracy. The broader lesson is that serial dilution is part of assay design and interpretation, not merely sample preparation.

Image: Avoiding dilution errors (designed by Tim Sandle)
A simple way to teach serial dilution (especially for staff new to microbiology) is to frame it as three questions:
1. What range do I expect?
Use process knowledge, historical trends and where the sample sits in the process. High-risk points (open manipulations, upstream intermediates) justify broader dilution coverage.
2. Which method am I using?
Membrane filtration, pour plate, spread plate or rapid methods each impose different practical limits on sample volume, recovery and readability. Internal notes emphasise that membrane filtration has been pharmacopoeia-recommended for decades and involves passing a sample through a membrane and incubating the membrane on media.
3. What reading problems do I need to avoid?
Confluent growth, inhibition, clumping or low counts each require a different dilution strategy. Overcrowding and clumps are explicitly called out as drivers of imprecision and difficult reading.
From here, the analyst should select a dilution series that ensures at least one plate will land in the readable range. Many labs use a default of 10 fold dilutions for unknown samples because it spans orders of magnitude quickly. For expected low bioburden samples, smaller dilution steps (or even neat testing with larger volumes) may be appropriate, but the analyst may still need a plan for the rare ‘surprise high count’ sample.
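The ‘land in the readable range’ logic can be expressed as a simple planning sketch. The countable window used here (30-300 colonies) is a common convention for standard plates, and the expected load is an assumed input from process knowledge, not a measurement:

```python
def pick_dilution(expected_cfu_per_ml, volume_ml=0.1, low=30, high=300):
    """Return the first 10-fold dilution whose expected colony count
    falls in the countable window, or None if nothing in the series fits."""
    for power in range(0, 9):                 # neat (10^0) through 10^-8
        dilution = 10 ** -power
        expected_colonies = expected_cfu_per_ml * dilution * volume_ml
        if low <= expected_colonies <= high:
            return dilution, expected_colonies
    return None

# Hypothetical early-stage sample expected around 5 x 10^5 CFU/mL:
print(pick_dilution(5.0e5))  # plate the 10^-3 tube (~50 colonies expected)
```

In practice the laboratory plates the bracketing dilutions as well, so the ‘surprise high count’ sample still yields at least one readable plate.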
If you are writing training content (or coaching analysts), the most effective messages are practical and risk-based 5.
Serial dilution remains foundational because it solves a universal problem - converting messy biological reality into numbers that can be interpreted with confidence. This matters because those numbers feed batch release decisions, contamination control actions and trend interpretation.
The serial dilution technique looks simple, but the quality of the result depends on good dilution design, rigorous execution, clear documentation and a practical understanding of how dilution interacts with method limitations like confluent growth and assay inhibition.
1. Sandle, T. (2021) 10 Critical Validation Parameters For Microbiological Experiments, Outsourced Pharma: https://www.outsourcedpharma.com/doc/critical-validation-parameters-for-microbiological-experiments-0001
2. McCullough, K.C. and Weider-Loeven, C. (1992) Variability in the LAL Test: Comparison of Three Kinetic Methods for the Testing of Pharmaceutical Products, Journal of Parenteral Science and Technology, Vol. 44: 69-72
3. Pharmaceutical Microbiology Resources (2016) Increasing The Reproducibility Of Serial Dilutions: https://www.pharmamicroresources.com/2016/03/increasing-reproducibility-of-serial.html#google_vignette
4. Sandle, T. (2024) Means, ranges and replicates: Improving microbial plate counting, LinkedIn Learning: https://www.linkedin.com/pulse/means-ranges-replicates-improving-microbial-plate-tim-kbxke
5. Sutton, S. (2006) ‘How Many?’, Pharmaceutical Microbiology Forum Newsletter, Vol. 12, No. 6, pp. 1-10