Posted by Sten Westgard, MS
There's a frequent question we get about when and how to use a manufacturer's range (SD) and mean. We've been consistently pointing out the evils of adopting the manufacturer's range and SD for routine QC use: because that SD comes from a group of laboratories, it is larger than any individual lab's SD, and it will set control limits/ranges too wide for an individual laboratory. But what happens when a laboratory simply doesn't know its own mean and SD yet?
In other words, when you're just starting out - with a new method or instrument - and the only information you have is the manufacturer's mean and range/SD, what should you do?
An answer, after the jump...
First, let's acknowledge that this scenario should only happen very rarely. Usually the laboratory has access to plenty of data on a method, so they don't need to fall back on the manufacturer's range/SD and mean. It's only when you have a completely new method and you have no other information on it that you need to use the manufacturer data.
Second, let's also point out that, if the lab is dealing with a new method, either CLIA or ISO mandates that some form of validation or verification be performed. If those experiments are performed, the laboratory would in fact calculate the mean and SD of the new method. So again, the laboratory should have access to a better estimate of mean and SD than what the manufacturer is providing.
Nevertheless, even though this scenario shouldn't often occur in the laboratory, we're going to move ahead and discuss how to use that manufacturer mean and SD/range when you're just starting out with a new method.
Using the manufacturer's mean and range/SD, you want to make sure that your emerging mean falls somewhere within the range. It can be anywhere within that range; that's one of the reasons the manufacturer gives you a range in the first place.
However, during this start-up period, if you have some data points falling outside the manufacturer's range, you shouldn't reject those runs/values immediately. Because we haven't truly characterized the method yet, our mean may simply be higher than the manufacturer's mean, in which case some of our data points will fall outside the upper range limit. In those cases, we should examine each of those runs and determine if anything is truly wrong with the method. If there isn't something wrong, keep those data points and consider them "in." Use those data points, along with the others that are within the manufacturer's range, when you make the first calculations of mean and SD.
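To make this concrete, here's a minimal sketch in Python (with made-up values for the manufacturer's mean, SD, and start-up results) of that start-up process: check each new QC value against the manufacturer's limits, flag anything outside as a warning to investigate, and then include every point that survives investigation in the first calculations of the lab's own mean and SD.

```python
import statistics

# Hypothetical manufacturer package-insert values for one control level
mfr_mean = 100.0
mfr_sd = 4.0                      # group SD; wider than a single lab's
lower, upper = mfr_mean - 2 * mfr_sd, mfr_mean + 2 * mfr_sd

# Start-up QC results (made-up data); note some fall above the range
startup_values = [103.1, 104.8, 102.5, 109.4, 105.2, 103.9, 108.7, 104.4]

for value in startup_values:
    if not (lower <= value <= upper):
        # Warning only: investigate the run, don't reject it outright
        print(f"{value} outside manufacturer range ({lower}-{upper}): investigate")

# If investigation finds nothing wrong, keep ALL the points for the
# lab's first estimates of its own mean and SD
lab_mean = statistics.mean(startup_values)
lab_sd = statistics.stdev(startup_values)   # n-1 denominator
print(f"Lab mean = {lab_mean:.2f}, lab SD = {lab_sd:.2f}")
```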
Why are we accepting values that are "out"? Because that manufacturer's range isn't really our own range, so we should use those control limits as suggestions, and violations of those limits as only warnings of possible problems. If we're too strict about eliminating data points outside the manufacturer's range, we'll end up calculating our "real" SD from an artificially narrowed set of data points, leading to control limits that are tighter than they need to be.
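A quick numerical illustration of that trimming effect, using the same made-up data as in the sketch above: deleting the two points that happen to fall outside the manufacturer's range shrinks the calculated SD considerably, and limits built from that shrunken SD would be tighter than the method's actual performance warrants.

```python
import statistics

startup_values = [103.1, 104.8, 102.5, 109.4, 105.2, 103.9, 108.7, 104.4]
lower, upper = 92.0, 108.0   # manufacturer mean +/- 2 SD, from the sketch above

full_sd = statistics.stdev(startup_values)
trimmed_sd = statistics.stdev([v for v in startup_values if lower <= v <= upper])
print(f"SD with all points kept:  {full_sd:.2f}")     # ~2.5, the real imprecision
print(f"SD after trimming 'outs': {trimmed_sd:.2f}")  # ~1.0, artificially narrow
```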
I know this runs contrary to most of our teaching. Usually we are quite strict about keeping "out-of-control" data out of the mean and SD calculations. This is a very special case - and one that could easily be avoided if the lab runs a precision study during method validation. That way, the lab would have estimates of mean and SD that are more reflective of its own performance, and those could be used instead of the manufacturer's range/SD and mean.
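Once that precision study exists, turning its results into working control limits is trivial. A minimal sketch, assuming hypothetical estimates from a 20-day replication experiment:

```python
# Hypothetical mean and SD from a 20-day replication (precision) study
lab_mean, lab_sd = 104.1, 2.3

# Control limits built from the lab's own performance, not the group range
for n in (2, 3):
    print(f"{n} SD limits: {lab_mean - n * lab_sd:.1f} to {lab_mean + n * lab_sd:.1f}")
```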
If there is no mean and SD available from the method validation, it can be useful to look at the data for intermediate precision (the performance claim) in the documentation of the test. The CV given there is often much tighter than the CV/SD used to set the ranges in the QC insert.
Posted by: Hans van Schaik | February 27, 2014 at 08:25 AM
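To put rough numbers on the commenter's point: an intermediate-precision claim expressed as a % CV converts to an SD at the control mean via SD = (CV% / 100) * mean, which can then be compared against the SD behind the QC insert range. All values below are hypothetical:

```python
# All numbers made up, for illustration only
claim_cv_pct = 2.0     # intermediate precision claim from the documentation, % CV
insert_mean = 100.0    # target mean from the QC insert
insert_sd = 4.0        # SD used to set the range in the QC insert

claim_sd = claim_cv_pct / 100 * insert_mean
print(f"SD implied by the precision claim: {claim_sd:.1f}")  # 2.0
print(f"SD behind the QC insert range:     {insert_sd:.1f}")  # 4.0, twice as wide
```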