Posted by Sten Westgard, MS
I had the pleasure of taking part (albeit remotely) in the Quality at the Crossroads conference in Alexandria, Egypt.
As part of my session, we took questions from the audience, and I thought I would share a few of them with you, as well as a few of the "wrinkles" that labs in Egypt experience that we in the US do not. So here are the questions:
- How frequently should I measure Total Error and Measurement Uncertainty?
- Can I resort to comparing my EQA result to allowable bias when EQA result is violated and declared incorrect due to tight SD of comparator group?
- When using a new manufacturer’s QC, we should establish our own mean and CV, but it would result in tight CV and shift after 20 days. What are the recommendations for this? Is it better to wait more days to establish CV?
Some answers, after the jump...
First, let's answer question 1:
How frequently should I measure Total Error and Measurement Uncertainty?
Let's recall what we use Total Error for - to judge the acceptability of methods. So how frequently do we want to know about the acceptability of our methods? Probably very frequently.
In practice, however, we may only calculate Total Error once a month (and review it along with our usual QC results). We may only look at it once a quarter or once every six months, if the method performance is very stable. That said, any time there's a serious change, maintenance, problems, etc., we should revisit that calculation. If the instrument breaks down, just as we would re-establish the mean and SD of the methods, we should also re-establish the Total Error.
Today we more often recommend that labs replace their Total Error calculations with Sigma-metric calculations. That is, calculate your Sigma-metric, monitor it monthly, quarterly, or bi-annually, depending on the stability of the method. And whenever there's a major event, re-calculate the Sigma-metric.
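The two calculations above can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not a recommendation for any specific assay: TEa (the allowable total error) comes from your chosen quality requirement, while bias and CV come from your own method validation and QC data.

```python
def total_error(bias_pct, cv_pct, z=1.65):
    """Total Error estimate: TE = |bias| + z * CV.
    z = 1.65 corresponds to a one-sided 95% estimate; other z-values are used too."""
    return abs(bias_pct) + z * cv_pct

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric: (TEa - |bias|) / CV, with all terms in the same (percent) units."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: TEa = 10%, bias = 1.0%, CV = 2.0%
te = total_error(1.0, 2.0)            # 1.0 + 1.65 * 2.0 = 4.3%
sigma = sigma_metric(10.0, 1.0, 2.0)  # (10 - 1) / 2 = 4.5
print(f"TE = {te:.1f}%, Sigma = {sigma:.1f}")
```

Re-running these two lines after any major event (maintenance, breakdown, reagent lot change) is cheap, which is what makes monthly or quarterly monitoring practical.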
Now, measurement uncertainty is another story. It's a different metric, more related to the traceability of the method than the day-to-day operation of the lab. I have seen labs where the measurement uncertainty is calculated once, when the method is first validated, and then hardly ever again. I have seen other labs where the measurement uncertainty is never calculated - this is most common in the US, where there is no mandate for this calculation and little desire to add it to the lab's quality indicators. I have also seen labs where they are required to calculate measurement uncertainty, since they are ISO 15189 accredited, but then they stick those calculations in a drawer that isn't opened except for the inspector.
While measurement uncertainty can be helpful at the manufacturer level, and in the initial stages of selecting instrumentation for the laboratory, on a routine level it's often not very practical or useful. When we conducted a global survey of labs and their use of measurement uncertainty, we found that the vast, vast majority do not use it at all.
But, as regular readers know, we have a bias against certain uses of measurement uncertainty.
Question 2: Can I resort to comparing my EQA result to allowable bias when EQA result is violated and declared incorrect due to tight SD of comparator group?
There are a whole lot of issues that come up here, some of which have nothing to do with US regulations. In the US, if you fail a PT event, there is no way around it. You can make a different comparison, but you're still on the hook for that PT failure.
My understanding of EQA results in other countries, particularly in the Middle East, is that the logistical challenges of transporting EQA specimens in hot climates add an additional degree of difficulty to the EQA process. In some countries, the EQA results are often not trusted for this very reason. In other cases, the EQA results are only "educational," so if a laboratory fails an event, there are no major implications.
In the scenario of this laboratory, it would be prudent to try and understand why the method is failing EQA, even if the bias is within an allowable bias specification from another source. If the EQA program is consistently generating results that are not useful, it's also a good idea to find another EQA survey to join.
Finally, Question 3: When using a new manufacturer’s QC, we should establish our own mean and CV, but it would result in tight CV and shift after 20 days. What are the recommendations for this? Is it better to wait more days to establish CV?
Compared to a manufacturer's range and SD, the laboratory's own SD will always be smaller and tighter. But having a tighter SD isn't a bad thing, if you understand how to set up QC properly. Many labs are still stuck on the old "2 SD" rule, so to them a smaller SD means a smaller 2 SD range to set up on their control charts. But we recommend the Sigma-metric approach, where a smaller SD translates into a higher Sigma-metric, which means you can use wider control limits - 3 SD and higher for assays of high quality. So having a smaller SD is a positive when you use the data correctly.
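As a rough sketch of how a higher Sigma-metric permits wider, simpler control rules, the function below follows the general shape of published Sigma-based QC guidance. Treat the cutoffs and rule choices as an illustration to be checked against your own QC planning tools, not as policy.

```python
def suggest_qc_rules(sigma):
    """Illustrative mapping from Sigma-metric to QC rule complexity.
    Cutoffs are approximate and for demonstration only."""
    if sigma >= 6:
        return "1:3s rule, N=2 (wide limits, single rule)"
    elif sigma >= 5:
        return "1:3s / 2:2s / R:4s multirule, N=2"
    elif sigma >= 4:
        return "1:3s / 2:2s / R:4s / 4:1s multirule, N=4"
    else:
        return "full multirule with higher N, or improve the method"

print(suggest_qc_rules(4.5))
```

The point of the example is the direction of the relationship: as Sigma rises, you need fewer rules and fewer controls, so a tighter SD (which raises Sigma) makes QC easier, not harder.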
Ultimately, you do want 3 to 6 months of data to get the best estimate of SD. With every new month of data, update your mean and SD.
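One simple way to implement that monthly update, assuming you retain the raw QC values, is to recompute the cumulative mean and SD over everything collected so far. A minimal sketch:

```python
import statistics

cumulative = []  # all QC values collected so far, across months

def add_month(monthly_values):
    """Append a month of QC results and return the updated
    cumulative mean, SD (n-1 denominator), and CV%."""
    cumulative.extend(monthly_values)
    mean = statistics.mean(cumulative)
    sd = statistics.stdev(cumulative)
    return mean, sd, 100 * sd / mean

# Hypothetical months of QC data for one control level
m, sd, cv = add_month([100, 102, 98, 101, 99])
m, sd, cv = add_month([97, 103, 100, 102, 98])
print(f"cumulative mean = {m:.1f}, SD = {sd:.2f}, CV = {cv:.2f}%")
```

The cumulative estimate stabilizes as data accumulates, which is exactly why a 20-day SD looks artificially tight compared to the 3-to-6-month figure.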