Posted by Sten Westgard, MS
One more shot at error rates! At the IFCC Berlin conference, there was an intriguing abstract about the use of Quality Design/Planning tools in the laboratory:
Abstract #1062: Efficiency of Analytical Quality Control with Various Quality Planning Tools in Thai Clinical Laboratory. K. Sirisali, S. Manochiopinj, S. Sirisali.
How high do you think out-of-control rates can go?
In this study, six clinical laboratories looked at six enzyme assays. Three QC approaches were used:
- the traditional 2 SD limits,
- QC Design by Sigma-metrics, and
- QC Design by OPSpecs chart.
The error rates they found were as follows:
- the traditional 2 SD limits: 28.66% out-of-control rate
- QC Design by Sigma-metrics: 11.15% out-of-control rate
- QC Design by OPSpecs chart: 9.08% out-of-control rate
The authors note that the difference between the Sigma-metric and OPSpecs chart results was not statistically significant. What is clear, however, is that a QC Design approach cut the rejection rate caused by the traditional 2 SD limits by more than half with Sigma-metrics, and by roughly two-thirds with the OPSpecs chart.
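For readers unfamiliar with the Sigma-metric approach to QC Design, here is a minimal sketch of the standard calculation, Sigma = (TEa − |bias|) / CV, with all terms as percentages at a medical decision level. The assay values below are hypothetical, chosen only to illustrate the arithmetic; they are not from the abstract.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric for an assay: (allowable total error - |bias|) / CV.

    All inputs are percentages at the same medical decision level.
    """
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical enzyme assay: TEa = 20%, bias = 2%, CV = 3%
sigma = sigma_metric(20.0, 2.0, 3.0)
print(sigma)  # 6.0
```

A 6-Sigma assay can be controlled with wide limits and few rules, so false rejections are rare; as Sigma drops, tighter rules and more control measurements are needed. That is the mechanism by which QC Design reduces the out-of-control rates seen with blanket 2 SD limits.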
Compare these rejection rates to other studies of laboratory error rates. How is it that some studies report a QC rejection rate of only 0.009%? What's going on? Clearly, error studies are not measuring analytical error rates in the same way. If the QC procedure is not customized to the quality required by the test, or if the control limits are not verified as appropriate for the test, the out-of-control rates may bear no relationship to the actual errors being generated by the test method. We might be "in control" according to our corrupted QC limits while at the same time spitting out a lot of noise into test results and clinical decisions.
So when it comes to this type of study, Caveat error.