Posted by Sten Westgard, MS
A recent article on the experience of QMS tracking in a clinical laboratory in Nigeria raises some interesting questions about the sources of error in "developing world" laboratories.
Where would you expect a Nigerian Human Virology laboratory (HVL) to experience the most problems?
- pre-analytical processes
- analytical processes
- post-analytical processes
The answer, after the jump...
The journal article of interest is published in the African Journal of Laboratory Medicine:
Audu RA, Sylvester-Ikondu U, Onwuamah CK, Salu OB, Ige FA, Meshack E, Aniedobe M, Amoo OS, Okwuraiwe, Okhiku F, Okoli CL, Fasela EM, Odewale EO, Aleshinloye RO, Olatunji M, Idigbe EO. Experience of quality management system in a clinical laboratory in Nigeria. African Journal of Laboratory Medicine 2012;1(1).
They tracked error rates for three years, from 2007 through 2009, for indicators in all phases of the total testing process.
| Step of TTP | Performance Indicator | 2007 | 2008 | 2009 |
|---|---|---|---|---|
| Pre-analytical | Sample rejection | 35 [0.15%] | 15 [0.06%] | 28 [0.11%] |
| Pre-analytical | Wrong sample type | 88 [0.39%] | 54 [0.22%] | 19 [0.08%] |
| Pre-analytical | Lost sample | 71 [0.31%] | 12 [0.05%] | 14 [0.06%] |
| Analytical | Unanalyzed samples | n/a | 76 [0.31%] | 2 [0.01%] |
| Analytical | QC violation | 4,868 [21.58%] | 1,756 [7.2%] | 1,465 [6.01%] |
| Analytical | EQA compliance | 98.3% | 96% | 95% |
| Post-analytical | Delay of results | 300 [1.3%] | 369 [1.51%] | 4,294 [17.6%] |
| Post-analytical | Data entry error | 158 [0.7%] | 12 [0.05%] | 15 [0.06%] |
| Total # samples | | 22,559 | 24,392 | 23,478 |
So our answer to the quiz question? For 2007 and 2008, the largest source of errors was the analytical portion of the total testing process. Was this what you expected?
Another surprise, perhaps, is the relatively low number of pre-analytical errors. Many studies of more "developed" laboratories report metrics for these indicators that are a lot lower. Perhaps this is a result of the lower volume in this laboratory: a smaller laboratory, with a smaller volume and a smaller staff, may allow more focus and less opportunity for error.
Let's convert these numbers into Sigma-metrics to give us a universal benchmark:
| Step of TTP | Performance Indicator | 2007 | 2008 | 2009 |
|---|---|---|---|---|
| Pre-analytical | Sample rejection | 4.5 | 4.8 | 4.6 |
| Pre-analytical | Wrong sample type | 4.2 | 4.4 | 4.7 |
| Pre-analytical | Lost sample | 4.3 | 4.8 | 4.8 |
| Analytical | Unanalyzed samples | n/a | 4.3 | 5.3 |
| Analytical | QC violation | 2.3 | 3.0 | 3.1 |
| Analytical | EQA compliance | 3.7 | 3.3 | 3.2 |
| Post-analytical | Delay of results | 3.8 | 3.7 | 2.5 |
| Post-analytical | Data entry error | 4.0 | 4.8 | 4.8 |
| Total # samples | | 22,559 | 24,392 | 23,478 |
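As a sketch of how this conversion works: these values are consistent with the conventional short-term Sigma-metric calculation, which takes the z-score of the success rate (1 minus the defect rate) and adds the standard 1.5-sigma shift. The function name below is my own; the source does not specify the exact method used.

```python
from statistics import NormalDist

def sigma_metric(defect_rate: float) -> float:
    """Short-term Sigma-metric for a given defect (error) rate:
    z-score of the success rate plus the conventional 1.5 sigma shift."""
    return NormalDist().inv_cdf(1.0 - defect_rate) + 1.5

# QC violations, 2007: 4,868 errors out of 22,559 samples (21.58%)
print(round(sigma_metric(4868 / 22559), 1))  # about 2.3, matching the table

# Sample rejection, 2007: 35 errors out of 22,559 samples (0.15%)
print(round(sigma_metric(35 / 22559), 1))    # about 4.5, matching the table
```

Small rounding differences against the published table are possible if the authors used DPMO lookup tables rather than a direct inverse-normal calculation.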
One final surprise is the dramatic shift in the test delay "errors" in 2009. In 2007 and 2008, test delays occurred in under 2% of tests; in 2009, there's a spike in delays. Is this really an increase in errors, or a change in expectations? The tricky thing about turnaround time is that it's built on an assumption of how long a test should take to be reported. As clinicians grow accustomed to getting test results, they also develop more demands and higher expectations. So it might be that no more "errors" occurred in 2009; there was just a surge of clinician impatience.
If we exclude the expectation-based TAT metric, analytical QC failures are the largest source of errors for this laboratory all three years running. For those who uphold the dogma that analytical errors are the smallest problem for laboratories, here's a direct refutation of that thinking.