Posted by Sten Westgard, MS
We’ve presented a number of discussions on the uncertainty of measurements on this website, including our series on the “War of Words” and also Dietmar Stockl’s more rigorous presentation “Time to engage in measurement uncertainty.” A central part of these discussions is a debate about the merits of measurement uncertainty versus those of total error. Rather than rehash the technical points of the argument again, we're going to step outside of our laboratory world for a moment. By looking at another contentious debate, perhaps we can get a better perspective on our own entrenched positions.
It will only take a second...
A digression on time - only a second!
A few months ago, we were all supposed to adjust our clocks. You might have missed it. If you blinked, you certainly did.
It was a leap second.
Why a leap second? What's the reason for this? It's not because there's anything wrong with our clocks. Indeed, our clocks have become so good - particularly cesium atomic clocks, which are used as the gold standard for measuring time - that they are not expected to lose a second for millions of years. Clocks are no longer the problem.
Instead, it's the Earth. The Earth, as it travels around the sun during its 365-and-a-quarter-day cycle, has been slowing down. The moon's tidal forces slow the Earth's spin by about 2 milliseconds per century, and planets generally slow down as they age, due to friction and diminishing kinetic energy (the Earth is over 4.5 billion years old, for those of us who live outside Kansas). So we needed to add an extra second to account for the longer time it now takes the Earth to rotate.
In other words, the time problem for clocks has come full circle.
In the old days, we adjusted our clocks because our clocks were imperfect. They couldn't keep up with the passage of time on Earth. So when their gears slowed down and their errors accumulated, we readjusted them to fit the time that it actually was on Earth.
In today's world, however, the perfection of the time-keeping instruments has exceeded the precision of the Earth. So now we adjust our clocks because the Earth is imperfect.
This new situation - where the clocks keep better time than the Earth - has produced a new argument among scientists. There are those who want to abandon the leap second and any such adjustments that are based on the imperfect rotation of the Earth. Instead, they want us to adhere to the perfect standard of the atomic clock. This group, call them the absolute time-keepers, also has a more practical argument: adjusting all the clocks, particularly in satellites and other sophisticated time-dependent devices, can be expensive.
The other side of the debate, call them the everyday time-keepers, argues that we live on Earth, and it only makes sense that we adjust time to reflect our experience here on Earth. A day isn't a pure expression of time; it's a practical measurement of one rotation of our planet. Decoupling our time-keeping from the life we lead may be methodologically rigorous, but it's also unreal.
The point of this essay isn't to resolve the time debate, and indeed, the debate is not finished yet. We're here to draw parallels between this situation and our own in the laboratory. So we leave the issue of time now, and return to our laboratory world.
An uncertain world!
The metrology purists have been advocating, more and more strenuously, that we must express deviations and variations in laboratory processes exclusively in terms of measurement uncertainty. The advantage of measurement uncertainty is that its estimates can be combined. That is, you can take the uncertainty from one level of the process, move up to the next level, and combine those estimates of uncertainty.
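To make "combined" concrete, here is a minimal sketch (not from the original post) of the usual GUM-style root-sum-of-squares combination of independent uncertainty components; the component names and values are purely illustrative.

```python
import math

def combined_uncertainty(*components):
    """Combine independent standard uncertainties by root-sum-of-squares,
    the usual GUM approach for uncorrelated components."""
    return math.sqrt(sum(u ** 2 for u in components))

# Illustrative (made-up) component uncertainties from successive levels of a process:
u_calibrator = 0.8   # uncertainty carried forward from the calibrator value
u_within_lab = 1.2   # within-laboratory imprecision
u_combined = combined_uncertainty(u_calibrator, u_within_lab)
u_expanded = 2 * u_combined   # expanded uncertainty with coverage factor k = 2
print(round(u_combined, 2), round(u_expanded, 2))   # ~1.44, ~2.88
```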
The other part of the argument is that by condoning the existence of total error and allowable total error, we indirectly condone the existence of bias. If we attack total error and eliminate its use, we will somehow deliver a message to manufacturers that they must provide analytic systems that produce comparable test results. This reminds one of a tail wagging the dog. You don't eliminate a problem by eliminating the word that describes the problem. That only works in 1984.
An essential part of this argument is that bias should be eliminated or corrected. That raises the question: bias against what? Too many tests in the marketplace have no reference method - no true or traceable value in effect - and the differences between methods are significant and troublesome. Even tests that have been on the market for decades still lack standardization. Both camps agree that this is a real problem, and that the solution is to press for more standardization and harmonization. But in the meantime, an embrace of measurement uncertainty overlooks the inconvenient reality of bias. In the real world of the laboratory, bias exists, it's not easy to determine, and it's even harder to eliminate. Total error allows you to determine if an existing bias is acceptable or unacceptable.
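As a rough illustration of that last point (again, not part of the original post), the common total error model adds the observed bias to a multiple of the imprecision and compares the result to the allowable total error (TEa) for the test; the numbers below are made up, and the choice of multiplier varies by laboratory.

```python
def total_error(bias, sd, z=1.65):
    """Total analytical error: |bias| plus a multiple of the imprecision (SD).
    z = 1.65 covers roughly 95% of results on one side; other multiples are used too."""
    return abs(bias) + z * sd

def bias_acceptable(bias, sd, tea, z=1.65):
    """Judge whether the observed bias is tolerable, given the imprecision
    and the allowable total error (TEa) defined for the test."""
    return total_error(bias, sd, z) <= tea

# Illustrative (made-up) figures: bias 2.0 units, SD 1.5 units, TEa 6.0 units
print(total_error(2.0, 1.5))           # 4.475 -> within the allowable total error
print(bias_acceptable(2.0, 1.5, 6.0))  # True
```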
The metrological purism approach is attractive. It's rigorous. There's a foundation of science behind it. It eliminates any problems with the theory by requiring the elimination of the problems.
The total error approach is also attractive. It's practical. It works with the problems laboratories face in the real world and doesn't require them to vanish in order to work. There's also a significant body of work behind the use and application of total error in the laboratory.
Do measurement uncertainty and total error both have a place in managing analytical quality in the laboratory? Is coexistence possible? For manufacturers and reference laboratories, measurement uncertainty can make a valuable contribution in understanding the sources of variation in a process, thereby facilitating the reduction of errors and the improvement of quality. In medical laboratories, total error can assist the analysts and technologists in making pragmatic choices about methods, performance and quality control. In the doctrine of ISO, however, medical laboratories are required to determine the uncertainty of results “where relevant and possible.” Those labs seeking ISO accreditation will have to comply with those standards, practical or not.
The greater tragedy is that while we argue over the best approach to tackling error in the laboratory, the laboratory is capsizing under proliferating errors. Too many labs, particularly in the US, have become de-skilled and de-sensitized to errors in their processes. They have no concept of the performance of their testing processes - or of how good performance should be for their tests. Two main features dominate the laboratory landscape - speed and cost. If a test is cheap and fast, the quality is often not examined critically. As for quality control? Many laboratories are performing compliance QC, which is usually inadequate, or possibly the labs are doing Equivocal/Eliminated QC (known as EQC). For them, total measurement uncertainty is a reality. No one knows the performance of methods, not the labs, nor the clinicians, nor the patients, and certainly not metrology purists or total error proponents.
I am not an expert in this so please bear with me. Why must one choose either/or? It seems as though the clearly best option is to measure both at production stages and in total at the end. I'm currently building a wooden deck and we would be fools to put away the level, tape measure, and spacing tool mid-job. But it must also fit in total at the end too. Amateur carpenters rebel against the constant measuring but they either learn to do it or they hire someone who does. Seems like carpenters may work under more stringent QC standards than crime labs or doctors if I'm really understanding this debate.
Posted by: Thomas Westgard | July 31, 2009 at 09:14 PM
"Too many labs, particularly in the US, have become de-skilled and de-sensitized to errors in their processes."
I don't think this is at all a problem particularly limited to the US.
Here in the UK, I'm yet to meet another Health Care Scientist who understands what a Power Function Curve is and why it's so important, let alone using different QC rules (properly) to ensure different Total Errors.
In fact, at a recent EQA user-group meeting, the audience of some 100 UK laboratory professionals were asked who uses Total Error in their labs. Not a single hand was raised.
Posted by: andybiochem | August 24, 2009 at 06:46 AM