Posted by Sten Westgard, MS
Attending the AACC/ASCLS convention predictably results in one frustration. Walking through the poster sections, you find that many of the method validation abstracts report only within-run precision studies.
What's so bad about that?
Repeat after me: it's not about the repeatability...
Just to recap, there are a number of different precision estimates you could report:
- Within-run imprecision. Sometimes called repeatability. Without question, this is the easiest study to conduct: within a single run, you measure 10 to 20 replicates of the same sample.
- Total imprecision. Sometimes called intermediate precision. This estimate is typically the result of a longer study, like the one described by the CLSI EP5 guideline, perhaps 2 runs a day for 20 days.
- Routine, historical imprecision. Sometimes called the cumulative coefficient of variation (cumulative %CV). This is performance measured over a longer period of time, usually summarizing several months of routine control data. The CLSI C24 guideline recommends three to six months of routine data for the %CV calculation. (A short calculation sketch follows this list.)
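For readers who want to see the arithmetic, here is a minimal Python sketch of the calculations. The %CV formula (100 × SD / mean) is standard, but the control values, analyte, and month labels below are made up purely for illustration:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation as a percentage: 100 * SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Within-run imprecision: repeated measurements of one control material
# in a single run (made-up glucose-like values, mg/dL).
within_run = [98.2, 99.1, 97.8, 98.5, 99.0, 98.7, 97.9, 98.4, 99.3, 98.1,
              98.6, 98.9, 97.7, 98.3, 99.2, 98.0, 98.8, 98.5, 99.1, 98.2]
print(f"Within-run CV: {cv_percent(within_run):.2f}%")

# Cumulative %CV: pool several months of routine QC results for the same
# control level and compute the CV over the whole period.
monthly_qc = {
    "Jan": [98.0, 99.5, 97.2, 100.1, 98.8],
    "Feb": [97.5, 100.3, 98.9, 96.8, 99.4],
    "Mar": [99.0, 97.1, 100.6, 98.2, 99.8],
}
pooled = [result for month in monthly_qc.values() for result in month]
print(f"Cumulative CV over {len(monthly_qc)} months: {cv_percent(pooled):.2f}%")
```

In real data, the pooled multi-month figure is typically larger than the tight within-run figure, because it absorbs the run-to-run, shift-to-shift, and lot-to-lot variation described below.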
The problem with reporting within-run imprecision is that it reflects only a narrow window of actual performance. It doesn't take into account the variation a laboratory will routinely encounter across runs, shifts, reagents, control lots, and even the changing operating conditions between day and night. So it often provides a very optimistic number.
That doesn't mean you can't make use of a within-run imprecision estimate. You can use that number to rule out a method: if the within-run %CV is unacceptable, the imprecision is probably not going to be any better in routine operation. But you can't assume that a good within-run number means the method is ultimately acceptable.
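To make that asymmetry concrete, here is a hypothetical screening check. The `ALLOWABLE_CV` value and the `screen_method` helper are assumptions for illustration, not a published quality requirement:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation as a percentage: 100 * SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical allowable imprecision for the test, in %CV.
# An assumed quality requirement for illustration, not a published spec.
ALLOWABLE_CV = 2.0

def screen_method(within_run_results):
    """Rule-out check only: failing rejects the method, but passing does
    NOT accept it, because routine CV will usually be higher."""
    cv = cv_percent(within_run_results)
    if cv > ALLOWABLE_CV:
        return (f"Reject: the best-case (within-run) CV of {cv:.2f}% "
                f"already exceeds {ALLOWABLE_CV}%.")
    return (f"Not rejected (within-run CV {cv:.2f}%), but a longer study "
            "is still needed before the method can be judged acceptable.")

# Made-up replicate results for illustration.
print(screen_method([98.2, 99.1, 97.8, 98.5, 99.0,
                     98.7, 97.9, 98.4, 99.3, 98.1]))
```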
The frustration comes from the fact that this is a known weakness of within-run studies. What's even worse, many of these studies are funded or performed by very large diagnostic manufacturers: organizations that have the time and resources to conduct longer studies, and that know a within-run estimate isn't sufficient.
The whole point of a method validation study is to provide evidence that a method is acceptable. Skimping on the precision study diminishes the usefulness of the results for everyone who might want to rely on them.
Admittedly, it is understandable that some of the laboratories conducting these studies have limited resources and little time to produce the estimate. But most of these studies end up being backed by a manufacturer, which should be able to fund a longer study.
Also, it is certainly understandable that diagnostic manufacturers want to present the most optimistic numbers. Every manufacturer wants to put its best foot forward. But posters and abstracts are supposed to be scientific articles, not marketing pieces. Even with cars, we've at least reached the stage where carmakers have to report estimated city and highway miles per gallon. Can't our scientists provide that level of service?
Here's the question I'd like to pose: If manufacturers spend years researching, engineering, and building a brand-new instrument, shouldn't the study that demonstrates the new system's performance take more than a single day?
End of rant. Apologies for the curmudgeonly tone. (I’m beginning to sound like my father.)