Posted by Sten Westgard, MS
Time for a few questions!
I was just reviewing some of the poster abstracts from the 2011 AACB (that's the Australasian Association of Clinical Biochemists) 49th annual conference and came across three posters in a row describing new HbA1c methods.
See if you can tell which is which.
1. Match the method data (A, B, C) to the abstract's conclusion (X, Y, or Z):

| Data | Matches to? | Study conclusion |
|---|---|---|
| A. CV of 0.6%, bias of 3.5% at 6.5% HbA1c | | X. "The [method] gives acceptable assay precision." |
| B. CV of 3.5%, bias of 11.8% | | Y. "Both the precision and bias are within the... allowable limits of performance." |
| C. CV of 2.6%, bias of 1.6% | | Z. "[A]nalytical performance is good and satisfies essential performance criteria." |
Want to see the answers? (And more questions?) After the jump...
Here's the rest of the quiz, with answers following below.
2. Match the source of quality requirements with the specification set for HbA1c.
| Source of Quality Requirement | Matches to? | Quality Specification |
|---|---|---|
| A. CLIA PT criteria | | V. No quality specified |
| B. CAP/NGSP 2012 goal | | W. ±7% |
| C. Rilibak EQA goal | | X. ±18% |
| D. Desirable specification for allowable total error based on within-subject biologic variation | | Y. ±3% |
| E. RCPA QAP goal | | Z. ±0.5 %HbA1c (concentration units) when the concentration is < 10%; ±5% when the concentration is > 10% |
3. If the Desirable specification for allowable total error is used as the quality requirement for these 3 HbA1c methods, how would we judge the methods?
- A. All methods are acceptable
- B. No methods are acceptable
- C. Method A is acceptable, but Methods B and C are not
4. If the Rilibak specification for EQA is used as the quality requirement for these 3 methods, how would we judge the methods?
- A. All methods are acceptable
- B. No methods are acceptable
- C. Method A is acceptable, but Methods B and C are not
5. If the CAP/NGSP specification for 2012 is used as the quality requirement for these 3 methods, how would we judge the methods?
- A. All methods are acceptable
- B. No methods are acceptable
- C. Method A is acceptable, but Methods B and C are not
6. Based on the findings here, what can we conclude?
- A. Method validation studies usually conclude that the methods are acceptable, regardless of actual performance
- B. Quality requirements vary widely depending on the source chosen
- C. Different quality requirements will produce different judgments of method acceptability
- D. Care must be taken in evaluating method validation data found in the literature - and care must be taken when calculating Sigma-metrics
- E. All of the above
Answers:
1. The rows match straight across, exactly as printed: A goes with X, B with Y, and C with Z.
2. Again, the rows match straight across, exactly as printed: A with V, B with W, C with X, D with Y, and E with Z.
Questions 3 through 5: 3.B, 4.A, 5.C. Using the performance data and quality requirements, we can calculate Sigma-metrics [Note: we used imprecision estimates near 6.5% HbA1c and calculated bias at that level, using the regression equations provided in the studies]. A small sketch of the calculation follows the table below:
| Method | CAP/NGSP Sigma-metric | Desirable spec for TEa from Biologic Variation: Sigma-metric | RCPA Sigma-metric | Rilibak Sigma-metric |
|---|---|---|---|---|
| A | 5.8 | cannot meet this goal | 7.0 | 24.1 |
| B | cannot meet this goal | cannot meet this goal | cannot meet this goal | 1.8 |
| C | 2.1 | 0.5 | 1.8 | 6.3 |
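For readers who want to check the arithmetic, here is a minimal sketch of the Sigma-metric calculation: Sigma = (TEa - |bias|) / CV, with everything expressed in percent at the decision level. The CV and bias inputs below, and the conversion of the unit-based RCPA goal to a relative goal at 6.5% HbA1c, are simplifying assumptions on my part, so the output approximates rather than exactly reproduces the table above (the posters' regression-based bias estimates are not reproduced here).

```python
# Illustrative Sigma-metric check for the three HbA1c methods above.
# Sigma = (TEa% - |bias%|) / CV%, all expressed in percent near 6.5% HbA1c.

methods = {  # (CV%, bias%) near 6.5% HbA1c, taken from the poster data above
    "A": (0.6, 3.5),
    "B": (3.5, 11.8),
    "C": (2.6, 1.6),
}

tea_goals = {  # allowable total error (TEa, %) from each source
    "CAP/NGSP 2012": 7.0,
    "Desirable (biologic variation)": 3.0,
    "RCPA QAP (0.5 %HbA1c converted at 6.5% HbA1c)": 0.5 / 6.5 * 100,  # ~7.7%
    "Rilibak": 18.0,
}

for name, (cv, bias) in methods.items():
    for goal, tea in tea_goals.items():
        sigma = (tea - abs(bias)) / cv
        verdict = f"{sigma:.1f}" if sigma > 0 else "cannot meet this goal"
        print(f"Method {name} vs {goal}: {verdict}")
```

If the bias already exceeds the allowable total error, the numerator goes negative and there is no Sigma to report, which is why several cells in the table simply say "cannot meet this goal."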
6. E. All of the above. It's a bit disheartening to see such variation, not only in performance of these new methods, but also in the quality requirements from different regulators and sources. The only thing that seems to be uniform is that the study authors always seem to conclude that the method they studied is acceptable (this is sometimes known as confirmation bias - we tend to find what we are seeking).
Let's hope you didn't fail the quiz as badly as some of these methods fail in their performance!
Data sources:
From the 49th annual AACB conference - poster abstracts:
- P67. Evaluation of Bio-Rad Variant II Turbo 2.0 HbA1c kit
- P68. Analytical Evaluation of the Quo-test HbA1c analyser
- P69. Cobas C111 - Evaluation of HbA1c performance and suitability of use in a clinical setting
I have also noticed, even at AACC, that these types of poster presentations usually conclude that the methods perform well regardless of their actual performance. In some cases it may be because the studies are sponsored by the manufacturer of the method being evaluated. The Rilibak EQA goals are quite wide, probably due to their use of processed materials in their surveys; there is a lot of variability due to matrix effects. Not very useful.
Posted by: Randie Little | January 24, 2012 at 03:28 PM