Posted by Sten Westgard, MS
Sunday, July 15th marked the 37th (or 38th, depending on how you count) consecutive session of the Concepts and Practices in Method Evaluation workshop. This is the workshop started long, long ago by "the" Westgard and colleagues like Dr. Neill Carey, Dr. Carl Garber, and others. Most of the "founding fathers" of this workshop are still active in the field, but have "retired" from the workshop. It's still the longest-running workshop that AACC offers at the conference - proof that this remains a chronic need in the laboratory.
For the last three years, I have had the honor and pleasure of being a small part of this workshop. I am lucky to be working with two extremely knowledgeable colleagues, David Koch of Grady Memorial Hospital and Emory University, and Dan Hoefner of Health Diagnostic Laboratory of Richmond, Virginia. They have a deep well of real-world knowledge to match the theoretical structures of method evaluation.
Earlier this year we had a great example of why method evaluation skills are still needed. Vitamin D methods are hot, but figuring out which method is good is tough. A paper in Clin Chem noted:
“The use of summary statistics in method comparisons, particularly the correlation coefficient…can disguise a marked variance in the results…. [E]rrors undermine the confidence in the veracity of all results.”
From: State-of-the-Art Vitamin D Assays, by Farrell et al., in Clinical Chemistry
Many people are still using the "r" as a single statistical judgment on acceptability. For decades, the method evaluation concepts workshop has been trying to eradicate this practice. Good to see that the literature occasionally agrees.
Thanks to all who attended this year's Method Evaluation workshop. We hope to see more of you next year.