Posted by Sten Westgard, MS
Here is the next set of questions that were submitted during the first webinar to India.
From Indonesia: "In my country we still use three sigma management, so how do we know the exact cause of every lapse of the "Westgard Rules"?"
It's interesting that you characterize your management at a particular sigma level. I would argue that regardless of your instrument, you can only practice the level of management that your instrument provides. If you have a three sigma instrument, you can't practice six sigma management. Also, regardless of the sigma of the instrument, violations of the "Westgard Rules" are still helpful in indicating what type of error is causing the problem. A 1:3s or R:4s violation is likely to be caused by a random error. A 2:2s, 4:1s, or 8:x violation is likely to be caused by a systematic error. The "Westgard Rules" are not omniscient - they cannot tell you exactly what the source of the error is, but they can get you started in the right direction.
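The rule logic described above can be sketched in a few lines of code. This is a minimal illustration, not an official implementation: it checks two of the rules on a series of z-scores (QC results expressed as standard deviations from the mean), and the data values are hypothetical.

```python
# 1:3s - any single result beyond 3 SD; often signals random error.
def violates_1_3s(z_scores):
    return any(abs(z) > 3 for z in z_scores)

# 2:2s - two consecutive results beyond 2 SD on the same side of the mean;
# often signals systematic error.
def violates_2_2s(z_scores):
    for a, b in zip(z_scores, z_scores[1:]):
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            return True
    return False

qc = [0.5, -1.2, 2.3, 2.6, 0.1]   # hypothetical z-scores from daily QC
print(violates_1_3s(qc))  # False - no single point beyond 3 SD
print(violates_2_2s(qc))  # True - 2.3 and 2.6 both exceed +2 SD
```

In a real QC system these checks would run against control results converted to z-scores using the established mean and SD of each control level.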
From Hyderabad: "What is the Sigma-metric?"
The Sigma-metric is a number that gives you an estimate of the analytical performance of the test method. Six Sigma is considered world class performance and has nearly zero defects out of a million reportable results. Three Sigma is considered the minimum acceptable performance in many industrial and manufacturing applications, and generates close to 67,000 defects out of a million reportable results. Knowing the Sigma-metric helps you understand the confidence you can place in the results, as well as giving guidance on what rules to use, how many controls to run, and (soon) how often you need to run QC. There is a wealth of information about Six Sigma on Westgard Web, as well as many other places on the internet.
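For readers who want the arithmetic, the standard Sigma-metric calculation is Sigma = (TEa - |bias|) / CV, with all terms expressed in percent. The sketch below uses hypothetical numbers for illustration.

```python
# Sigma-metric: (allowable total error - |bias|) / imprecision, all in %.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: a method with a 10% TEa goal, 1% bias, 1.5% CV.
print(round(sigma_metric(10.0, 1.0, 1.5), 1))  # 6.0 - world class performance
```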
From India: "What should we do when we have positive or negative bias?"
From a strict metrological perspective, any time you discover you have bias, you are supposed to eliminate it. For the utility of measurement uncertainty propagation, you cannot have bias present. However, in the real world, some bias can exist without harming patients, so it's a matter of determining the allowable bias. We encourage the use of the Sigma-metric, and if the bias is causing the Sigma-metric to fall to an unacceptable level, then consider recalibrating the method, or finding out what is causing the bias, shift, or drift. This may mean getting a new set of calibrators, or a better reagent lot, or getting some maintenance done - there are many reasons why an instrument might develop a bias.
From Jakarta: "Can I use bias and CV from my daily QC for calculating the Sigma-metric?"
My sense of the question is that the answer here is Yes - your Sigma-metric should be based on your daily performance. You can always calculate your CV from your daily QC. Getting bias from your QC may involve joining a peer group, or it may involve using an assayed control. The bias calculated from a peer group is better than a bias calculated against the assayed/target mean, but either estimate is better than nothing. Ideally, we would like to get bias from a comparison against a reference method, reference material, or a PT/EQA survey that is accuracy-based. Those ideal options are often completely out of reach for the usual laboratory. So using peer group or assayed means to determine bias is a practical alternative.
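The practical approach described above can be sketched as follows: estimate bias by comparing your observed QC mean against the peer-group (or assayed/target) mean, then feed it into the Sigma-metric. All values here are hypothetical.

```python
# Bias estimated from daily QC against a peer-group or assayed/target mean.
def bias_pct(lab_mean, comparison_mean):
    return abs(lab_mean - comparison_mean) / comparison_mean * 100

# Sigma-metric as usual: (TEa - |bias|) / CV, all in %.
def sigma_metric(tea_pct, bias, cv_pct):
    return (tea_pct - bias) / cv_pct

b = bias_pct(102.0, 100.0)   # lab mean 102 vs peer mean 100 -> 2.0% bias
print(round(sigma_metric(10.0, b, 2.0), 1))  # 4.0
```

Note that the quality of the Sigma-metric estimate depends on the quality of the bias estimate, which is why a peer-group comparison is preferred over the assayed/target mean when available.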
From Indonesia: "Is it common to use Six Sigma QC Design in hematology QC? How many % are they?"
This is an interesting question. The fact is we don't really know how many labs are implementing Sigma-metrics in either their chemistry or their hematology. Our surveys have indicated somewhere between 10 to 25% are using Sigma-metrics in their operations. So it's still a small minority of labs, at best. It certainly is more common in chemistry than hematology. Many of the hematology analyzers are not even set up to display the proper QC charts, so the very foundation on which you implement Sigma-metrics isn't present. The short shelf life of hematology controls is another complicating factor. Finally, the TEa goals for hematology are tighter, so getting to Six Sigma may therefore be more of a challenge. We have posted numerous case studies of hematology instruments, but from what we hear, it is less likely that a lab implements Sigma-metrics in hematology than in chemistry. But despite all that, we know labs do implement them on their hematology methods - we even have a chapter on it in one of our recent publications.
From Delhi: "We actually don't know how to implement practically Six Sigma in our lab? Is there any formula to successfully implement it?"
Of course, there is a Six Sigma formula, but I take this question as metaphorical, not literal. The Six Sigma QC Design textbook we publish contains some helpful discussions about how to implement the technique. The Basic Quality Management Systems book we publish discusses how to make system-wide changes to the laboratory. And finally, our Sigma VP program is designed to help labs execute a step-by-step implementation of the approach, with the final result being verified by our own evaluation. So I do hope that we have provided not only the literal formula, but the metaphorical formula as well.
From Jaipur, India: "Are Total Allowable Error published by CLIA 88 acceptable? Should we incorporate them in our calculations?"
This is a very common question. CLIA criteria are thought of as "too wide" for many applications, but that judgment is often made in the context of only trying to achieve the equivalent of 2 or 3 Sigma. When you try to use CLIA goals for Six Sigma, you find that these CLIA goals aren't all that wide. Nevertheless, there are some CLIA goals that may be too wide for today's high performing methods, and it may be possible to use tighter goals. This is something we are now doing with our Sigma VP program: evaluating where CLIA goals remain practical, and adopting tighter alternatives such as the Ricos (biologic variation) or RCPA goals when method performance is quite good.
I guess the short answer to this question is "Yes, use the CLIA goals when you start your Sigma-metrics" but in the future, you will probably want to use goals from multiple resources, not just from CLIA.
From Bahrain: "If the testing volume is low, can we run only one level of control?"
The number of controls you should run is based not on your testing volume, but on your testing performance. However, there are tools in press this year that will help you base how often you run those controls on the Sigma-metric and adjust it for the patient volume. So a Six Sigma method may be capable of running just a few controls (2 or 3) for every 1,000 patients (or even more). But a Three Sigma method may need 4 or 6 controls, and all the Westgard Rules, and may need a frequency of once every 50 or 100 (or even fewer) patients. So a lab with a lower volume of testing, when it has a high Sigma method, may be able to reduce its QC frequency. However, there is usually a minimum QC frequency imposed by regulations. For example, CLIA in the US mandates for chemistry methods a frequency of at least two controls once per day. So even if we have performance that's quite good, we'll still end up running QC once per day. There is the possibility of implementing an IQCP, which gives you the justification for reducing frequency to less than once a day, but many methods will no doubt default to once a day, because the methods are not stable for longer than that time period (they require start-up, calibration, etc. once per day).
From Mumbai: "How does Sigma impact the number of controls run and the number of rules implemented?"
This is an easy question to answer. The higher your Sigma, the fewer controls you need, the fewer rules you need, and the less often you have to run QC. The lower your Sigma, the more controls you need, the more rules you need, and the more often you will need to run QC. Our QC application section has more than three dozen case studies of this.
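The pattern described above can be illustrated with a toy lookup. This is a simplified, hypothetical sketch only, loosely following common Westgard-style recommendations (higher Sigma, fewer rules and controls); an actual QC design should come from power-function or OPSpecs chart analysis, not from a table like this.

```python
# Illustrative only: map a Sigma-metric to a candidate QC design.
# Rule lists and control counts are simplified assumptions for this sketch.
def suggest_qc_design(sigma):
    if sigma >= 6:
        return {"rules": ["1:3s"], "n_controls": 2}
    if sigma >= 5:
        return {"rules": ["1:3s", "2:2s", "R:4s"], "n_controls": 2}
    if sigma >= 4:
        return {"rules": ["1:3s", "2:2s", "R:4s", "4:1s"], "n_controls": 4}
    # Below 4 Sigma: full multirule QC with more controls.
    return {"rules": ["1:3s", "2:2s", "R:4s", "4:1s", "8:x"], "n_controls": 6}

print(suggest_qc_design(6.2))  # {'rules': ['1:3s'], 'n_controls': 2}
```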
From Indonesia: "How many times should we calculate Sigma?"
Once the Sigma-metric is implemented and performance is stable, you generally can review it quarterly or every six months. For our Sigma VP labs we recommend looking at it every 6 months and we mandate that we verify their performance once a year. Of course, common sense says that if you have problems with your method, you should recalculate and re-evaluate the Sigma-metric as well. Whenever there is an out-of-control event on a method, you should evaluate whether or not the Sigma-metric needs to be re-established. Whenever major maintenance is done, that's also a time to re-calculate the Sigma-metric. So while you might as a rule only need to evaluate it every quarter or six months, there may be circumstances which require it to be calculated more frequently.