Posted by Sten Westgard, MS
Two members posted questions to us about the 4:1s control rule. The first question is about HOW to interpret the 4:1s rule, and the second question is about WHEN to interpret the 4:1s rule.
Let's get to the first question, and look at it graphically. Do you think this is a violation of the 4:1s rule?
It's hard to see the difference in this next graph, I know. But they are slightly different points. Does that change the picture at all? How about this?
Finally, in this graph, it's clear. That point is indeed above the line, the data is exceeding the 1s control limit, and the 4:1s control rule has been violated.
Now here's the truth. In these three scenarios, the z-values for the last data point were 1.01, 1.03, and 1.2, respectively. By the z-values, we know that all of the data points were in fact out past the 1s limit.
The basic answer to the question of how to interpret the 4:1s rule is that each data point must exceed 1s. If the z-value is exactly 1.0, that data point is at 1s but not exceeding it, so it would not be considered a violation of the 4:1s control rule. This might seem a little like splitting hairs, or arguing about how many angels fit on the head of a pin, but remember that software programs are built on explicit rules like this. To a charting program, there is a big difference between 1.01 and 1.0.
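To make the "strictly exceeds" point concrete, here is a minimal sketch of how a charting program might evaluate the 4:1s rule. The function name and structure are illustrative assumptions, not taken from any real QC package; the z-values are assumed to be computed already (result minus mean, divided by SD).

```python
def violates_4_1s(z_values):
    """Return True if the last 4 consecutive z-values all exceed +1
    (or all fall below -1). A point exactly AT 1.0 does not count,
    because the rule requires strictly exceeding the 1s limit."""
    if len(z_values) < 4:
        return False
    last4 = z_values[-4:]
    return all(z > 1.0 for z in last4) or all(z < -1.0 for z in last4)

print(violates_4_1s([1.01, 1.03, 1.2, 1.5]))  # True: all strictly exceed 1s
print(violates_4_1s([1.0, 1.03, 1.2, 1.5]))   # False: one point sits exactly on 1s
```

Note the strict `>` comparison: changing it to `>=` is exactly the "splitting hairs" difference discussed above.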
However, think of the data rounding that probably takes place in your software, whether it's on the instrument or on an LIS or middleware program. Would your software round a 1.01 down to 1.0? Would it round 1.03 down to 1.0? How about a result of 1.2 - would that get rounded down as well?
So, to answer the first question: if you have a data point that is on the 1s control limit, it's not a violation of the 4:1s rule. However, if you even suspect that there is significant data rounding, you might want to treat a data point that's "on the line" as a data point that might be exceeding that limit.
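The rounding concern is easy to demonstrate. This toy example (the values are the z-values from the scenarios above) shows how rounding to one decimal place can make points that truly exceed 1s appear to sit exactly on the limit, hiding a genuine 4:1s violation:

```python
# Four consecutive z-values, all strictly beyond the 1s limit at full precision.
raw = [1.01, 1.03, 1.2, 1.04]

# Simulate software that rounds z-values to one decimal place.
rounded = [round(z, 1) for z in raw]
print(rounded)  # [1.0, 1.0, 1.2, 1.0] -> three points now sit ON the 1s line

strict = all(z > 1.0 for z in raw)      # True: a real 4:1s violation
after = all(z > 1.0 for z in rounded)   # False: rounding masks the violation
print(strict, after)
```

This is why a point that displays as exactly "on the line" deserves suspicion when you know rounding happens upstream.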
Now the next question: WHEN do we use the 4:1s rule?
In the manual implementation of the "Westgard Rules", a "2s warning rule" must be violated before you begin using the other rules. In other words, if the "2s warning" has not gone off, you never even look for 4:1s violations.
But recall that in 1981, when the original formulation of the "Westgard Rules" came out, QC was mainly being done by hand. It was laborious and time-consuming, and the warning rule helped free up tech time for other things. These days, it's a different story: QC is routinely handled by software, and software implementations of QC charting don't mind checking all the rules all the time.
Thus, for today's automated, modern instruments, we encourage implementing a "Westgard Rules" combination that doesn't use a "2s warning rule." In fact, don't use a warning rule at all; just implement and check all the rejection rules continuously. [You can download worksheets with these "Westgard Rules" recommendations.]
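The "check all rejection rules continuously, no warning gate" approach can be sketched as follows. This is a simplified illustration with assumed rule thresholds, not any vendor's actual implementation; it evaluates a few common rejection rules on the most recent z-values for a single control material, with no 2s warning required first.

```python
def check_rules(z):
    """z: consecutive z-values for one control material, newest last.
    Return a list of the rejection rules violated on the latest run.
    Every rule is checked on every run; there is no 2s warning gate."""
    violated = []
    if abs(z[-1]) > 3.0:
        violated.append("1:3s")
    if len(z) >= 2:
        if all(v > 2.0 for v in z[-2:]) or all(v < -2.0 for v in z[-2:]):
            violated.append("2:2s")
        if max(z[-2:]) - min(z[-2:]) > 4.0:  # range rule across the last two values
            violated.append("R:4s")
    if len(z) >= 4:
        if all(v > 1.0 for v in z[-4:]) or all(v < -1.0 for v in z[-4:]):
            violated.append("4:1s")
    return violated

print(check_rules([0.2, 1.1, 1.4, 1.2, 1.3]))  # ['4:1s'] - no 2s warning ever fired
```

The last line is the point: none of these values ever exceeded 2s, so a manual "2s warning" workflow would never have looked for this 4:1s violation at all.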
One final note, though: some tests perform so well (better than Six Sigma) that they don't actually need the 4:1s rule. So in addition to recommending that you don't wait for a "2s warning" before using a 4:1s rule, we also advocate that you design your QC and use only the control rules necessary to achieve the quality required by the test. For some tests, a single rule is enough, and no 4:1s rule needs to be used.
Keep those questions coming in!
WAIT-UP DECISION
Hassan Bayat; DCLS; Sina Lab
June 5, 2013
Based on the thoughts from the two Westgards, when a look-back rule is broken, e.g. a between-run 4:1s rule, it means that a systematic error began four runs earlier and has continued until now. If such a systematic error makes the last run out-of-control, then the previous 3 runs have to be judged out-of-control too, and we SHOULDN'T have released those previous 3 runs!
Therefore, I think we'd do better to call across-run rules "WAIT UP" rules and treat them accordingly. Meaning: when we are using a look-back rule and encounter the first point violating the limit, we have to suspend reporting that run until the QC results from the coming runs are compiled and we are able to decide whether the look-back rule is broken or not.
Example:
Suppose we are using a multirule SQC procedure containing the 41s rule. If the C2 result in run 12 is at 1.3s, we have to suspend reporting the 12th run and wait for the coming QC results; i.e. our decision must be "WAIT UP!" Then, after performing the coming runs, we are able to decide whether to accept or reject the 12th run. If:
- 13th run: C2 > 1.5s; Decision: still wait, and ALSO HOLD the 13th run!
- 14th run: C2 < -1.6s; Decision: the 41s rule isn't broken for the 12th and 13th runs; report those runs, but hold the 14th run pending the coming runs!
- AND SO ON!
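The run-by-run logic above can be sketched in code. This is only an illustration of the commenter's "WAIT UP" idea with hypothetical names; for brevity it tracks streaks on the positive side of the 1s limit only (a full version would track both directions symmetrically):

```python
def wait_up_decisions(z_values):
    """For each run's C2 z-value, decide 'report', 'wait', or 'reject'
    under a 41s look-back rule. A run is held ('wait') while the streak
    of z > +1 values is still open; when a 4th consecutive violation
    arrives, all held runs are rejected; when the streak breaks, all
    held runs are released for reporting."""
    pending = []     # indices of runs held while the streak is open
    decisions = {}
    streak = 0       # consecutive runs with z > +1 (positive side only)
    for i, z in enumerate(z_values):
        if z > 1.0:
            streak += 1
            pending.append(i)
            if streak >= 4:
                for j in pending:
                    decisions[j] = "reject"  # 41s broken: reject all held runs
                pending, streak = [], 0
            else:
                decisions[i] = "wait"
        else:
            for j in pending:
                decisions[j] = "report"      # streak broken: release held runs
            pending, streak = [], 0
            decisions[i] = "report"
    return decisions

# The example above: runs 12-14 as z-values 1.3, 1.6, -1.6.
print(wait_up_decisions([1.3, 1.6, -1.6]))   # all three end up reported
print(wait_up_decisions([1.1, 1.2, 1.3, 1.4]))  # 41s broken: all four rejected
```

Any run still marked "wait" when the function returns is exactly the "WAIT UP!" state: its fate depends on runs not yet performed.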
What if we have to hurry?
We have two options:
1) When our QC is outside a look-back limit, we perform the next run(s) immediately, even if only to run the control(s), until we reach a point where we can judge the first violated run. Of course, such an approach could be laborious and expensive.
2) An alternative solution could be a kind of multi-design SQC, based on the status for which SQC is being used:
- One design for routine status, using look-back rules; and
- One design for STAT status, without look-back rules.
For example:
- “13s/22s/R4s/41s; N=2; R=2” for routine application, and
- “13s/22s/R4s/41s; N=4; R=1” for STAT application.
Maybe, after "Multi-level SQC" and "Multi-stage SQC", the approach described here could be a third kind of multi-design SQC, and could be called a "Multi-status" design.
Posted by: Hassan Bayat | June 17, 2013 at 06:58 AM