Posted by Sten Westgard, MS
Now that we know EQC will officially be phased out and that labs will instead have to develop QC Plans through Risk Analysis (as explained in CLSI's new guideline EP23A), some of the waiting is over. EQC, which was fatally flawed from the start, is going to go away.
However, the exact regulations about QC Plans and Risk Analysis have yet to be written (or, at least, are not yet known by the general public). What makes this more uncertain is that EP23A is only meant as a guideline, and the Risk Analysis approach discussed in the guideline is only meant as a possible example. Risk Analysis is a long-established technique (outside the medical laboratory) and has many different formats and levels of complexity. Even between EP18 and EP23, there are discrepancies between the Risk Analysis recommendations (EP18 recommends a 4-category ranking of risk, while EP23 recommends a 5-category approach).
So while we're waiting for the other shoe to drop (in the form of detailed regulations and accreditation guidelines governing Risk Analysis), we might as well talk about what questions those rules will have to answer...
1. What is the minimum composition of a Risk Analysis project?
The typical approach described in the literature involves forming a committee, mapping out a process, identifying hazards and failure modes, scoring those risks, prioritizing the risk scores, making improvements, remediations, and/or corrective actions, and finally re-assessing the risks and determining the acceptability of those residual risks. So how much of this process will need to be done? How much of this will need to be documented, and in what formats (on paper? stored electronically?) For example, if a laboratory claims it has performed Risk Analysis, but upon inspection there is no process map, is this a violation? Does this invalidate any QC Plan that is in use?
2. What is the minimum size of the committee that performs Risk Analysis?
In the literature on Risk Analysis, the make-up of the committee or group that performs the analysis is critical. It's supposed to have a diversity of roles and experience. For example, a group might want to have a clinician as well as a technician, so that the patient needs as well as the analytical details are covered. But not every laboratory will be able to have a large committee, so how small can it get? If the laboratory director sits alone in his or her office and makes all these decisions, is that acceptable? Similarly, if the laboratory director delegates these tasks to a "committee of one" (say, a Point-of-Care Coordinator) to do all this work, is that acceptable? Will Risk Analysis conducted by a single person be considered in compliance?
3. What is the minimum number of hours that should be spent performing Risk Analysis?
Does Risk Analysis take five minutes, five hours, or five days to complete? If one hospital lab spends several days performing Risk Analysis, but an office lab spends just an hour, will both of them be in compliance? Do the hours matter less than the final documentation (process maps, risk scores, remedial actions, etc.)?
4. What is the minimum number of failure modes?
We expect that individual laboratories will vary in the hazards present in their processes. An office lab will have different failure modes than a core laboratory in a large hospital, even for the same process or instrument. But there must be some overlap - some failure modes and hazards are surely present in all settings.
If one laboratory, through its Risk Analysis, identifies only one failure mode for a particular process, will that be considered a violation? If it is known that other labs have found more failure modes with this process, is the laboratory with fewer failure modes still in compliance?
5. What is the minimum number of risk factors that must be considered in Risk Analysis? 3, 2, 1?
Outside of healthcare, sophisticated Risk Analysis involves three dimensions or factors: Severity, Occurrence, and Detection. However, the CLSI guidelines show examples that drop the third dimension (Detection). There is a debate over whether QC, which is about detecting errors, should use a process that doesn't explicitly take detection into account. As stated earlier, the CLSI guidelines show examples that are not binding - so the fact that their examples recommend 2-factor models does not prevent laboratories from using 3-factor models. We assume that CMS and CLIA will synchronize with the CLSI guidelines and mandate 2-factor models, but it is possible that a more rigorous standard could be enforced by the regulations.
But going in the opposite direction for a moment, what if a laboratory does less than a 2-factor model? What if a laboratory uses only one dimension in its analysis, or even just a binary state ("Yes, this risk is controlled" vs. "No, this risk is not controlled")? Will that still be in compliance?
6. What is the minimum number of categories (ranking scale) within each factor? 3, 4, 5, 10?
Different Risk standards recommend different numbers of categories for each risk factor. A 3-step ranking scale is considered qualitative. A 5-step ranking scale is considered semi-quantitative. A 10-step ranking scale is considered quantitative. The CLSI EP23 guideline shows an example with a semi-quantitative ranking scale, but we have heard some experts recommend qualitative ranking scales when you are just starting out. EP18 is out of step with EP23 in that it recommends a 4-step ranking scale, similar to the specialized Healthcare FMEA (HFMEA™) created by the VA.
Will the regulators and accreditation agencies mandate a minimum scale? If a laboratory doesn't rank the risk factors on the appropriate scale, will that be a violation? Like the example immediately above, if a laboratory uses just a binary scale ("Yes, this risk happens" vs. "No, this risk doesn't happen"), will it still be in compliance?
It would be strange if the forthcoming regulations put into place a sophisticated framework for Risk Analysis, but then dumb the rules down to a simplistic level of Yes/No.
7. What is the minimum acceptable risk score (RPN or criticality)?
Once we've got regulatory guidance on the number of factors and the ranking scales, we can start calculating the risk scores. If we're using 3 factors, our Risk Priority Number (RPN) will be Severity * Occurrence * Detection. Depending on which ranking scale we use (3, 4, 5, or 10 steps), the RPN might range from 1 up to 27, 64, 125, or 1,000. If we're using 2 factors, the criticality will be Severity * Occurrence, and that number might range from 1 to 9, 16, 25, or 100. Within each scoring system, the higher the number, the worse the score and the more dangerous the hazard.
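To make the arithmetic concrete, here is a minimal sketch in Python (not taken from EP23A; the factor names simply follow the text above, and the example scores are invented) showing the two scoring models and the score ranges each ranking scale produces:

    # A minimal sketch of the 3-factor and 2-factor scoring models.
    # The example scores below are invented for illustration.

    def rpn(severity: int, occurrence: int, detection: int) -> int:
        """3-factor Risk Priority Number: Severity x Occurrence x Detection."""
        return severity * occurrence * detection

    def criticality(severity: int, occurrence: int) -> int:
        """2-factor criticality: Severity x Occurrence."""
        return severity * occurrence

    # On a 1-to-5 (semi-quantitative) ranking scale:
    print(rpn(5, 3, 4))        # 60 out of a possible 125
    print(criticality(5, 3))   # 15 out of a possible 25

    # Maximum possible scores for 3-, 4-, 5-, and 10-step ranking scales:
    for steps in (3, 4, 5, 10):
        print(f"{steps}-step scale: max RPN = {steps**3}, max criticality = {steps**2}")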
Once we've got the score, RPN, or criticality, though, we're not done. Out of all the failure modes, we now have to decide which ones are so dangerous that we need to take action to fix them. Within the different guidelines, there are few hard-and-fast rules about which score mandates action. One recommendation is that any failure mode with a severity ranking at the highest level (which could be 3, 4, 5, or 10, depending on the scale) should be addressed. Other systems, like the HFMEA, recommend that any failure mode with a hazard score of 8 or higher needs to be improved.
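As a rough illustration only, a screening pass that combines those two rules of thumb might look like the following sketch (the failure modes, their scores, and the 5-step scale are all invented for the example, not drawn from any guideline):

    # Hypothetical screening of scored failure modes: act on anything whose
    # severity sits at the top of the scale, or whose hazard score reaches 8.

    SCALE_MAX = 5          # assuming a 5-step (semi-quantitative) ranking scale
    ACTION_THRESHOLD = 8   # HFMEA-style hazard score requiring improvement

    failure_modes = [
        # (description, severity, occurrence) - invented examples
        ("reagent lot change not verified", 3, 4),
        ("operator skips daily QC",         5, 1),
        ("clotted sample not detected",     2, 2),
    ]

    for description, severity, occurrence in failure_modes:
        score = severity * occurrence
        needs_action = severity == SCALE_MAX or score >= ACTION_THRESHOLD
        print(f"{description}: criticality={score}, action required={needs_action}")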
However, a laboratory might not have the resources to address every failure mode. Or it might decide that their lab, health system, and/or patient population has a higher tolerance for risk, so fewer corrective actions need to be taken. If a laboratory doesn't act on all of its high hazard scores, criticalities, or RPNs, will they still be in compliance?
8. Can a laboratory make use of a "generic" process map, hazard analysis, hazard scoring, etc.?
It is probably already clear that the Risk Analysis process could be time-consuming. Understandably, labs will be tempted to send out an email to colleagues at other institutions: "Hey, does anyone have a process map for the XYZ method on the ABC instrument?" As the Risk Analysis requirements settle in, we have no doubt that labs will share their process maps and other documents. On the one hand, it's great to share wisdom. On the other hand, we want labs to make sure that their Risk Analysis reflects the risks in their own system, not someone else's lab.
So if a laboratory borrows another laboratory's process map, will that alone be considered a violation? Or if a laboratory uses the same list of Failure Modes as another lab, and doesn't add any modes of their own, doesn't change any of the scoring, will that be considered a violation? In other words, how individual does each Risk Analysis need to be?
What if the manufacturer provides an abbreviated process map, or a list of failure modes that could occur? If a laboratory uses this but changes none of the details, is that a violation? Recall that there was going to be a CLSI guideline on how manufacturers should format and provide Risk Information to laboratories. The EP22 committee, however, voted itself out of existence back in 2010. There was too much opposition from the manufacturers - they didn't like the requirement to provide so much information on the possible risks and errors generated by their processes. So the manufacturer may or may not provide this Risk Information - it may depend on how vocal and demanding their customers are. But even if the manufacturer does provide limited Risk Information, will using that information - without making any changes to reflect the local context - allow the laboratory to be in compliance?
9. Can an inspector or other accreditation official cite or challenge the Risk Analysis of a laboratory?
If we pretend for a moment that all the issues from the previous 8 questions have been resolved, there is still a great deal of uncertainty. Setting up some rules and regulations about Risk Analysis is hard enough, but how do you inspect those rules to confirm that Risk Analysis is being done correctly?
For example, if a laboratory has drawn a process map, but the inspector knows they are missing a step in the process, are they out of compliance? What if the inspector disagrees with the process map as drawn by the laboratory - is the laboratory out of compliance?
With failure modes, what if the inspector believes that the laboratory has not identified all the failure modes that exist? Even if the laboratory has done a good amount of work (hit all their minimum quantities of time, members, factors, and ranking scales), there is still a chance that the inspector will disagree with their Risk Analysis. Perhaps the inspector believes there are more failure modes that need to be identified and addressed, or that the ranking of some existing failure modes is incorrect (i.e., the lab is scoring the failure modes too low or too high).
When the inspector disagrees with any part of the Risk Analysis, is the laboratory out of compliance? ("Bad calculation of criticality - perform new analysis of the risks for this process within 30 days"). In the event of such disagreement, who is The Decider - the inspector or the laboratory director?
In the past, with EQC options, an inspector could not second-guess the laboratory director. That is, if a laboratory director decided an instrument was an "Option 1" EQC device, the inspector could not challenge this decision. It will be crucial to know whether the laboratory director and the Risk Analysis committee will again be considered infallible in their decisions, or whether the regulations will empower laboratory inspectors to judge the appropriateness of Risk Analysis decisions.
Earlier this year, we held a Risk Analysis workshop where some of these issues emerged. It's one thing to set up a system of recommendations about Risk Analysis. It's quite another thing to turn those guidelines into rules, particularly rules that can be inspected for compliance and implementation. We need to make this easy for a laboratory to implement. We also need to make it possible for an inspector to objectively evaluate the laboratory's implementation. For all the power of Risk Analysis, if CMS constructs a system where laboratories are not clear on the effort they need to expend, and inspectors are not capable of judging the quality of the laboratory effort, we could have a repeat of the EQC problems.
Other resources:
- CLSI EP23A Laboratory QC Based on Risk Analysis, Clinical and Laboratory Standards Institute, 940 West Valley Road, Suite 1400, Wayne PA 19087.
- Six Sigma Risk Analysis, 2011, Westgard QC, 7614 Gray Fox Trail, Madison WI 53717
- Westgard Web essays on Risk Analysis