By Sten Westgard, MS
The economic news in the US and around the world continues to worsen. This week (late January) we are hearing about yet more and larger bank failures, with the possible nationalization of many banks in the UK and the US.
While this may seem unrelated to healthcare and laboratory quality, it's worth remembering that we are currently marching down the path of importing, adopting, and implementing many of the same practices of the banking and financial world that have just revealed their dangerous inadequacy.
I'm talking, among other things, about Risk Management. ISO standards and CLSI guidelines are currently being developed that will bring Risk Management into the laboratory. The momentum behind these initiatives makes it hard to ask for a pause or a re-evaluation of their merits. But laboratories need to start thinking seriously - do we really want to start using the same techniques that just blew up Wall Street?
In other words, what are the Risks of Risk Management? It's a topic we covered a few years ago, but the utter devastation in the financial world makes it more relevant and urgent to look at Risk Management again.
Risk Management was and still is all the rage on Wall Street. Several recent articles in the New York Times and New York Times Magazine have discussed the role that Risk Management played in enabling - or possibly even in ameliorating - the unfolding financial crisis. Looking at what happened with Risk Management on Wall Street may help us decide whether we can avoid the same mistakes inside healthcare. Or we might decide that Risk Management can indeed help us in our efforts toward better quality.
On the surface, it's hard to argue against the concept of Risk Management. Who wouldn't want to manage their risk? Or try to identify and quantify their risk? But the devil is in the details - what's the process of identifying risks, quantifying them, and making judgments based on those results?
On Wall Street, one of the key Risk Management statistics is called Value at Risk (VaR for short). It's a group of mathematical models developed by various trading houses to estimate the possible losses of a financial portfolio. VaR was first developed at JPMorgan, then adopted by nearly every other trading firm. It has been the statistic of choice, calculated not only for individual traders but for entire firms. At the close of every market day, it's possible to know the VaR and get a dollar figure representing the firm's financial risk. Sounds great, right?
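For readers who want a concrete sense of what that number is, here is a minimal sketch of the simplest ("historical simulation") version of VaR in Python. The portfolio value and return history are invented for illustration; real trading desks used far more elaborate parametric and Monte Carlo variants.

```python
import numpy as np

def historical_var(returns, portfolio_value, confidence=0.95):
    """One-day Value at Risk by historical simulation.

    returns: array of past daily returns (0.01 == +1%)
    portfolio_value: current portfolio value in dollars
    confidence: 0.95 means losses should exceed the VaR on only
        ~5% of days - assuming the future resembles the past.
    """
    # The return at the worst (1 - confidence) tail of history.
    worst_case_return = np.percentile(returns, (1 - confidence) * 100)
    # Report VaR as a positive dollar loss.
    return -worst_case_return * portfolio_value

# Invented example: 500 days of simulated daily returns.
rng = np.random.default_rng(seed=1)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=500)
print(f"1-day 95% VaR: ${historical_var(daily_returns, 1_000_000):,.0f}")
```

Note what the model cannot do: a loss bigger than anything in its 500-day window simply does not exist for it.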
In healthcare, there isn't a VaR. Instead we have a related group of Risk Management techniques such as failure mode and effects analysis (FMEA), failure reporting, analysis, and corrective action systems (FRACAS), probabilistic risk assessment (PRA), and so on.
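To make the parallel concrete, the closest laboratory analogue to a single risk number is the FMEA risk priority number (RPN), conventionally the product of severity, occurrence, and detectability ratings. Here is a minimal sketch in Python, with hypothetical failure modes and invented ratings:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always caught) .. 10 (never caught)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the conventional FMEA score.
        return self.severity * self.occurrence * self.detectability

# Hypothetical laboratory failure modes with invented ratings.
modes = [
    FailureMode("Specimen mislabeled", 9, 3, 6),
    FailureMode("Reagent lot shift", 6, 4, 4),
    FailureMode("QC rule violation ignored", 8, 2, 7),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name:28s} RPN = {m.rpn}")
```

Every one of those three ratings is a human judgment; the arithmetic is trivial, but the inputs are exactly the kind of all-too-human assessments this essay worries about.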
Unfortunately, VaR turns out to have shortcomings, including an Achilles heel:
"'Risk modeling didn't help as much as it
should have, ' says Aaron Brown, a former risk manager at Morgan Stanley who
now works at AQR, a big quant-oriented hedge fund. A risk consultant named Marc
Groz says, 'VaR is a very limited tool.' David Einhorn, who founded Greenlight
Capital, a prominent hedge fund, wrote not long ago that VaR was 'relatively
useless as a risk-management tool and potentially catastrophic when its use
creates a false sense of security among senior managers and watchdogs. This is
like an air bag that works all the time, except when you have a car accident.'
Nassim Nicholas Taleb, the best-selling author of The Black Swan, has crusaded
against VaR for more than a decade. He calls it, flatly, 'a fraud.'"
[Source: Joe
Nocera, "Risk Mismanagement," New York Times Magazine, January 4, 2009]
The major problem with many risk models comes from the fact that they are created by humans. Nassim Nicholas Taleb puts it bluntly:
"Wall Street risk models, no matter how mathematically sophisticated, are bogus....And the essential reason is that the greatest risks are never the ones you can see and measure, but the ones you can't see and therefore can never measure. the ones that seem so far outside the boundary of normal probability that you can't imagine they could happen in your lifetime - even though, of course, they do happen, more often than you care to realize. Devastating hurricanes happen. Earthquakes happen. And once in a great while, huge financial catastrophes happen. Catastrophes that risk models always manage to miss." [Source: Ibid.]
Call it the Rumsfeld Effect - it's those "unknown unknowns" that are the most dangerous.
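Taleb's complaint is easy to demonstrate. The sketch below, with invented parameters, calibrates a normal-distribution VaR to data that actually comes from a fat-tailed distribution, then counts how often "reality" blows through the model's limit:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# "Reality": daily returns drawn from a fat-tailed Student-t
# distribution (df=3), scaled so typical moves are around 1%.
real_returns = 0.01 * rng.standard_t(df=3, size=100_000)

# The model: assume returns are normal and estimate a 99.9% VaR
# from the sample mean and standard deviation.
mu, sd = real_returns.mean(), real_returns.std()
z_999 = 3.090  # 99.9% quantile of the standard normal
var_999 = -(mu - z_999 * sd)

# A 99.9% VaR should be exceeded on ~0.1% of days.
observed = np.mean(-real_returns > var_999)
print(f"Expected exceedance rate: 0.10%, observed: {observed:.2%}")
```

The model is calibrated faithfully to the data it sees, and it still understates the frequency of extreme losses several-fold - roughly what VaR did for Wall Street in 2008.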
Worse still, even as we are unable to imagine some of the most dangerous risks, we are highly susceptible to underestimating the risks that we can imagine. Cost, staffing, and resource pressures all conspire to make it harder to recognize and address a real danger; it's always more profitable, efficient, and convenient to ignore a risk in the short term:
"Our financial catastrophe, like Bernard Madoff’s
pyramid scheme, required all sorts of important, plugged-in people to sacrifice
our collective long-term interests for short-term gain. The pressure to do this
in today’s financial markets is immense. Obviously the greater the market
pressure to excel in the short term, the greater the need for pressure from
outside the market to consider the longer term. But that’s the problem: there
is no longer any serious pressure from outside the market. The tyranny of the
short term has extended itself with frightening ease into the entities that
were meant to, one way or another, discipline Wall Street, and force it to
consider its enlightened self-interest."
[Source: Michael Lewis and David Einhorn,
"The End of the Finanical World as We Know It", The New York Times,
January 3, 2009]
Laboratories are subject to similar influences on their practices when highly regarded organizations such as ISO, CLSI, CMS, and FDA endorse a new approach such as Risk Management. When professional organizations jump on board, it almost becomes a race to see who can become the “early adopter” and leader in the new area. While everyone’s intentions are good, the outcome can still be bad, particularly when a new approach provides a reason for discontinuing or eliminating an older “traditional” approach. In this context, we should exercise some caution with the new CLSI guidelines for the use of Risk Assessment in laboratory quality control. Those documents (EP18, EP22, EP23) are expected to be published soon, and laboratories should proceed with the same care as when adopting any other new approach or technology.
Despite the now-obvious failure of Risk Management, and the even worse failure of management and regulation, we are hearing about an even greater emphasis being placed on Risk Management in today's business and financial world. Now that the catastrophe has occurred, businesses feel more pressure than ever to assess and manage their risks. Who can blame them?
Part of the continued faith in Risk Management is a separation of the model and the man. Take Gregg Berman of RiskMetrics, the Risk-Management consulting firm that is the descendant of the original creators of VaR:
"'Obviously, we are big proponents of risk models,' he said. 'But a computer does not do risk modeling. People do it. And people got overzealous and they stopped being careful. They took on too much leverage. And whether they had models that missed that, or they weren't paying enough attention, I don't know. But I do think that this was much more a failure of management than of risk management. I think blaming models for this would be very unfortunate because you are placing blame on a mathematical equation. You can't blame math,' he added with some exasperation." [Source: Nocera, op. cit.]
Mr. Berman has a valid point. It's not the fault of an equation that Wall Street failed. It's the way that boards and chief executives treated those numbers - gaming the data inputs, massaging the answers, or placing undue faith in the models. If all that Risk Management produces is a number, the real key is ensuring that people make the right interpretation of that number. For companies that tweaked their data to skew the results: garbage in, garbage out. For those who forgot that risk models are built on all-too-human assessments of possible futures, complacency and misplaced faith came back to haunt them.
With every equation, statistic, or tool, the potential for misuse exists. We have described several times the problems with the wrong implementation of quality control practices, as well as the wrong way to apply "Westgard Rules." Certainly, we don't believe that just because a laboratory might misuse the "Westgard Rules," the rules must be thrown out. The same can be said of Sigma-metrics: we've detailed in several essays (see here and here) that an exclusive reliance on a single Sigma-metric assessment of a method may be unwarranted. But the possibility that someone could distort the quality of their laboratory by the incorrect use of Sigma-metrics doesn't mean that Six Sigma has to be eliminated from the laboratory. In the same way, Risk Management can be useful if it is properly assessed, calculated, and interpreted. But if abuse and misuse are chronic and correction unachievable, then we must consider the technique impractical for laboratories and find another solution.
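The Sigma-metric itself is a good reminder of how simple - and how dependent on honest inputs - such summary numbers are. Here is a minimal sketch of the standard calculation, Sigma = (TEa - |bias|) / CV, with invented performance figures:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric of a method: (TEa - |bias|) / CV.

    tea_pct:  allowable total error (%), set by the quality requirement
    bias_pct: observed bias of the method (%)
    cv_pct:   observed imprecision (CV) of the method (%)
    """
    return (tea_pct - abs(bias_pct)) / cv_pct

# Invented example: a 10% quality requirement.
print(f"Sigma = {sigma_metric(10.0, 2.0, 2.0):.1f}")  # honest inputs -> 4.0
print(f"Sigma = {sigma_metric(10.0, 0.5, 1.5):.1f}")  # optimistic inputs -> 6.3
```

Understate the bias or the CV and the metric inflates accordingly: garbage in, garbage out, in exactly the sense described above.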
To return to the terms more specific to the laboratory: FMEA, FRACAS and other Risk Management tools can be quite helpful. But the techniques must be executed and interpreted correctly. For laboratories that can't make mature judgments about their risks, however, or refuse to devote the proper resources to risk mitigation, Risk Management has the potential to provide false comfort.
Somehow, we need to find a middle ground - a better way to assess the benefits and hazards of Risk Management, or of any management technique, for that matter. Beyond that, we also need to coldly assess our ability to properly interpret and act upon the information that Risk Management produces.
That's where I think the laboratory actually has an advantage over most other organizations. When it comes to judging Risk Management, many businesses tend to enshrine numbers as golden idols, but laboratories know that numerical results aren't as perfect as they seem. Laboratory professionals understand that numbers have variation and bias. They know that every number comes with some uncertainty, even numbers derived from Risk Assessments.
Laboratories are probably better suited to judge the strengths and vulnerabilities of Risk Management tools because they make similar judgments about tests and methods on a regular basis. They know that all tests produce numbers, but some numbers are better than others. When results don't reflect the real condition of the patient, those methods are counter-productive. Likewise, if Risk Management gives a laboratory a false sense of security, something needs to be done.
Ultimately, each laboratory will have to decide if they can do Risk Management properly, or if they need to mitigate the risks of Risk Management with additional quality management techniques.