We see the bailouts of the bankers and Wall Street. We see the cutbacks and austerity in Greece and Ireland as those governments struggle to make good on the debts run up by their out-of-control banks. Private risks made into public losses.
But is the laboratory immune from the problem of "Too Big to Fail"?
One of the least popular arguments being made to the public is that governments - and ultimately taxpayers - must make up for the bad loans, risky investments, and reckless behavior of bankers, traders, subprime mortgage brokers, and other disreputable characters. Why should the public pay for private failures? The argument is that these companies, banks, and institutions are "Too Big to Fail" - that if they were allowed to fail, the panic that would follow could roil the financial markets and cause a ruinous domino effect. In other words, letting Lehman Brothers fail didn't just sink one firm; it set off a "run on the banks" effect. The damage from irrational panic, it's argued, could be worse and more widespread than the cost of simply backing up the bad debts of a few irresponsible banks.
Laboratory instruments, it can be argued, are moving toward that same point of being Too Big to Fail. As the size and cost of instruments rise, it becomes harder and harder to reject a device over a partial failure - that is, the failure of just one of its methods.
"Menu" is a key driver of instrumentation. While instruments a few decades ago might only have had a dozen methods, today's instruments can pack over 100 different tests into one big box. As the number of tests in one instrument goes up, the importance of any one method goes down. On an instrument with 100 methods, when one method is poor, the weight of the 99 "good" methods outweighs the problems with the single method. Most laboratories won't throw away a multitest instrument because of just one bad test.
The problem can actually be exacerbated by automation systems. Now, in addition to an expensive instrument, an equally or even more expensive automation track may be involved. So the problem of replacing a test method may mean not only replacing an instrument, but also replacing or reconfiguring an automation system connected to that instrument. Ironically, automation systems can pose huge obstacles to change in the laboratory. Sometimes an elaborate automation track can effectively lock a laboratory into a set of instruments, methods, and testing practices. The expense of changing to a new workflow becomes prohibitive.
So what happens when an instrument that is "Too Big to Fail" does in fact fail?
As previously mentioned, in the financial sector the failing banks, trading firms, etc., have been bailed out by governments around the world. In some cases, private lenders had to absorb the losses. But in many other cases, the public taxpayers are making good on those private debts.
The objection to this bailout is that private failures are being paid for with public monies. Of course, in the good times, success in the form of private profits remained in private hands. The public taxpayers are only seeing the downside of the financial roller coaster. It is a reverse Robin Hood kind of socialism: profits are privatized; losses are socialized.
In addition to the cost of making the public pay for a particular private failure, this type of arrangement creates an additional situation called "Moral Hazard." When people who take risks are rewarded when the risk pays off but suffer no consequences when it fails, they tend to take more and greater risks. While it's debatable whether the banks and brokers really understood the extent of the risk, or actually knew they would not have to pay for the massive failures, the precedent has now been set for the future. It appears that few in the financial sector have learned any lessons about their behavior. Record bonuses are still being handed out to traders who, just a few years ago, nearly drove the world economy off a cliff.
How does this "Moral Hazard" play out in the laboratory? If instrument manufacturers sell a bad method, but customers are then unable to change from that bad method, the laboratory "absorbs" the poor performance. Ultimately, the patient - again the public - is the one most impacted by those poor test results. And, in the absence of any consequences (such as loss of customers), manufacturers read the market signal as "poor methods are tolerated, so it is not as important to develop and offer better methods." Manufacturers appear to be rewarded for cutting corners and producing cheaper, poorer methods. Thus the apparent market message drives manufacturers to reduce quality in their diagnostic offerings. The real message, however, is that manufacturers have more power in the relationship with their customers - since it's so hard to change methods, instruments, and automation systems - and can essentially force customers to accept poor methods.
The possible saving grace in this scenario would have been active and aggressive government intervention. In our financial crisis, an energetic watchdog agency could have stepped in and prevented banks from engaging in risky subprime loans, for example. Instead, government agencies and private ratings agencies took a laissez-faire approach, or in some respects aided and abetted the problem by giving tacit or explicit approval to risky practices (e.g., giving triple-A ratings to securities of pooled subprime mortgages). Again, in parallel with the financial sector, method performance could be assured by the intervention of government or independent agencies; an engaged FDA or CMS could assert the importance of method performance in device approval and validation (note: the FDA currently doesn't assess anything beyond truth in labeling, and CMS seems to be dedicated to dumbing down QC with the scientifically unjustifiable "Equivalent QC" practices). Our professional organizations could also take a stand, but in practice, they usually adopt a vendor-neutral approach. Unfortunately, being vendor-neutral often leads to being performance-neutral.
How do we in the laboratory guard against "Too Big to Fail" instruments?
First, we monitor the quality of individual tests. This could mean many different types of assessment, including Sigma-metric assessment as well as QC Design. Start by defining the quality required for each test. Use performance measurements of imprecision and bias to assess whether the test method is hitting that target. Then follow up those results by optimizing the QC procedures based on that performance. This step lets us know where any failures are occurring, as well as the extent of the problem, if there is one.
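As a rough illustration of that first step, here is a minimal sketch of a Sigma-metric calculation, using the usual relationship Sigma = (TEa - |bias|) / CV with all terms in percent. The allowable total error and performance figures below are hypothetical, not drawn from any particular method.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (TEa - |bias|) / CV, with every term in percent.

    TEa is the allowable total error (the quality required for the test);
    bias and CV are the observed inaccuracy and imprecision of the method.
    """
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: a test with 10% allowable total error,
# 2% observed bias, and 2% observed CV.
sigma = sigma_metric(tea_pct=10.0, bias_pct=2.0, cv_pct=2.0)
print(f"Sigma-metric: {sigma:.1f}")  # -> Sigma-metric: 4.0
```

In QC design terms, the higher the Sigma, the simpler the control rules and the fewer control measurements needed; a method down around 3 Sigma or below demands stringent multirule QC and is exactly the kind of method that tempts a laboratory into the "bail out" described above.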
Second, remember the laboratory isn't powerless when faced with a poor method. In the worst-case scenario, a bad method can be improved by replicate measurements, multiple specimens, and/or adjusting the clinical use of the test to account for the poor performance. None of these options may be particularly palatable for a health system. It's not cheap to run tests multiple times to get a single useful result, and your clinicians won't be happy if you inform them that they have to change how they interpret the results from a method. But these are valid ways to adjust for poor performance.
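To see why replicates help, a back-of-the-envelope sketch: assuming purely random error and independent replicates, reporting the mean of n replicates shrinks the effective imprecision by roughly the square root of n (bias, of course, is untouched by replication). The numbers below are hypothetical.

```python
import math

def effective_cv(cv_pct, n_replicates):
    """Approximate CV of the mean of n independent replicates.

    Assumes purely random error; bias is unaffected by replication.
    """
    return cv_pct / math.sqrt(n_replicates)

# Hypothetical example: a 4% CV method, reported as the mean of duplicates.
print(f"{effective_cv(4.0, 2):.2f}%")  # -> 2.83%
# The cost: every reported result now consumes two tests' worth of reagent
# and instrument time, which is why this is a last resort, not a fix.
```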
Third, and this may be an easy observation to make in hindsight, the relationship between the laboratory and the instrument manufacturer should not completely shift the burden of method failure onto the laboratory. During the negotiation process on the purchase of a new instrument, provisions should be made for the possibility of method failure, perhaps stipulating that the manufacturer must cover any additional testing expenses in the event that one of their methods proves to be subpar. When a laboratory purchases a method from a vendor, some level of risk sharing should occur, something that should be explicitly spelled out in the contract.
Finally, don't be afraid to change methods. When a method is bad and it can't be improved, worked-around, compensated for, or accommodated, then it's time to get a new method. Yes, it's probably a hassle to get a new method or even a new instrument. But somewhere in the core mission of the laboratory is the idea that the test results generated by the lab should be accurate. If a bad method violates the core mission of the laboratory, either the method or the mission must go. The laboratory, one way or another, is making that choice.