What events caused legendary comedian Joan Rivers to die after undergoing an endoscopic procedure? While there has been a great deal of speculation surrounding the death of Ms. Rivers following her August 28 procedure at Yorkville Endoscopy (of which I have no particular inside knowledge), the various theories serve to illustrate the phenomenon of outcome bias in medicine, and in particular in healthcare systems. Of course, it is not impossible that the clinic, staff, specialists, equipment, decision making, and timeliness of action were all without flaws; sometimes, despite all of this, patients still suffer adverse outcomes. That seems unlikely in this case, however, since Dr. Cohen has stepped down from his leadership role at Yorkville. So let's discuss how flawed thinking about outcomes, quality, and safety can contribute to devastating events.

First, many have wondered whether an anesthesiologist was involved. So far, this has been neither confirmed nor denied, but it is increasingly common for endoscopy centers to operate without anesthesiologists, and even to consider giving propofol sedation with a robot. The argument for doing so is generally that catastrophes such as this one are rare, so the expense of having an expert physician present seems excessive. (I doubt it ever seems excessive to a patient whose life is saved by that expert.) The decision to have expert physicians involved in patient care is ultimately a choice made by the healthcare system, in this case a freestanding procedural center. The executives who make tough choices about when and how to spend their financial resources must weigh risks and benefits, and sometimes they decide to gamble, hoping to be lucky rather than committing to high levels of safety.
Here, an anesthesiologist reviews some possible medical scenarios that could have led to Rivers' death. And here, a Forbes writer suggests that "VIP" treatment – which usually involves some kind of deviation from typical practice in order to give "special care" – may have played a role. This influence of emotion on medical decision making is called visceral bias, and it can take many forms, as I've described here.
As Dr. Sibert discusses in her article, if laryngospasm was the cause, it is possible that the necessary treatment (succinylcholine) was not available. This may be because stocking succinylcholine also requires stocking additional emergency supplies, for use in the unlikely event that a patient requires succinylcholine and then has the double misfortune of suffering malignant hyperthermia as a result of exposure. These are costs to the center, and although such safety provisions are far less costly than full-time anesthesiology services, they represent a tangible expense that – if the clinic is lucky – will never demonstrate a tangible return on investment.
In both cases, the general underlying psychology is that "we've always done it this way, and we've never had a problem." For those interested in a statistical discussion, see the JAMA paper "If Nothing Goes Wrong, Is Everything All Right?" The basic premise is that a finding of "zero" events in some number of observations "n" is often interpreted with a qualitative impact far in excess of its quantitative meaning. The authors propose several reasons for this, including:
- People tend to ignore the size of the denominators on which rates are based. For instance, a rate of 10% is given the same weight whether it is observed in 20 or 200 cases.
- People tend to focus on numerators. Perhaps a zero numerator suggests (falsely) that an event is impossible. Therefore, when faced with a theoretical risk that has not yet manifested – in other words, when something possible has not yet happened – people may come to expect that it cannot happen in the future.
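The quantitative point behind that JAMA paper is often summarized as the "rule of three": if zero events are observed in n trials, the upper limit of the 95% confidence interval for the true event rate is roughly 3/n. As a rough sketch (the function name is mine, not from the paper), the exact bound comes from asking what is the largest event rate that would still give a 5% chance of seeing zero events in n trials:

```python
def zero_event_upper_bound(n, confidence=0.95):
    """Upper confidence bound on the true event rate when 0 events
    have been observed in n independent trials.

    Solves (1 - p)**n = 1 - confidence for p: the largest rate that
    would still leave a (1 - confidence) chance of seeing zero events.
    """
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# "We've done this many cases and never had a problem" still leaves
# room for a surprisingly high true complication rate:
for n in (20, 200, 2000):
    print(f"n={n:4d}: exact 95% upper bound = {zero_event_upper_bound(n):.4f} "
          f"(rule of three: {3/n:.4f})")
```

The numbers make the denominator point above concrete: a spotless record over 20 cases is still statistically consistent with a true complication rate of nearly 14%, while the same spotless record over 2,000 cases bounds the rate near 0.15%. "Never had a problem" means very different things at different denominators.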
Outcome bias, as I’ve written before here, is judging a decision based on its outcome alone (or, as discussed above, on the numerator), rather than on the quality of the thinking at the time the decision was made (taking into account the possibility of an adverse event). I’ve illustrated this with the simple example of a drunk driver who happens to make it home safely, without killing himself or others, but has nonetheless made a reckless decision. Using anesthesia medications like propofol without specialists who are expert both in their use and in the management of their effects (such as cessation of breathing, which is extremely common) is reckless. Failure to stock proper emergency equipment is likewise reckless. It is tempting to describe these situations as “accidents waiting to happen”, except that they are not accidents.
Why would the medical establishment behave so recklessly? Generally, the normalization of risky behavior and of deviance from best practice (as with the space shuttle Challenger explosion) occurs in a gradual, stepwise fashion that would never be tolerated in a single leap. Behavior that is at least theoretically risky – and then demonstrated to be genuinely so, because it produces “near misses”, close calls, and occasional real events that are dismissed as bad luck – somehow evolves to be accepted as routine. Today in medicine, there is an enormous push to cut costs, increase efficiency, minimize the use of specialists, marginalize physician care whenever possible, declare screening and diagnostic testing “unnecessary” after the results are known (see hindsight bias), and so on.
All gains come with tradeoffs, and lower costs will often come with a reduced safety margin. Unfortunately, high profile celebrity deaths are sometimes the most powerful catalysts to bring such conversations to the forefront. So the question arises, from a public health perspective: how much safety can we afford?
In your practice, do colleagues offer “we’ve always done it this way, and we’ve never had a problem” as evidence to support a potentially unsafe practice? Tell me about it in the comments or drop me an email.